Record fields: id, title, abstract, authors, published_date, link, markdown.
2307.12038
Optimization of a Runge-Kutta 4th Order Method-based Airbrake Control System for High-Speed Vehicles Using Neural Networks
The Runge-Kutta 4th Order (RK4) technique is extensively employed in the numerical solution of differential equations for airbrake control system design. However, its computational efficiency can become a limitation for high-speed vehicles that experience intricate aerodynamic forces. A novel technique for improving the RK4-based airbrake control code using a neural network is presented. The neural network is trained on numerous aspects of the high-speed vehicle as well as the current status of the airbrakes; the training data were generated from traditional RK4-based simulations, and the trained network can predict the state of the airbrakes for any given state of the rocket in real time. The proposed approach is demonstrated on a high-speed airbrake control system, achieving comparable or better performance than the traditional RK4-based system while significantly reducing computational time by reducing the number of mathematical operations. The proposed method can adapt to changes in flow conditions and optimize the airbrake system in real time.
Tanvi Agrawal, Utkarsh Anand
2023-07-22T10:12:53Z
http://arxiv.org/abs/2307.12038v1
# Optimization of a Runge-Kutta 4th Order Method-based Airbrake Control System for High-Speed Vehicles Using Neural Networks

###### Abstract

The Runge-Kutta 4th Order (RK4) technique is extensively employed in the numerical solution of differential equations for airbrake control system design. However, its computational efficiency can become a limitation for high-speed vehicles that experience intricate aerodynamic forces. A novel technique for improving the RK4-based airbrake control code using a neural network is presented. The neural network is trained on numerous aspects of the high-speed vehicle as well as the current status of the airbrakes; the training data were generated from traditional RK4-based simulations, and the trained network can predict the state of the airbrakes for any given state of the rocket in real time. The proposed approach is demonstrated on a high-speed airbrake control system, achieving comparable or better performance than the traditional RK4-based system while significantly reducing computational time by reducing the number of mathematical operations. The proposed method can adapt to changes in flow conditions and optimize the airbrake system in real time.

Keywords: Airbrakes, Control System, High-speed vehicle, Runge-Kutta 4th Order, Neural Network

## I Introduction

Airbrakes are essential safety mechanisms for high-speed vehicles, such as aircraft and sounding rockets. They help to control and reduce the speed of the vehicle by converting its kinetic energy into other forms of energy. Airbrake control systems rely on solving complex differential equations that describe the physics of the vehicle's motion and aerodynamic forces. The Runge-Kutta 4th Order (RK4) method is a prevalent numerical approach for solving these differential equations in airbrake control system design[1]. However, the computational efficiency of the RK4 method can be limited, especially for high-speed vehicles with complex aerodynamic forces[2].

In recent times, there has been a remarkable surge in the field of Artificial Intelligence, with neural networks emerging as formidable instruments for addressing intricate problems[3]. The objective of this study is to furnish empirical substantiation of the effectiveness of the proposed methodology in governing high-speed airbrakes. We conduct a comparative analysis of the performance of a neural network-based airbrake control system against that of a conventional RK4-based system and evaluate the findings. Moreover, we investigate the adaptability of the proposed technique to varying flow conditions, as well as its ability to optimize the airbrake system in real time.

This research proposes a novel and effective method for optimizing airbrake control systems that use the RK4 method by incorporating neural networks. This method has the potential to reduce computational time and improve the accuracy of airbrake control systems for high-speed vehicles. Furthermore, it can be applied to other systems requiring real-time control and optimization.

## II Background Theory

### _Runge-Kutta 4th Order Method_

The Runge-Kutta 4th Order (RK4) method is a numerical approach employed in various scientific and engineering domains, such as physics, chemistry, and control systems. It is particularly valuable for solving ordinary differential equations (ODEs) when analytical solutions are either unavailable or challenging to compute.
The RK4 method involves generating four intermediate estimates of the dependent variable, which are subsequently weighted and combined to obtain the final estimate at the next time step [4]. Consider a dependent variable y(t), representing the value at time t. The RK4 method determines the value of y at the next time step, t + h, through a series of equations: first, \(k_{1}\) is computed as the product of the step size h and the rate-of-change function f(t, y) evaluated at the current time and state [5], and the remaining estimates follow from it:

\[k_{1}=h\,f(t,y) \tag{1}\]

\[k_{2}=h\,f(t+h/2,\,y+k_{1}/2) \tag{2}\]

\[k_{3}=h\,f(t+h/2,\,y+k_{2}/2) \tag{3}\]

\[k_{4}=h\,f(t+h,\,y+k_{3}) \tag{4}\]

\[y(t+h)=y(t)+(k_{1}+2k_{2}+2k_{3}+k_{4})/6 \tag{5}\]

where f(t, y) is the function that describes the rate of change of y with respect to t, and h is the step size. The intermediate estimates \(k_{1}\), \(k_{2}\), \(k_{3}\), and \(k_{4}\) are calculated at different points within the time step, and their weighted sum is used to update the value of y at the next time step[6].

Notably, the RK4 method exhibits fourth-order accuracy, meaning that the error in the approximation is proportional to the fourth power of the step size h. This characteristic renders the RK4 method highly accurate and efficient, particularly for systems characterized by complex dynamics[7]. The widespread use of the RK4 method in scientific and engineering disciplines underscores its significance. Researchers extensively rely on the RK4 method to simulate and analyze intricate systems that lack analytical solutions or present challenges in their calculation[8]. Within the domain of airbrake control systems for high-speed vehicles, the RK4 method serves as a valuable tool for modeling system behavior and predicting responses to diverse control inputs.

To design a more effective and efficient airbrake control system, researchers can combine the RK4 method with a neural network-based optimization approach. The neural network can be trained to optimize the parameters of the RK4 method, such as the step size and the number of time steps, to achieve the desired performance characteristics for the airbrake system. This integration presents an opportunity to enhance the capabilities of airbrake control systems and attain improved levels of safety and efficiency.
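For concreteness, the update in Eqs. (1)-(5) can be written as a few lines of Python. This is a minimal illustrative sketch: the decaying test equation below is a stand-in, not the airbrake dynamics model used in the paper.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical RK4 step, implementing Eqs. (1)-(5)."""
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Illustrative test problem: dy/dt = -2y, y(0) = 1, exact solution exp(-2t).
f = lambda t, y: -2.0 * y
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):          # integrate to t = 1
    y = rk4_step(f, t, y, h)
    t += h
print(y, np.exp(-2.0))        # agree to roughly 1e-10, reflecting the O(h^4) global error
```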
### _Artificial Neural Networks_

Neural networks, drawing inspiration from the intricate structure and functionality of the human brain, represent a category of machine learning algorithms. These networks are comprised of interconnected nodes, akin to neurons, which process information and transmit signals to other neurons within the network. At the core of a neural network lies the fundamental unit known as the perceptron. The perceptron receives multiple inputs, assigns a weight to each input, and passes the weighted sum through an activation function to generate an output. Through the aggregation of multiple perceptrons, a multi-layer perceptron (MLP) can be constructed, which comprises one or more hidden layers positioned between the input and output layers of the neural network architecture [10]. Training a neural network involves adapting the perceptron weights to minimize the disparity between the obtained output and the desired output (known as the target). Typically, this is accomplished with the well-established backpropagation algorithm. By computing the gradient of the error function with respect to the weights, backpropagation facilitates the iterative adjustment of these weights to progressively align the network's output with the target [11].

Figure 1: Deep Neural Network, from [9]

Neural networks have demonstrated their efficacy in diverse problem domains, including image recognition, natural language processing, and control systems [12]. In the context of airbrake control systems for high-speed vehicles, neural networks can be used to optimize the parameters of the control algorithm, such as the gain values and the time constants. This can improve the performance of the airbrake system by making it more responsive and efficient, while also reducing the risk of instability or oscillation. The Runge-Kutta 4th Order Method (RK4) is widely recognized as a prevalent numerical technique for solving differential equations and can be effectively employed to simulate the dynamics of the airbrake system. By synergistically integrating the RK4 method with a neural network-driven optimization approach, it becomes feasible to devise an airbrake control system that exhibits superior effectiveness and efficiency, specifically tailored for high-speed vehicles.

### _Airbrakes_

Airbrakes are a type of braking system that uses compressed air to slow down or stop a vehicle. They are commonly used in high-speed vehicles, such as trains and commercial aircraft, where traditional friction brakes may not be effective due to their limited ability to dissipate heat[13]. Airbrakes work by releasing compressed air from the braking system, which applies force to the braking mechanism and slows down the vehicle. The amount of braking force can be controlled by adjusting the pressure of the compressed air, and the braking can be applied to specific wheels or sections of the vehicle to optimize its stopping performance[14].

Airbrakes can be controlled by a variety of systems, including mechanical, hydraulic, and electronic ones. In high-speed vehicles, electronic airbrake control systems are often used due to their precision and responsiveness. These systems use sensors and computer algorithms to monitor the vehicle's speed and acceleration, and adjust the air pressure in the braking system to achieve the desired braking performance[15]. The optimization of airbrake control systems for high-speed vehicles is an active area of research, as it can have significant impacts on the safety and efficiency of these vehicles. By using numerical methods like the RK4 method and neural network-based optimization approaches, it is possible to design airbrake control systems that are more effective and efficient, while also reducing the risk of instability or oscillation[16].

## III Methodology

### **Data Collection**

The dataset used in this study consists of 3699 instances of airbrake state data computed using the RK4-based MATLAB code on flight data from various sounding rocket flights. Each instance contains five input features and two output features. The input features are altitude, velocity, and acceleration along the X, Y, and Z axes, while the output features are the state of the airbrake system (Open or Closed). The data was preprocessed by scaling and batch normalizing the input features using batch sizes of 8.
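As a rough sketch of this preprocessing step (the exact scaling used in the paper is not specified, so standardization is an assumption here, and the tensors below are stand-ins for the real flight data):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for the real dataset: 3699 rows of [altitude, velocity, accel X/Y/Z]
# and a binary open/closed label.
X = torch.randn(3699, 5)
y = torch.randint(0, 2, (3699,))

X = (X - X.mean(dim=0)) / X.std(dim=0)     # feature scaling (standardization assumed)
loader = DataLoader(TensorDataset(X, y), batch_size=8, shuffle=True)
```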
### **Neural Network Architecture**

The neural network used in this study has **10** hidden linear layers, with **[2048, 1024, 512, 256, 128, 64, 32, 16, 8, 4]** neurons in the respective layers. The input layer has **5** neurons corresponding to the 5 input parameters in the dataset, and the output layer has **2** neurons, corresponding to the two possible states of the airbrakes (1 for Open and 0 for Closed). The **Rectified Linear Unit (ReLU)** activation function is used for all hidden layers because it is computationally simple (a major objective of adopting neural networks was to reduce computational complexity) and less prone to vanishing gradients. The **Softmax** activation function is used for the output layer to convert the output values into probabilities.

Figure 2: Neural Network Architecture

### **Training & Optimization**

#### III-C1 Optimizer

**Adam** was chosen as the optimizer because of its fast computation times across a wide range of applications.

#### III-C2 Loss Function

**Cross-Entropy Loss** was chosen as the loss function for training the neural network. To address the high class imbalance within our dataset, a weighted loss function was employed, with weights of **0.05** and **0.90** assigned to the open and closed states, respectively. These weights were determined by computing the ratios between the number of data points in each class and the total number of data points across all classes. Additionally, the dataset was augmented using the Synthetic Minority Oversampling Technique (**SMOTE**).

#### III-C3 Batch Size and Iterations

The neural network was trained for a total of **100** epochs with a batch size of **32**. The learning rate was set to **0.0003**, and the momentum parameter to **0.87**.

### **Evaluation**

The performance of the neural network was evaluated using the **F1 score** and **Binary Accuracy** metrics. The dataset was split into training, testing, and validation sets with a ratio of **7:2:1**. The final model was evaluated on the testing set, and the results were compared with the traditional Runge-Kutta method.
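A minimal PyTorch sketch of the architecture and loss described above. The mapping of the 0.05/0.90 class weights to class indices and the interpretation of the quoted momentum are assumptions; note that PyTorch's `CrossEntropyLoss` applies the softmax internally, so the model emits raw logits.

```python
import torch
import torch.nn as nn

# Hidden widths as described: 5 inputs -> 10 ReLU hidden layers -> 2 output logits.
widths = [5, 2048, 1024, 512, 256, 128, 64, 32, 16, 8, 4]
layers = []
for i in range(len(widths) - 1):
    layers += [nn.Linear(widths[i], widths[i + 1]), nn.ReLU()]
layers.append(nn.Linear(widths[-1], 2))   # logits for {Closed = 0, Open = 1}
model = nn.Sequential(*layers)

# Weighted cross-entropy for the class imbalance. The 0.05/0.90 weights are
# quoted from the paper; their assignment to class indices here is an assumption.
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([0.90, 0.05]))

# Interpreting the quoted momentum of 0.87 as Adam's beta1 (an assumption).
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.87, 0.999))

def train_step(x, y):        # x: (batch, 5) float features, y: (batch,) long labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```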
## IV Results & Discussions

The accurate prediction of airbrake system states is crucial in the design and control of high-speed vehicles. This study proposes a novel approach to predicting the state of airbrakes for sounding rockets using a neural network, which outperformed the traditional Runge-Kutta method in terms of accuracy, precision, recall, and F1 score. The trained neural network was highly effective in predicting the state of the airbrakes, achieving an F1 score of 0.9447, surpassing the traditional Runge-Kutta method's accuracy of 0.924. The precision, recall, and F1 score metrics were also significantly improved with the proposed method.

The authors used the Runge-Kutta 4th order method to solve the differential equation governing the airbrake system, which is a widely used method for solving differential equations. However, the neural network-based approach outperformed the traditional method in terms of accuracy and computational efficiency. The neural network was trained using a dataset of input-output pairs generated from the MATLAB code, and the backpropagation algorithm was used to train the network. The neural network was designed to use the same inputs and outputs as the MATLAB code, making it a feasible replacement for the traditional method.

The study's findings have significant implications for high-speed vehicle design. The improved computational efficiency of the neural network-based approach could make it possible to implement more complex airbrake control systems that were previously infeasible due to computational limitations. This could lead to the development of more sophisticated and efficient airbrake control systems, resulting in improved safety and performance for high-speed vehicles. Moreover, the study observed a significant reduction in the occurrence of false positives when using the neural network, which is an essential consideration in high-speed vehicle design: false positives can lead to unnecessary braking, resulting in reduced performance and potentially dangerous situations. The reduction in false positives is, therefore, a significant improvement in the airbrake control system's overall performance. In summary, the proposed method outperforms the traditional Runge-Kutta method on accuracy, precision, recall, and F1 score; its improved computational efficiency enables more complex and efficient airbrake control systems to be developed; and the observed reduction in false positives supports safe and efficient operation.

## V Conclusion

Aircraft systems play a critical role in aviation safety and are subject to continuous development and improvement. One area that has received considerable attention is airbrake ejection, an essential component of an aircraft's control system. The traditional approach to airbrake ejection involves using MATLAB code to control the system; this paper proposes a new approach that utilizes neural networks. The principal aim of this research was to establish the superior performance of the neural network-based approach in comparison to the conventional MATLAB method, specifically concerning accuracy, speed, and efficiency. The findings exhibited that the neural network achieved precise predictions of the airbrake control system outputs across a diverse range of inputs. This was attained by training the neural network on a dedicated set of training data, followed by rigorous testing on an independent test dataset.

Comparative analysis of the computational time required by the MATLAB code and the neural network revealed that the neural network was significantly faster. This is because neural networks are capable of parallel processing, enabling them to perform multiple computations simultaneously[17]. In contrast, MATLAB code operates sequentially, which limits its speed and efficiency. Additionally, neural networks can learn from the data they are fed, making them adaptable and efficient. The findings of this research bear noteworthy implications for the aviation sector. The integration of neural networks in airbrake control holds the potential to enhance the speed and precision of aircraft management, consequently yielding advancements in safety and efficiency.
Subsequent studies could explore the feasibility of implementing this approach in real-time applications, which would necessitate the development of dedicated hardware capable of executing parallel computations. Moreover, the application of neural networks can be extended to other critical aircraft systems, including engine control systems, flight control systems, and navigation systems. Such systems stand to gain notable advantages in accuracy, speed, and efficiency through the incorporation of neural network techniques.

Therefore, this study has successfully established the efficacy of employing a neural network-based approach for airbrake ejection in aircraft systems. The obtained results indicate the superior performance of the neural network when compared to the conventional MATLAB code, showcasing heightened accuracy, faster processing speed, and improved efficiency. Subsequent research endeavors should continue to explore the applications of neural networks in aviation, while concurrently examining the feasibility of implementing this approach in real-time scenarios.

Figure 3: Loss vs Epochs Graph

Ultimately, the utilization of neural networks holds transformative potential for the aviation industry, with the capacity to significantly enhance safety and operational efficiency.
2305.12133
Loss Spike in Training Neural Networks
In this work, we investigate the mechanism underlying loss spikes observed during neural network training. When the training enters a region with a lower-loss-as-sharper (LLAS) structure, the training becomes unstable, and the loss exponentially increases once the loss landscape is too sharp, resulting in the rapid ascent of the loss spike. The training stabilizes when it finds a flat region. From a frequency perspective, we explain the rapid descent in loss as being primarily influenced by low-frequency components. We observe a deviation in the first eigendirection, which can be reasonably explained by the frequency principle, as low-frequency information is captured rapidly, leading to the rapid descent. Inspired by our analysis of loss spikes, we revisit the link between the maximum eigenvalue of the loss Hessian ($\lambda_{\mathrm{max}}$), flatness and generalization. We suggest that $\lambda_{\mathrm{max}}$ is a good measure of sharpness but not a good measure for generalization. Furthermore, we experimentally observe that loss spikes can facilitate condensation, causing input weights to evolve towards the same direction. And our experiments show that there is a correlation (similar trend) between $\lambda_{\mathrm{max}}$ and condensation. This observation may provide valuable insights for further theoretical research on the relationship between loss spikes, $\lambda_{\mathrm{max}}$, and generalization.
Xiaolong Li, Zhi-Qin John Xu, Zhongwang Zhang
2023-05-20T07:57:15Z
http://arxiv.org/abs/2305.12133v2
# Loss Spike in Training Neural Networks

###### Abstract

In this work, we study the mechanism underlying loss spikes observed during neural network training. When the training enters a region with a smaller-loss-as-sharper (SLAS) structure, the training becomes unstable and the loss increases exponentially once the landscape is too sharp, producing the rapid ascent of the loss spike. The training becomes stable when it finds a flat region. The deviation in the first eigen direction (the one corresponding to the maximum eigenvalue of the loss Hessian, \(\lambda_{\max}\)) is found to be dominated by low-frequency. Since low-frequency is captured very fast (the frequency principle), the rapid descent is then observed. Inspired by our analysis of loss spikes, we revisit the link between \(\lambda_{\max}\), flatness, and generalization. For real datasets, low-frequency is often dominant and well-captured by both the training data and the test data. Then, a solution with good generalization and a solution with bad generalization can both learn low-frequency well; thus, they have little difference in the sharpest direction. Therefore, although \(\lambda_{\max}\) can indicate the sharpness of the loss landscape, the deviation in its corresponding eigen direction is not responsible for the generalization difference. We also find that loss spikes can facilitate condensation, i.e., input weights evolve towards the same direction, which may be the underlying mechanism for why the loss spike improves generalization, rather than simply controlling the value of \(\lambda_{\max}\).

## 1 Introduction

Many experiments have observed a phenomenon called the edge of stability (EoS) (Wu et al., 2018; Cohen et al., 2021; Arora et al., 2022): during neural network (NN) training, the maximum eigenvalue of the loss Hessian, \(\lambda_{\max}\), progressively increases until it reaches \(2/\eta\) (\(\eta\) is the learning rate), and then stays around \(2/\eta\). At the EoS stage, the loss continues to decrease, sometimes with slight oscillation. Training with a larger learning rate leads to a solution with smaller \(\lambda_{\max}\). Since \(\lambda_{\max}\) is often used to indicate the sharpness of the loss landscape, a larger learning rate results in a flatter solution. Intuitively, as shown in Fig. 1, a flat solution is more robust to perturbation and has better generalization performance (Keskar et al., 2016; Hochreiter and Schmidhuber, 1997). Therefore, training with a larger learning rate would achieve better generalization performance. In this work, through the study of loss spikes, we argue that this intuitive analysis (Fig. 1), with \(\lambda_{\max}\) as the sharpness measure, encounters difficulty in NNs.

In a neural network training process, one may sometimes observe a loss spike, where the loss rapidly ascends and then descends to the value before the ascent. Typical examples are shown in Fig. 2. We show a special loss landscape structure underlying the loss spike, which we call a smaller-loss-as-sharper (SLAS) structure. In the SLAS structure, the training is driven by descending the loss while entering an increasingly sharp region. Once the sharpness is too large, the loss ascends exponentially fast. To explain why the loss can descend so fast, we provide a frequency-perspective analysis. We find that the deviation in the ascending stage is dominated by low-frequency components.
Based on the frequency principle (Xu et al., 2019, 2020), which states that low-frequency converges faster than high-frequency, we rationalize the fast descent. The study of loss spikes provides an important piece of information: the deviation in the first eigen direction is dominated by low-frequency. We then further examine the link between \(\lambda_{\max}\), flatness, and generalization. In practical datasets, low-frequency information is often dominant and shared by both the training and the test datasets. Therefore, the training can learn low-frequency well. Since the sharpest direction, indicated by the maximum eigenvalue of the loss Hessian, relates more to the low-frequency, a solution with good generalization and a solution with bad generalization have little difference in the sharpest direction, as verified by a series of experiments. Hence, \(\lambda_{\max}\) with the intuitive explanation in Fig. 1 encounters difficulty in understanding the generalization of neural networks, such as why a larger learning rate results in better generalization for networks with EoS training.

We also find that a loss spike can facilitate condensation, that is, the input weights of different neurons in the same layer evolve towards the same direction, which reduces the network's effective size. Condensation is a non-linear feature learning phenomenon in neural networks, which may be the underlying mechanism for why the loss spike improves generalization (He et al., 2019; Jastrzebski et al., 2017), rather than simply controlling the value of \(\lambda_{\max}\). This work studies the loss spike from the landscape perspective and the frequency perspective, and revisits the relation between generalization and flatness as defined by the maximum eigenvalue of the loss Hessian. This work also conjectures that the loss spike may improve generalization via the facilitation of condensation.

Figure 1: Schematic illustration of an ideal explanation for why flat solutions generalize well (Keskar et al., 2016).

## 2 Related works

Previous works (Cohen et al., 2021; Wu et al., 2018; Xing et al., 2018; Ahn et al., 2022; Lyu et al., 2022; Wang et al., 2022) conduct an extensive study of the EoS phenomenon under various settings. Lewkowycz et al. (2020) observe that when the initial sharpness exceeds \(2/\eta\), gradient descent "catapults" into a stable region and converges. Arora et al. (2022) analyze progressive sharpening and the edge of stability phenomenon under specific settings, such as normalized gradient descent. Damian et al. (2022) show that third-order terms bias training towards flatter minima, in order to understand EoS. Ma et al. (2022) attribute the progressive sharpening to a subquadratic structure of the loss landscape, i.e., the maximum eigenvalue of the loss Hessian is larger when the loss is smaller in a direction. They also propose a flatness-driven motion to study the EoS stage, that is, the training moves towards a flatter minimum, such that a fixed flatness can correspond to points with smaller and smaller loss values due to the subquadratic property. We call this structure a smaller-loss-as-flatter (SLAF) structure. The SLAF structure should lead to a continuous decrease in the loss rather than a loss spike. Agarwala et al. (2022) use a quadratic regression model with MSE to study EoS; similarly, in their model, the loss spike cannot happen. Ma et al. (2022) study the loss spike from the perspective of adaptive gradient optimization algorithms, while we focus on the loss landscape structure and use gradient descent training in this paper.
A series of works link the generalization performance of solutions to the landscape of loss functions through the observation that flat minima tend to generalize better (Hochreiter and Schmidhuber, 1997; Wu et al., 2017; Ma and Ying, 2021). Algorithms that favor flat solutions have been designed to improve the generalization of the model (Izmailov et al., 2018; Chaudhari et al., 2019; Lin et al., 2018; Zheng et al., 2021; Foret et al., 2020). On the other hand, Dinh et al. (2017) show that a sharp minimum can also generalize well, by rescaling the parameters at a flat minimum with ReLU activation. In this work, we study the relationship between flatness and generalization from a new perspective, i.e., the frequency perspective, without restrictions on the activation function.

Luo et al. (2021); Zhou et al. (2022) identify the linear regime and the condensed regime of parameter initialization for two-layer and three-layer wide ReLU NNs, which determines the final fitting result of the network. In the linear regime (Jacot et al., 2018; Arora et al., 2019), the training dynamics of NNs are approximately linear and similar to a random feature model. On the contrary, in the condensed regime, active neurons are condensed at several discrete orientations. At this point, the network is equivalent to another network with a reduced width, which may explain why NNs outperform traditional algorithms (Breiman, 1995; Zhang et al., 2021). For the initial stage of training, a series of works (Zhou et al., 2021; Chen et al., 2023; Maennel et al., 2018; Pellegrini and Biroli, 2020) study the characteristics of initial condensation for different activation functions. Andriushchenko et al. (2022) find that stochastic gradient descent (SGD) with a large learning rate can facilitate sparse solutions and attribute this to the noise structure of SGD. In our work, we find that for the noise-free full-batch gradient descent algorithm, a loss spike can also facilitate the condensation phenomenon, implying that the noise structure is not the intrinsic cause of condensation.

The frequency principle has been examined on extensive datasets and deep neural network models (Xu et al., 2019; Xu and Zhou, 2021; Rahaman et al., 2019). Subsequent theoretical studies show that the frequency principle holds in the general setting with infinite samples (Luo et al., 2021). An overview of the frequency principle can be found in Xu et al. (2022). Based on this theoretical understanding, the frequency principle has inspired the design of deep neural networks that learn functions with high-frequency components fast (Liu et al., 2020; Jagtap et al., 2020; Biland et al., 2019).

## 3 Preliminary: Linear stability in training quadratic model

We consider a simple quadratic model with the loss \(R(\theta)=\lambda\theta^{2}/2\) trained by gradient descent with learning rate \(\eta\): \(\theta(t+1)=\theta(t)-\eta\cdot\mathrm{d}R(\theta)/\mathrm{d}\theta\). To ensure the linear stability of the training, it requires \(|\theta(t+1)|<|\theta(t)|\), which implies \(|1-\lambda\eta|<1\); otherwise, the training will diverge. Note that \(\lambda\) is the Hessian of \(R(\theta)\). Similarly, to ensure the linear stability of training a neural network, the maximum eigenvalue of the loss Hessian must be smaller than \(2/\eta\). Therefore, the maximum eigenvalue of the loss Hessian is often used as the measure of the sharpness of the loss landscape.
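This stability condition can be checked numerically in a few lines; the following is a minimal sketch of the quadratic model above, with illustrative values of \(\lambda\) and \(\eta\).

```python
# Gradient descent on R(theta) = lam * theta^2 / 2: the update is
# theta <- (1 - eta * lam) * theta, so training is stable iff |1 - eta * lam| < 1,
# i.e. eta < 2 / lam.
lam, theta0 = 10.0, 1.0                    # stability threshold: 2 / lam = 0.2
for eta in (0.19, 0.21):                   # just below / just above the threshold
    theta = theta0
    for _ in range(50):
        theta -= eta * lam * theta         # dR/dtheta = lam * theta
    print(eta, theta)                      # ~5e-3 for eta = 0.19, ~1e2 for eta = 0.21
```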
## 4 Loss spike

In this section, we study the phenomenon of the loss spike, where the loss suddenly increases and then decreases rapidly. For example, as shown in Fig. 2(a, d), we train a tanh fully-connected neural network (FNN) with 20 hidden neurons on a one-dimensional fitting problem, and a ReLU convolutional neural network (CNN) on the CIFAR10-1k classification problem with MSE. Both models experience loss spikes. The red curves, i.e., the \(\lambda_{\max}\) value, show that the loss spikes occur at the EoS stage.

### Typical loss spike experiments

To observe the loss spike clearly, we zoom in on the training epochs around the spike, shown in Fig. 2(b, e). The selected epochs are marked green in Fig. 2(a, d). When the maximum eigenvalue of the Hessian \(\lambda_{\max}\) (red) exceeds \(2/\eta\) (black dashed line), the loss increases, and when \(\lambda_{\max}<2/\eta\), the loss decreases, consistent with the linear stability analysis.

We then study the parameter space for a more detailed characterization. Given \(t\) training epochs, let \(\mathbf{\theta}_{i}\) denote the model parameters at epoch \(i\). We apply PCA to the matrix \(M=[\mathbf{\theta}_{1}-\mathbf{\theta}_{t},\cdots,\mathbf{\theta}_{t}-\mathbf{\theta}_{t}]\) and select the first two eigen directions \(\mathbf{e}_{1}\), \(\mathbf{e}_{2}\). The two-dimensional loss surface based on \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\) can be calculated as \(R_{S}(\mathbf{\theta}_{t}+\alpha\mathbf{e}_{1}+\beta\mathbf{e}_{2})\), where \(\alpha\), \(\beta\) are the step sizes and \(R_{S}\) is the loss function on the dataset \(S\). The trajectory point of parameter \(\mathbf{\theta}_{i}\) is the projection of \(\mathbf{\theta}_{i}-\mathbf{\theta}_{t}\) onto the PCA directions, i.e., \((\langle\mathbf{\theta}_{i}-\mathbf{\theta}_{t},\mathbf{e}_{1}\rangle,\langle\mathbf{\theta}_{i}-\mathbf{\theta}_{t},\mathbf{e}_{2}\rangle)\).

Parameter trajectories (blue dots) and loss surfaces along the PCA directions are shown in Fig. 2(c, f). The two distinct examples exhibit similar behaviors. At the beginning of the ascent stage of the spike, the parameter is in a small-loss region, where the opening of the contour lines is towards the left, indicating a leftward component of the descent direction. In the left region, the contour lines are denser, implying a sharper loss surface. Once \(\lambda_{\max}>2/\eta\), the parameters become unstable, and the loss value increases exponentially. In the large-loss region, the opening of the contour shifts to the right, indicating a rightward component of the descent direction, leading towards a sparser contour, i.e., a flatter loss surface. After several steps, when \(\lambda_{\max}<2/\eta\), the training returns to the stable stage.

### Smaller-loss-as-sharper (SLAS) structure

The above experiments reveal a common structure that causes a loss spike: the \(\lambda_{\max}\) sharpness increases in the direction of decreasing loss. We call this a smaller-loss-as-sharper (SLAS) structure. The SLAS structure differs from the SLAF (smaller-loss-as-flatter) structure studied in Ma et al. (2022), which is also common in the EoS stage, as shown in Fig. 3(a). A toy example of the SLAS structure is shown in Fig. 3(b). The left cross-section of the loss landscape has a flatter curvature, while the right one has a sharper curvature.
At the minimum of the left cross-section (the \(L_{1}\) dashed line), the opening of the contour lines is towards the right, and the parameter point will move right, which makes the curvature sharper. Once \(\eta>2/\lambda_{\max}\), the training starts to diverge to a large-loss region and the opening of the contour turns left (the \(L_{2}\) dashed line), which makes the curvature flatter.

Figure 2: (a, d) The loss value (black) and \(\lambda_{\max}\) (red) vs. training epoch, where \(\lambda_{\max}\) is calculated every 100 epochs. (b, e) The loss value and \(\lambda_{\max}\) over a specific epoch interval, marked green in (a, d), respectively. (c, f) The loss surface and the trajectory of the model parameters along the first two PCA directions. (a, b, c) Two-layer tanh NN with width 20. The sum of the explained variance ratios of the first two PCA directions is 0.9895. (d, e, f) Two-layer ReLU CNN with Max Pooling. The sum of the explained variance ratios of the first two PCA directions is 0.9882.

The following quadratic model is a simple example of the SLAS structure:

\[f(x,y)=(50x+200)y^{2}-x+5, \tag{1}\]

where \((x,y)\in(-4,+\infty)\times\mathbb{R}\). For any constant \(C\), \(y=0\) is the minimum point of \(f(C,y)\), and the larger \(x\) is, the sharper the loss landscape in the \(y\)-direction. As shown in Fig. 3(c, d), the loss curve and the trajectory of the parameters are similar to the realistic examples above: the parameters move toward the sharp direction at the beginning of the loss spike, and then move toward the flat direction. The intuitive explanation for this phenomenon is that as \(x\) increases, \(f(x,0)\) decreases, which means that \(f(x,0)\) has a smaller value in the sharp region, i.e., the SLAS structure, which makes the opening of the contour lines point in different directions at different loss levels. For this example, we can exactly compute the derivative of Eq. (1):

\[\frac{\partial f(x,y)}{\partial x}=50y^{2}-1.\]

Since \(50x+200=50(x+4)>0\) on the domain, we have \(f(x,y)-9=(x+4)(50y^{2}-1)\), and thus

\[\frac{\partial f(x,y)}{\partial x}\begin{cases}<0&\text{if }f(x,y)<9\\ =0&\text{if }f(x,y)=9\\ >0&\text{if }f(x,y)>9\end{cases},\]

which indicates that the gradient descent update has a positive component in the \(x\) direction (toward the sharper region) when the parameters are in the small-loss region (\(f(x,y)<9\)), and a negative component in the \(x\) direction (toward the flatter region) when the parameters are in the large-loss region (\(f(x,y)>9\)).

Although the SLAS structure can explain the mechanism of the ascent stage based on the toy model, it cannot explain the rapid descent of the loss in the descent phase of the loss spike, which takes far fewer steps than training from the same loss level at initialization. For instance, for the quadratic model in the Preliminary section, the descent would be very slow if the learning rate is slightly smaller than \(2/\lambda_{\max}\). Moreover, due to the high dimensionality of the parameter space, the parameter trajectory does not always align with the first eigen direction; otherwise, as shown in the toy model, the loss would not decrease continuously. In the following, we take a step toward understanding the rapid decrease from the frequency perspective.
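A minimal sketch of gradient descent on this toy model; the learning rate and starting point are illustrative choices, and the sign check reproduces the case analysis above.

```python
import numpy as np

def f(x, y):
    return (50 * x + 200) * y**2 - x + 5

def grad(x, y):                          # (df/dx, df/dy)
    return np.array([50 * y**2 - 1, 2 * (50 * x + 200) * y])

# The x-derivative changes sign exactly on the level set f = 9:
for y0 in (0.1, np.sqrt(1 / 50), 0.2):
    print(f(0.0, y0), grad(0.0, y0)[0])  # f < 9, = 9, > 9  <->  df/dx < 0, = 0, > 0

# Gradient descent with illustrative values: x drifts toward the sharper region
# until the y-curvature 2*(50x + 200) exceeds 2/eta, the y-oscillation grows
# (loss ascends), x is then pushed back toward the flatter region, and the loss
# falls again -- a spike-like rise and fall (exact numbers depend on the choices).
p, eta = np.array([0.0, 0.05]), 0.0049
for step in range(300):
    p = p - eta * grad(*p)
    if step % 20 == 0:
        print(step, f(*p))
```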
### Frequency perspective for understanding descent stage

In this subsection, we study the mechanism of the rapid loss descent during the descent stage of a loss spike from the perspective of frequency. We base our analysis on the frequency principle (Xu et al., 2019, 2020; Zhang et al., 2021; Luo et al., 2021; Rahaman et al., 2019; Ronen et al., 2019), a commonly observed phenomenon which states that deep NNs often fit target functions from low to high frequencies during training. A series of frequency principle works show that low-frequency can converge faster than high-frequency. Comparing the peak point of the loss spike with a point of the same loss value early in training, the descent during the spike should mostly eliminate low-frequency error, which is fast, while the descent from the initial model must also eliminate high-frequency error, which is slow. To verify this conjecture, we study the frequency distribution of the converged part during the descent stage.

Figure 3: (a) The loss surface and the trajectory of the model parameters along the first two PCA directions in the EoS stage. (b) Schematic illustration of the SLAS structure. (c) The loss value and the maximum eigenvalue of the Hessian matrix during a loss spike of the toy model. (d) The loss surface and the GD trajectory of the two-dimensional parameters of the toy model.

The peak of the loss spike is denoted as \(\mathbf{\theta}_{\max}\), the initial point which has a similar loss to \(\mathbf{\theta}_{\max}\) is denoted as \(\mathbf{\theta}_{\rm ini,m}\), and the parameter at the end of the loss spike (a point roughly selected when the descent becomes slow) is denoted as \(\mathbf{\theta}_{\rm end}\). We then study the frequency distribution of the spike output difference \(f_{\rm peak,diff}:=f_{\mathbf{\theta}_{\max}}-f_{\mathbf{\theta}_{\rm end}}\) and the initial output difference \(f_{\rm ini,diff}:=f_{\mathbf{\theta}_{\rm ini,m}}-f_{\mathbf{\theta}_{\rm end}}\). For comparison, we also randomly select the parameter \(\mathbf{\theta}_{\rm rnd}:=\mathbf{\theta}_{\rm end}+(\|\mathbf{\theta}_{\rm end}-\mathbf{\theta}_{\max}\|_{2}/\|\varepsilon\|_{2})\,\varepsilon\), where \(\varepsilon\sim N(0,I)\) is a random variable, and study the frequency distribution of the random output difference \(f_{\rm rnd,diff}:=f_{\mathbf{\theta}_{\rm rnd}}-f_{\mathbf{\theta}_{\rm end}}\).

We characterize the frequency distribution by taking different low-frequency thresholds and studying the low-frequency proportion. For a low-frequency threshold \(K\), the low-frequency proportion (LFP) is defined as the power of the low-frequency components over the whole spectrum,

\[\mathrm{LFP}(K)=\frac{\sum_{k\leq K}\|\hat{f}_{\mathbf{\theta}}(k)\|^{2}}{\sum_{k}\|\hat{f}_{\mathbf{\theta}}(k)\|^{2}}, \tag{2}\]

where \(\hat{f}_{\mathbf{\theta}}\) indicates the Fourier transform of the function \(f_{\mathbf{\theta}}\). As shown in Fig. 4, the low-frequency proportion of the spike output difference is significantly larger than that of the initial output difference and the random output difference, where we take 100 samples of the random variable \(\varepsilon\) to obtain the mean value and the error bar for each low-frequency threshold. The large low-frequency proportion of the spike output difference is the key reason for the rapid drop in the loss value during the descent stage, as suggested by the frequency principle.
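Eq. (2) is straightforward to compute with an FFT. Below is a minimal sketch for a one-dimensional output difference sampled on a uniform grid; the paper's exact sampling and normalization are not specified, so this is illustrative only.

```python
import numpy as np

def lfp(values, K):
    """Low-frequency proportion, Eq. (2), for samples on a uniform 1-D grid."""
    power = np.abs(np.fft.rfft(values)) ** 2   # one-sided spectrum, k = 0..n/2
    return power[: K + 1].sum() / power.sum()

x = np.linspace(0, 1, 256, endpoint=False)
# A mostly low-frequency "output difference": k = 1 dominates, k = 20 is weak.
diff = np.sin(2 * np.pi * x) + 0.1 * np.sin(2 * np.pi * 20 * x)
print(lfp(diff, K=5))                          # ~0.99: dominated by k <= 5
```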
## 5 Revisit the flatness-generalization picture

Motivated by the loss spike analysis from the frequency perspective, we further revisit the common flatness-generalization picture. A series of previous works (Hochreiter and Schmidhuber, 1997; Li et al., 2017) attempt to link the flatness of the loss landscape with generalization, so as to conveniently characterize the model through flatness. A classic empirical illustration is shown in Fig. 1, which vividly expresses the reason why flat solutions tend to have better generalization: the training loss landscape and the test landscape usually do not exactly coincide due to sampling noise, and a flat solution is robust to this perturbation while a sharp solution is not. For such a one-dimensional case, this analysis is valid, but the loss landscape of a NN is very high-dimensional, and such a simple visualization or explanation remains to be validated.

Figure 4: Low-frequency proportion for different low-frequency thresholds. The NN used is a two-layer tanh NN with width 20. For the random output difference, we calculate the mean value and the error bar over 100 random samples.

The first eigen direction of the loss Hessian, i.e., the eigen direction corresponding to the maximum eigenvalue, is the sharpest direction. Based on the flatness-generalization picture, it is natural to use the maximum eigenvalue as the measure of flatness, which would then also indicate generalization. However, this naive analysis is not always correct for neural networks.

### Frequency perspective

Since the maximum eigenvalue of the loss Hessian indicates the linear stability of the training, it is often used as a measure of flatness/sharpness; that is, a larger maximum eigenvalue indicates a sharper loss landscape. As shown by the linear stability analysis, once the maximum eigenvalue is larger than \(2/\eta\), the training oscillates and diverges along the first eigen direction. Meanwhile, as the parameter moves away from the minimum point along the first eigen direction, the loss spike is mainly due to the large low-frequency difference, as shown in Fig. 4. Therefore, the deviation in the first eigen direction of the loss Hessian mainly leads to the deviation of low-frequency components.

To examine the above analysis, we first obtain a model parameter \(\mathbf{\theta}_{\rm train}\) with poor generalization by training the model initialized in the linear regime (Luo et al., 2021), and then further train \(\mathbf{\theta}_{\rm train}\) on the test dataset with a small learning rate to obtain the model parameter \(\mathbf{\theta}_{\rm test}\). We study the impact of each eigen direction on the test loss by eliminating the difference between \(\mathbf{\theta}_{\rm train}\) and \(\mathbf{\theta}_{\rm test}\) in the first \(i\) eigen directions \(\mathbf{\nu}_{j}\), where \(j\) indexes the eigenvalues. As shown in Fig. 5(a), we study the change of the test loss \(L(i)\) with the eigenvalue index \(i\),

\[L(i)=R_{S_{\rm test}}\left(\mathbf{\theta}_{\rm train}+\sum_{j=1}^{i}\langle\mathbf{\theta}_{\rm test}-\mathbf{\theta}_{\rm train},\mathbf{\nu}_{j}\rangle\mathbf{\nu}_{j}\right),\]

where \(S_{\rm test}\) is the test dataset. The movement of parameters along the eigenvectors corresponding to large eigenvalues has a weak impact on the test loss, while the movement along the eigenvectors corresponding to small eigenvalues has a significant impact. A reasonable explanation from the perspective of frequency is as follows. In common datasets, low-frequency components often dominate over high-frequency ones.
For noisy sampling, the dominant low-frequency is shared by both the training and the test data. When the parameters move along the eigen directions corresponding to large eigenvalues, the network output changes mostly at low-frequency, which is already captured by both \(\mathbf{\theta}_{\rm train}\) and \(\mathbf{\theta}_{\rm test}\). Therefore, improving model generalization often requires certain high-frequency changes. As shown in Fig. 5(b), we move \(\mathbf{\theta}_{\rm train}\) along each of the first nine eigen directions and show the difference between the network outputs before and after the movement, i.e., \(f_{\mathbf{\theta}_{\rm train}+\mathbf{\nu}_{i}/\sqrt{\lambda_{i}}}-f_{\mathbf{\theta}_{\rm train}}\), where the \(1/\sqrt{\lambda_{i}}\) factor makes the loss of the networks moved in different eigen directions approximately the same. The output differences show that when the parameters move along an eigen direction corresponding to a larger eigenvalue, the change in the model output is less oscillatory, i.e., dominated by lower frequencies. Since the low-frequency is captured by both \(\mathbf{\theta}_{\rm train}\) and \(\mathbf{\theta}_{\rm test}\), they should be close in the eigen directions corresponding to large eigenvalues, which is verified in the following subsection.

### Difference on each eigen direction

We then examine the projection of \(\mathbf{\theta}_{\rm test}-\mathbf{\theta}_{\rm train}\) onto each eigen direction of \(H(\mathbf{\theta}_{\rm train})\). As shown in Fig. 6, we plot the projection of \(\mathbf{\theta}_{\rm test}-\mathbf{\theta}_{\rm train}\) on each eigenvector \(\mathbf{\nu}_{i}\) (blue bar) for the FNN on the function fitting problem and for the CNNs on the CIFAR10 classification problem. Due to the high cost of computing the eigenvectors of the large Hessian matrix, we use the Lanczos method (Cullum and Willoughby, 2002) to numerically compute the first \(N\) eigenvalues and their corresponding eigenvectors. For \(n<N\), we use \(\sum_{i=1}^{n}\lambda_{i}^{2}/\sum_{i=1}^{N}\lambda_{i}^{2}\) as the explained variance ratio, i.e., to measure how much flatness information the first \(n\) eigen directions (orange line) can explain. For different network structures and tasks, the projection of \(\mathbf{\theta}_{\rm test}-\mathbf{\theta}_{\rm train}\) on the eigenvector \(\mathbf{\nu}_{i}\) is positively correlated with the eigenvalue index \(i\), which confirms that \(\mathbf{\theta}_{\rm train}\) and \(\mathbf{\theta}_{\rm test}\) have little difference in the eigen directions corresponding to large eigenvalues, i.e., the low-frequency part. Note that in Fig. 6(d), the two minima, \(\mathbf{\theta}_{\rm small}\) and \(\mathbf{\theta}_{\rm large}\), are found with small and large batch sizes, respectively, and they also have little difference in the eigen directions corresponding to large eigenvalues.

### Implications

The above analysis suggests the following implications: i) the maximum eigenvalue of the loss Hessian is a good measure of sharpness, in the sense of whether the training is linearly stable, but not a good measure of generalization; ii) the common low-dimensional flatness-generalization picture has difficulty capturing the high-dimensional loss landscape of a neural network; the generalization performance is a combined effect of most eigen directions, including those with small eigenvalues.
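As a practical note, the top Hessian eigenvalue used throughout this analysis can be estimated without ever forming the Hessian. The sketch below uses power iteration on Hessian-vector products, a simpler stand-in for the Lanczos method cited above; the small network and data are illustrative.

```python
import torch

def lambda_max(loss, params, iters=50):
    """Estimate the dominant Hessian eigenvalue of `loss` w.r.t. `params` by
    power iteration on Hessian-vector products (no explicit Hessian needed)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(iters):
        norm = torch.sqrt(sum((u * u).sum() for u in v))
        v = [u / norm for u in v]
        gv = sum((g * u).sum() for g, u in zip(grads, v))        # grad . v
        v = [h.detach() for h in torch.autograd.grad(gv, params, retain_graph=True)]
    # After the loop, v = H v_prev with unit v_prev, so its norm estimates |lambda_max|.
    return torch.sqrt(sum((u * u).sum() for u in v)).item()

# Illustrative usage on a small two-layer tanh network with MSE loss.
net = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1))
x, y = torch.randn(64, 1), torch.randn(64, 1)
loss = ((net(x) - y) ** 2).mean()
print(lambda_max(loss, list(net.parameters())))
```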
## 6 Loss spike facilitates condensation

From the analysis above, the restriction on \(\lambda_{\max}\) does not seem to be the essential reason why the loss spike affects the generalization of the model. In this section, we study the effect of the loss spike on condensation, which may improve the model's generalization in some situations (He et al., 2019; Jastrzebski et al., 2017). A condensed network, which refers to a network with neurons condensing in several discrete directions, is equivalent to a smaller network (Zhou et al., 2021; Luo et al., 2021); it has a lower effective complexity than it appears to. The embedding principle (Zhang et al., 2021, 2022; Fukumizu et al., 2019; Simsek et al., 2021) shows that a condensed network, although equivalent to a smaller one in approximation, has more degeneracy and descent directions that may accelerate the training process. The low effective complexity and simple training process may be underlying reasons for good generalization.

We show that the loss spike can facilitate the condensation phenomenon for the noise-free full-batch gradient descent algorithm. As shown in Fig. 7, we train a tanh NN with 100 hidden neurons on the one-dimensional fitting problem using MSE as the loss function. Additional experimental verification on ReLU NNs is provided in Appendix B.1.

Figure 5: Two-layer tanh FNN with a width of 500. (a) The variation of the test loss with the eigenvalue index \(i\) when eliminating the difference between \(\mathbf{\theta}_{\rm train}\) and \(\mathbf{\theta}_{\rm test}\) in the first \(i\) eigen directions. (b) The output difference before and after moving \(\mathbf{\theta}_{\rm train}\) in the first nine eigen directions of its Hessian matrix. Each subplot corresponds to one eigen direction.

Figure 6: Blue bar: (a, b, c) show the projection values in each eigen direction of \(H(\mathbf{\theta}_{\rm train})\) for \(\mathbf{\theta}_{\rm test}-\mathbf{\theta}_{\rm train}\), and (d) for \(\mathbf{\theta}_{\rm large}-\mathbf{\theta}_{\rm small}\). Orange line: the sum of the first \(n\) eigenvalues over all eigenvalues. (a) Two-layer tanh FNN for the one-dimensional fitting problem. (b) Two-layer ReLU CNN with Max Pooling for the CIFAR10 classification problem. (c) Three-layer ReLU CNN with Max Pooling for the CIFAR10 classification problem. (d) Five-layer ReLU CNN with Max Pooling for the CIFAR10 classification problem.

To clearly study the effect of the loss spike on condensation, we take the parameter initialization distribution in the linear regime (Luo et al., 2021), which does not induce condensation without additional constraints. For NNs with identical initialization, we train the network separately with a small learning rate (blue) and a large learning rate (orange). In the left subfigure of Fig. 7, the loss has a significant spike for the large learning rate, but not for the small one. At the same time, the middle subfigure reveals that the model output trained without a loss spike (blue) has more oscillation than the model output trained with a loss spike (orange).

We study the features of the parameters to better understand the underlying effect of the loss spike. To study the parameter features, we measure each parameter pair \((a_{j},\mathbf{w}_{j})\) by the feature direction \(\hat{\mathbf{w}}_{j}=\mathbf{w}_{j}/\|\mathbf{w}_{j}\|_{2}\) and the amplitude \(A_{j}=|a_{j}|\,\|\mathbf{w}_{j}\|_{2}\) (see Footnote 2). For a NN with one-dimensional input, after incorporating the bias term, \(\mathbf{w}_{j}\) is two-dimensional, and we use the angle between \(\mathbf{w}_{j}\) and the unit vector \((1,0)\) to indicate the orientation of each neuron.
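A small sketch of how these per-neuron features can be computed for a two-layer network with one-dimensional input; the tensor names and shapes are illustrative assumptions.

```python
import torch

def neuron_features(W, b, a):
    """Orientation and amplitude of each hidden neuron of a two-layer 1-D network
    f(x) = sum_j a_j * sigma(W_j * x + b_j), with the bias absorbed into w_j."""
    w = torch.stack([W.flatten(), b.flatten()], dim=1)   # (m, 2) input weights
    angles = torch.atan2(w[:, 1], w[:, 0])               # angle of w_j to (1, 0)
    amplitudes = a.flatten().abs() * w.norm(dim=1)       # A_j = |a_j| * ||w_j||_2
    return angles, amplitudes

# Condensation shows up as many neurons with non-negligible amplitude sharing
# (nearly) the same angle, as in the scatter plots of Fig. 7.
m = 100
angles, amps = neuron_features(torch.randn(m, 1), torch.randn(m), torch.randn(m, 1))
```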
The scatter plots of \(\{(\hat{\mathbf{w}}_{j},|a_{j}|)\}_{j=1}^{m}\) and \(\{(\hat{\mathbf{w}}_{j},\|\mathbf{w}_{j}\|_{2})\}_{j=1}^{m}\) for the tanh activation are presented in Appendix B to eliminate the impact of the non-homogeneity of the tanh activation.

Footnote 2: The amplitude accurately describes the contribution of ReLU neurons due to homogeneity. For tanh neurons, there is a positive correlation between their amplitude and their contribution. Appendix B provides a more refined characterization of tanh network features.

The scatter plots of \(\{(\hat{\mathbf{w}}_{j},A_{j})\}_{j=1}^{m}\) of the NN are shown in the right subfigure of Fig. 7. Parameters trained without loss spikes (blue) remain closer to their initial values (green) than those trained with loss spikes (orange). For the case with loss spikes, the non-zero parameters tend to condense in several discrete orientations, showing a tendency towards condensation.

Figure 7: Comparison of two-layer tanh NNs with identical initialization but different learning rates \(\eta\). The loss spike occurs at the large learning rate (orange) but not at the small learning rate (blue). Left: loss vs. epoch; the inset in the upper right corner shows the loss spike in more detail. Middle: model outputs. Right: the weight feature distributions of the trained models and the initial one.

## 7 Conclusion and discussion

In this work, we provide an explanation for loss spikes in neural network training. We explain the ascent stage based on the landscape structure, i.e., the SLAS structure, and we explain the descent stage from the perspective of frequency. We revisit the common flatness-generalization picture based on the frequency analysis. We also find that noise-free gradient descent with loss spikes can facilitate condensation, which may be an underlying reason for good generalization in some situations. Many questions remain open. For example, why is the eigen direction corresponding to a large eigenvalue dominated by low-frequency? Why can the loss spike facilitate condensation? We leave these important questions to future work.

## Acknowledgments

This work is sponsored by the National Key R&D Program of China Grant No. 2022YFA1008200, the Shanghai Sailing Program, the Natural Science Foundation of Shanghai Grant No. 20ZR1429000, the National Natural Science Foundation of China Grant No. 62002221, Shanghai Municipal of Science and Technology Major Project No. 2021SHZDZX0102, the HPC of the School of Mathematical Sciences and the Student Innovation Center, and the Siyuan-1 cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University.
2308.14555
Kernel Limit of Recurrent Neural Networks Trained on Ergodic Data Sequences
Mathematical methods are developed to characterize the asymptotics of recurrent neural networks (RNN) as the number of hidden units, data samples in the sequence, hidden state updates, and training steps simultaneously grow to infinity. In the case of an RNN with a simplified weight matrix, we prove the convergence of the RNN to the solution of an infinite-dimensional ODE coupled with the fixed point of a random algebraic equation. The analysis requires addressing several challenges which are unique to RNNs. In typical mean-field applications (e.g., feedforward neural networks), discrete updates are of magnitude $\mathcal{O}(\frac{1}{N})$ and the number of updates is $\mathcal{O}(N)$. Therefore, the system can be represented as an Euler approximation of an appropriate ODE/PDE, which it will converge to as $N \rightarrow \infty$. However, the RNN hidden layer updates are $\mathcal{O}(1)$. Therefore, RNNs cannot be represented as a discretization of an ODE/PDE and standard mean-field techniques cannot be applied. Instead, we develop a fixed point analysis for the evolution of the RNN memory states, with convergence estimates in terms of the number of update steps and the number of hidden units. The RNN hidden layer is studied as a function in a Sobolev space, whose evolution is governed by the data sequence (a Markov chain), the parameter updates, and its dependence on the RNN hidden layer at the previous time step. Due to the strong correlation between updates, a Poisson equation must be used to bound the fluctuations of the RNN around its limit equation. These mathematical methods give rise to the neural tangent kernel (NTK) limits for RNNs trained on data sequences as the number of data samples and size of the neural network grow to infinity.
Samuel Chun-Hei Lam, Justin Sirignano, Konstantinos Spiliopoulos
2023-08-28T13:17:39Z
http://arxiv.org/abs/2308.14555v2
# Kernel Limit of Recurrent Neural Networks Trained on Ergodic Data Sequences ###### Abstract Mathematical methods are developed to characterize the asymptotics of recurrent neural networks (RNN) as the number of hidden units, data samples in the sequence, hidden state updates, and training steps simultaneously grow to infinity. In the case of an RNN with a simplified weight matrix, we prove the convergence of the RNN to the solution of an infinite-dimensional ODE coupled with the fixed point of a random algebraic equation. The analysis requires addressing several challenges which are unique to RNNs. In typical mean-field applications (e.g., feedforward neural networks), discrete updates are of magnitude \(\mathcal{O}(\frac{1}{N})\) and the number of updates is \(\mathcal{O}(N)\). Therefore, the system can be represented as an Euler approximation of an appropriate ODE/PDE, which it will converge to as \(N\to\infty\). However, the RNN hidden layer updates are \(\mathcal{O}(1)\). Therefore, RNNs cannot be represented as a discretization of an ODE/PDE and standard mean-field techniques cannot be applied. Instead, we develop a fixed point analysis for the evolution of the RNN memory states, with convergence estimates in terms of the number of update steps and the number of hidden units. The RNN hidden layer is studied as a function in a Sobolev space, whose evolution is governed by the data sequence (a Markov chain), the parameter updates, and its dependence on the RNN hidden layer at the previous time step. Due to the strong correlation between updates, a Poisson equation must be used to bound the fluctuations of the RNN around its limit equation. These mathematical methods give rise to the neural tangent kernel (NTK) limits for RNNs trained on data sequences as the number of data samples and size of the neural network grow to infinity. ## 1 Introduction Recurrent neural networks (RNN) are widely used to model sequential data. Examples include natural language processing (NLP) and speech recognition [1, 2]. The key architectural feature of an RNN is a hidden layer which is updated at each time step of the sequence. This hidden layer - sometimes referred to as a "memory layer" - is a nonlinear representation of the history of the data sequence. Using its hidden layer, the RNN can - in principle - learn functions which map the path of a sequence (of arbitrary length) to fixed-dimensional vector predictions. The RNN's hidden layer therefore provides a parsimonious, nonlinear representation of the data in the sequence up until the current time. An RNN is trained by minimizing an appropriate loss function over a high-dimensional set of parameters using a gradient-descent-type algorithm. The mathematical theory for RNNs is limited. In this article, we study the asymptotics of a single-layer RNN as the number of hidden units, training steps, and data samples in the sequence tend to infinity. In the case of an RNN with a simplified weight matrix, we prove the convergence of the RNN to the solution of an infinite-dimensional ODE coupled with the fixed point of a random algebraic equation. For feedforward neural networks with i.i.d. data samples, limits have been proven as the number of hidden units, training steps, and data samples tend to infinity. The dynamics of the output of the trained network converges to either an ODE (the Neural Tangent Kernel NTK limit) [3] or a (random) PDE (the mean-field limit) [4, 5, 6, 7] depending upon the normalization used for the neural network output. 
For the NTK case, the equation for the limit neural network can be studied to prove global convergence of the neural network to the global minimizer of the objective function. Proving limit results for RNNs is substantially more challenging. The data sequence is not i.i.d., which complicates the analysis of the evolution of the trained neural network. Furthermore, RNNs cannot be studied using standard mean-field or weak convergence analysis (e.g., as is possible for feedforward neural networks). We explain in more detail below. Consider a classic recurrent neural network (the standard Elman network [8]) with one hidden layer that takes in the input sequence \(X=(X_{k})_{k\geq 0}\) and outputs a prediction \((\hat{Y}_{k}^{N})_{k\geq 0}\) for the target data \((Y_{k}^{N})_{k\geq 0}\). The RNN predictions are given by the model outputs \(\hat{Y}_{k}^{N}=g_{k}^{N}(X;\theta)\). The RNN depends on the parameters \(\theta=(C,W,B)\), which must be trained on data. In particular, for all \(k\geq 0\), the RNN hidden layer \(S_{k}^{N}\) and predictions \(\hat{Y}_{k}^{N}\) are updated as: \[S_{k+1}^{i,N}(X;\theta) :=\sigma\left((W^{i})^{\top}X_{k}+\frac{1}{N^{\beta_{1}}}\sum_{j=1}^{N}B^{ij}S_{k}^{j,N}(X;\theta)\right),\quad S_{0}^{i,N}(X;\theta)=0 \tag{1.1}\] \[g_{k}^{N}(X;\theta) :=\frac{1}{N^{\beta_{2}}}\sum_{i=1}^{N}C^{i}S_{k+1}^{i,N}(X;\theta), \tag{1.2}\] where * \(N\) is the number of hidden units in the memory states \(S_{k}^{i,N}(X;\theta)\), * \(\beta_{1}=1\) and \(\beta_{2}\) determine the scaling used to normalise the outputs of the network, * \(C\in\mathbb{R}^{N}\), with \(C^{i}\) representing the \(i\)-th component of \(C\), * \(W\in\mathbb{R}^{N\times d}\), with \(W^{i}\) representing the \(i\)-th column of \(W\) as a column vector, and * \(B\in\mathbb{R}^{N\times N}\), with \(B^{ij}\) being the \((i,j)\)-entry of \(B\). The data samples \(X_{k}\), which are elements of a data sequence, are _not_ i.i.d. (unlike for feedforward neural networks). In our mathematical analysis, we will make the simplifying assumption that all columns of \(B\) are equal, i.e. for all \(j\) we have \(B^{ij}=B^{j}\) for some \(B^{j}\). The memory state \(S_{k}^{i,N}(X;\theta)\) is a nonlinear representation of the history of the data sequence \((X_{j})_{j=0}^{k-1}\). Using this nonlinear representation - which is learned from the data by training the parameters \(\theta\) - the RNN generates a prediction \(\hat{Y}_{k}^{N}\) for the target data \(Y_{k}\). Notice that if we fix \(X_{k}=x\) and \(B^{ij}=0\), (1.2) becomes a standard feedforward network (i.e., the network does not dynamically evolve over time \(k\) and the network output is a static prediction). Limits for gradient-descent-trained feedforward networks as the number of hidden units \(N\to\infty\) can be established when \(\frac{1}{2}\leq\beta_{2}<1\) (the NTK limit [3]) or \(\beta_{2}=1\) (the mean-field limit [4, 5, 6, 7]). A "typical" limit ODE from mean-field analysis will not occur for RNNs, and standard mean-field techniques (see, for example, [9]) cannot be directly applied. As an illustrative example, standard mean-field techniques would be applicable to a neural network with the following updates: \[S_{k+1}^{i,N}(X;\theta):=S_{k}^{i,N}(X;\theta)+\frac{1}{N}\sigma\left((W^{i})^{\top}X_{k}+\frac{1}{N^{\beta_{1}}}\sum_{j=1}^{N}B^{ij}S_{k}^{j,N}(X;\theta)\right),\quad S_{0}^{i,N}(X;\theta)=0. \tag{1.3}\] (1.3) is an Euler approximation of an ODE with step size \(\mathcal{O}(1/N)\).
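The contrast between the \(\mathcal{O}(1)\) update (1.1) and the Euler-type update (1.3) can be made concrete in a few lines of NumPy. The sketch below uses \(\beta_{1}=1\) and the simplified weight matrix (all columns of \(B\) equal); the sigmoid activation, the sizes, and the random parameter choices are our own illustrative conventions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, N, beta2 = 50, 3, 200, 0.75
X = rng.normal(size=(T, d))                       # input sequence X_0, ..., X_{T-1}
C, B = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)
W = rng.normal(size=(N, d))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

S_rnn, S_mf = np.zeros(N), np.zeros(N)            # S_0^{i,N} = 0 in both cases
for k in range(T):
    # RNN update (1.1): an O(1) jump of the hidden layer at every step
    S_rnn = sigmoid(W @ X[k] + np.mean(B * S_rnn))  # mean = (1/N) sum_j B^j S_k^{j,N}
    y_hat = np.sum(C * S_rnn) / N**beta2            # network output (1.2)
    # mean-field-style update (1.3): an O(1/N) increment, i.e. an Euler step
    S_mf = S_mf + sigmoid(W @ X[k] + np.mean(B * S_mf)) / N
```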
(1.3) is a standard mean-field framework, and it can be proven that, as \(N\to\infty\), (1.3) converges to an appropriate infinite-dimensional ODE. However, the additive term \(S_{k}^{i,N}(X;\theta)\) and the factor \(1/N\) do not appear in the RNN update (1.2). This changes the analysis: the RNN hidden layer (1.2) is not an Euler approximation to an ODE. Although (1.2) is not a standard mean-field equation, we can observe mean-field behaviour for the distribution of the hidden layer. This is illustrated by Figure 1 where, for varying \(N\), we simulated paths of the hidden layer \(\left(S_{k}^{i,N}(X;\theta)\right)_{i=1}^{N}\), based upon a common \(\theta\) and independent paths of the input sequence \(X\). The empirical distributions of the hidden units in the memory layer at a large, fixed time step \(k\) are displayed for increasing \(N\). The empirical distributions converge as \(N\rightarrow\infty\). Figure 1 suggests that a mean-field limit as \(N\rightarrow\infty\) does exist. Further details of the simulation are provided in subsection 3.3. Figure 1: Each curve represents the _overall_ empirical distribution of the untrained hidden units in the memory states (the _hidden memory units_) from all simulation instances \(\ell=1,\ldots,100\) for \(N=10^{2},\ldots,10^{6}\) and time step \(k\approx 50000\). Furthermore, numerical simulations suggest that the hidden layer is ergodic as the number of time steps \(k\rightarrow\infty\). Figure 2 displays the time-averaged first and second moments of the hidden layer. The formal definition of the time averages is provided in subsection 3.3. Figures 1 and 2 together motivate an analysis of the RNN (1.2) as both \(k,N\rightarrow\infty\). Our mean-field analysis studies an appropriate fixed point for the untrained hidden units in the memory states, and uses it to study the evolution of the RNN. The case of a constant input sequence \(X_{k}=x\) (with a finite number of hidden units) has been studied in [10, 11, 12] for developing a more efficient gradient-descent-type algorithm. However, the fixed point analysis is more complex when the input sequence \(X_{k}\) is non-constant and random. As a result, we would expect the RNN to have a _random_ fixed point, and the _distribution_ of the untrained hidden units in the memory states should converge _weakly_. The random fixed point should also depend on the distribution of the data sequence \(X_{k}\). We are able to prove that such a random fixed point exists if the data sequence \(X_{k}\) is ergodic. Section 3 shows that the ergodicity of \(X\) in fact leads to the ergodicity of \(S_{k}^{i,N}(X;\theta)\). The fixed point analysis is further complicated since the parameters \(\theta\) are simultaneously trained using the truncated backpropagation through time (tBPTT) algorithm as the hidden layer is updated at each time \(k\); see Section 2. Both the hidden layers \(S_{k}^{N}(X;\theta_{k})\) and the parameters \(\theta_{k}\) (which govern the transition function for the hidden layer \(S_{k}^{N}\)) will jointly evolve in time. Therefore, the dynamics of the RNN will be changing over time. Fortunately, the changes due to the parameter updates will be of magnitude \(\mathcal{O}(1/N)\).
Therefore, the evolution of the output layer of the network \((g_{k}^{N}(X;\theta)=\frac{1}{N^{\beta_{2}}}\sum_{i=1}^{N}C^{i}S_{k+1}^{i,N}(X;\theta))\) can be represented as an Euler approximation of an appropriate infinite-dimensional ODE whose dynamics are a function of the RNN hidden layer's random fixed point, and to which it converges as \(N\to\infty\). We emphasise that the evolution of the RNN hidden layer itself (i.e., \(S_{k+1}^{i,N}(X;\theta)\)) cannot be represented as a discretization of an ODE/PDE, since the RNN hidden layer updates are \(\mathcal{O}(1)\). The RNN hidden layer is studied as a function in a Sobolev space, whose evolution is governed by the data sequence \(X_{k}\), the parameter updates, and its dependence on the RNN hidden layer at the previous time step. Due to the correlation between updates (of both the hidden layer and the data samples), a Poisson equation must be used to bound the fluctuations of the RNN around its limit equation. Figure 2: The plots of the time-averaged first and second moments of the hidden units for a sufficiently large \(N\) (chosen to be \(10^{6}\)) and \(p=1,2\). The \(x\)-axis represents the number of time steps. We summarise the minimum/maximum of the simulated first and second moments of the time averages for independent input sequences \(X\) using a (seemingly invisible) grey band. The red line represents the mean of the time-averaged moments over all input sequences \(X\), thus providing a Monte Carlo estimate for the moments of the _random_ fixed point. The fact that the realisations of the time averages all converge as \(k\to\infty\) illustrates the ergodicity of the sequence \(S_{k}^{i,N}(X;\theta)\). The rest of the paper is organized as follows. In Section 2 we present the model that we study and our main assumptions. In Section 3 we characterise the dynamics of the RNN memory layer as a sequence of random functions of the parameters. We prove that the sequence of memory layers of the trained network is geometrically ergodic. In Section 4 we prove the convergence of the trained RNN. Our main result is presented in Theorem 4.2. Section 5 uses our limit result to prove a useful guarantee for the training of RNN models: the RNN training algorithm will asymptotically decrease the loss function (i.e., the RNN model will be updated in a descent direction). In Section 6 we present a number of technical lemmas and preliminary estimates that are used in the detailed proof of Theorem 4.2, which is presented in Section 7. In Appendix A we present a recursive inequality that is used in various places throughout the paper, as well as the construction of a clipping function. ## 2 Assumptions, Data, and Model Architecture ### Data generation Our paper focuses on the problem of recovering the map from an input data sequence to an output data sequence. We assume the input data sequence \(X=(X_{k})_{k\geq 0},X_{k}\in\mathbb{R}^{d}\) and output data sequence \(Y=(Y_{k})_{k\geq 0},Y_{k}\in\mathbb{R}\) are jointly governed by the following update equations \[(X_{k+1},Z_{k+1}) =g(X_{k},Z_{k})+\epsilon_{k}, \tag{2.1}\] \[Y_{k} =f(X_{k},Z_{k})+\eta_{k}, \tag{2.2}\] where \((\epsilon_{k},\eta_{k})_{k\geq 0},\epsilon_{k}\in\mathbb{R}^{d+1},\eta_{k}\in\mathbb{R}\) are independent, identically distributed (iid), mean-zero noises (although \(\epsilon_{k}\) need not be independent from \(\eta_{k}\)).
The map \(g:\mathbb{R}^{d+1}\to\mathbb{R}^{d+1}\) drives the dynamics of the input sequence, and \(f:\mathbb{R}^{d+1}\to\mathbb{R}\) maps the input sequence to the output sequence. The variables \(Z_{k}\in\mathbb{R}\) are _hidden_, i.e. only the inputs \((X_{k})_{k\geq 0}\) and outputs \((Y_{k})_{k\geq 0}\) are observed, and it is not guaranteed that the observed process \((X_{k},Y_{k})_{k\geq 0}\) is itself a Markov process. Let us make the following assumptions on the functions \(f,g\), as well as on the background noise of the dynamics. As we will see in the next section, these assumptions are essential in ensuring that the dynamics of the input chain is ergodic. **Assumption 2.1** (On governing dynamics and background noises of the data sequences).: 1. We assume that the function \((f,g)\) is \(L\)-globally Lipschitz (with respect to the Euclidean norm) with \(L<1\). That is, for any \((x,z),(x^{\prime},z^{\prime})\in\mathbb{R}^{d+1}\) we have \[\left|\begin{pmatrix}g(x,z)-g(x^{\prime},z^{\prime})\\ f(x,z)-f(x^{\prime},z^{\prime})\end{pmatrix}\right|\leq L\left|(x,z)-(x^{\prime},z^{\prime})\right|.\] 2. We assume that both \((\epsilon_{k})_{k\geq 0}\) and \((\eta_{k})_{k\geq 0}\) are _mean-zero_ iid random variables that are uniformly bounded by some constant \(C_{\epsilon}\). 3. We finally assume that the sequence \((X_{k},Z_{k})_{k\geq 0}\) is bounded with \(|(X_{k},Z_{k})|\leq 1\). This could be achieved by e.g. assuming \[\max\left(\sup_{x,z}|g(x,z)|,\sup_{k}|\epsilon_{k}|\right)\leq\frac{1}{2}.\] As a result, the output \((Y_{k})_{k\geq 0}\) is uniformly bounded by some constant \(C_{y}<+\infty\). ### Recurrent Neural Network A recurrent neural network is used to approximate the function \(g\) as defined above. Specifically, we will study the standard recurrent network in equation (1.2), but with the following simplifying assumption for its weight matrix: **Assumption 2.2** (Simplifying assumption on memory weights).: We assume all columns of \(B\) are equal, i.e. for all \(j\) we have \(B^{ij}=B^{j}\) for some \(B^{j}\). **Assumption 2.3** (Regularity of activation function).: We assume that \(\sigma\in C_{b}^{2}(\mathbb{R})\) (i.e. twice continuously differentiable with bounded derivatives), with output bounded by one and first and second derivatives both uniformly bounded by \(C_{\sigma}\), where \(C_{\sigma}^{2}<\min\{\frac{1}{2},\frac{1-L^{2}}{8}\}\). Assumption 2.3 on \(C_{\sigma}\) is instrumental in our proof of ergodicity of the memory states. One of the crucial results is Lemma 3.2; the related results on ergodicity are in Section 3.1. _Example 2.4_.: An example of such an activation function is the usual sigmoid function, defined as \[z\in\mathbb{R}\mapsto\sigma(z)=\frac{1}{1+e^{-z}}\implies\quad\sup_{z\in\mathbb{R}}|\sigma(z)|\leq 1.\] One can show (see e.g. [13]) that \(\sigma^{\prime}(z)\in[0,1/4]\) and \(|\sigma^{\prime\prime}(z)|\leq 1/4\) for any \(z\in\mathbb{R}\). As a result, the sigmoid function satisfies our assumption when \(L<1/2\). We emphasise that the above assumptions are included for the simplicity of our proof, and we conjecture from our simulations that the RNN exhibits the desired mean-field behaviour under weaker assumptions.
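As a quick illustration of the data model (2.1)-(2.2), the following sketch simulates a sequence \((X_{k},Y_{k})\) with a hidden state \(Z_{k}\). The particular choices of \(g\), \(f\), the noise laws, and all constants below are our own assumptions, picked only so that the contractivity and boundedness of Assumption 2.1 plausibly hold.

```python
import numpy as np

rng = np.random.default_rng(0)
L, T = 0.4, 1000          # Lipschitz scale L < 1/2, as in Example 2.4

def g(x, z):
    # an illustrative contractive update for the joint state (X_k, Z_k)
    return 0.5 * L * np.tanh(np.array([x + z, x - z]))

def f(x, z):
    # an illustrative map from the joint state to the observed output Y_k
    return 0.5 * L * np.tanh(x - z)

x, z = rng.uniform(-0.25, 0.25, size=2)      # keep |(X_0, Z_0)| <= 1
X, Y = [], []
for k in range(T):
    eps = rng.uniform(-0.05, 0.05, size=2)   # bounded, mean-zero state noise
    eta = rng.uniform(-0.05, 0.05)           # bounded, mean-zero output noise
    X.append(x)
    Y.append(f(x, z) + eta)                  # observation (2.2); Z_k stays hidden
    x, z = g(x, z) + eps                     # state update (2.1)
```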
**Assumption 2.5** (Initialisation of parameters).: We assume that the trained parameters \((C^{i},W^{i},B^{i})\) are initialised independently from the data sequence, so that the sequence \((C^{i}_{0},W^{i}_{0},B^{i}_{0})_{i=1}^{N}\) is composed of independent and identically distributed random vectors with a distribution \((C^{i}_{0},W^{i}_{0},B^{i}_{0})\sim\lambda_{0}:=\lambda(dc,dw,db)\), such that \(C^{i}_{0}\) and \(W^{i}_{0}\) are independent from each other with \(C^{i}_{0}\) having zero mean. We further assume that the measure \(\lambda(dc,dw,db)\) is absolutely continuous with respect to Lebesgue measure, i.e., it has a density. In addition, we assume that \(|C^{i}_{0}|\leq 1\), \(|B^{i}_{0}|\leq 1\) and \(\mathbb{E}\left\|W^{i}_{0}\right\|^{2}\leq 1\). _Remark 2.6_.: For simplicity of notation, we define \(\lambda_{0}^{N}\) to be the empirical measure of the trained parameters at initialisation on the space \(\mathbb{R}^{1+d+1}\), i.e. \[\lambda_{0}^{N}=\frac{1}{N}\sum_{i=1}^{N}\delta_{C_{0}^{i},W_{0}^{i},B_{0}^{i}}.\] Finally, the dependence of the RNN hidden layer \(S_{k}^{i,N}(X;\theta)\) on \(X\) may be dropped in later sections. ### Training the RNN parameters The parameters \(\theta\) are in practice trained with an online SGD algorithm seeking to minimize the function \[\mathcal{L}(\theta)=\frac{1}{T}\sum_{k=1}^{T}\mathcal{L}_{k}(\theta),\quad\text{where }\mathcal{L}_{k}(\theta)=\frac{1}{2}(g_{k}^{N}(X;\theta)-Y_{k})^{2}.\] The computational cost of evaluating the RNN network up to step \(T\) and performing one full evaluation of the gradient by back-propagation through time (BPTT) grows as \(\mathcal{O}(T)\), which becomes intractable if we want to perform gradient-based algorithms for large \(T\) when minimising the loss function. We instead update the parameters through the _online stochastic gradient descent with truncated back-propagation through time_ (online SGD with tBPTT) [14, 15]. A detailed explanation is given in the next section, but the main idea is to truncate the computational graph to \(\tau\) time steps, for \(\tau\ll T\), when computing the outputs of the RNN and the gradients with respect to the parameters. For simplicity and without loss of generality, we restrict our discussion to the case \(\tau=1\). Assume that the network's (estimated) output at each step is \(\hat{Y}_{k}^{N}\), which is compared with the actual observation \(Y_{k}\) in the computation of the loss function and the estimated gradient. As it turns out, it is challenging to prove the boundedness of \(\hat{Y}_{k}^{N}\). To resolve this issue, we will clip the gradients used in the parameter updates. Gradient clipping is a standard method in deep learning and in training RNNs [13, 16, 17]. The gradient clipping will actually disappear in the final approximation argument as the number of hidden units \(\to\infty\). Once the gradients in the parameter updates are clipped, the output \(\hat{Y}_{k}^{N}\) can be proven to be bounded. We use the following clipping function: **Definition 2.7** (Smooth clipping function [18]).: A family of functions \((\psi^{N})_{N\in\mathbb{N}},\ \psi^{N}\in C_{b}^{2}(\mathbb{R})\) is a family of _smooth clipping functions_ with parameter \(\gamma\) if the following are satisfied: 1. \(\left|\psi^{N}(x)\right|\) is bounded by \(2N^{\gamma}\), 2. \(\psi^{N}(x)=x\) for \(x\in[-N^{\gamma},N^{\gamma}]\), 3. \(\left|\frac{d}{dx}\psi^{N}(x)\right|\leq 1\).
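To illustrate Definition 2.7, here is one possible \(C^{2}\) clipping function: the identity on \([-M,M]\) with a tanh saturation outside, where \(M=N^{\gamma}\). This is only a sketch of a function satisfying the three properties (for \(M\geq 1\)); it is not the construction used in the paper.

```python
import numpy as np

def smooth_clip(x, M):
    """A C^2 clipping function: identity on [-M, M], tanh saturation outside.
    Then |psi| <= M + 1 <= 2M for M >= 1, and |psi'| = sech^2 <= 1, so the
    three properties of Definition 2.7 hold. The first and second derivatives
    match at x = +/- M (both equal 1 and 0 there), giving C^2 regularity."""
    x = np.asarray(x, dtype=float)
    return np.where(x > M, M + np.tanh(x - M),
           np.where(x < -M, -M + np.tanh(x + M), x))

# quick check of the defining properties for M = N^gamma = 2
xs = np.linspace(-10.0, 10.0, 2001)
assert np.all(np.abs(smooth_clip(xs, 2.0)) <= 4.0)          # bounded by 2M
assert np.allclose(smooth_clip(np.array([-1.5, 0.3, 1.9]), 2.0),
                   [-1.5, 0.3, 1.9])                         # identity on [-M, M]
```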
A construction of such a clipping function is provided in Appendix B, following the arguments in [19]. By the fundamental theorem of calculus we also know that \(|\psi^{N}(x)|\leq|x|\). The parameter \(\gamma\) is to be chosen in the next section, and it should be small to ensure that \(|\psi^{N}(\hat{Y}_{k}^{N})|\) does not blow up too quickly as \(N\to\infty\). **Assumptions for the wide-network limit.** In our paper we set \(\beta_{1}=1\). Depending on the choice of \(\beta_{2}\), we will get different limits. Preliminary works on feedforward NNs (including our prior work) [7, 20, 21, 22, 23] demonstrate that when \(N\to+\infty\), the evolution of the output of the feedforward NN converges to a limit equation: 1. for \(\beta_{2}=1\), the limit equation is expected to be a PDE, 2. for \(\beta_{2}=\beta\in(1/2,1)\), the limit equation is expected to be an infinite-dimensional ODE, and 3. for \(\beta_{2}=1/2\), the limit equation is expected to be a _random_ infinite-dimensional ODE. Our paper will focus on the case \(\beta_{2}=\beta\in(1/2,1)\). We are aware that some analyses have been done for the case \(\beta_{2}=1\) in [24], with the RNN trained _offline_ by continuous gradient descent after observing a fixed number of steps of the sequence \((X_{k})_{k\geq 0}\). We emphasise that in the present work the RNN is trained _online_, so that we update the parameters every time we observe a new step of our sequences \((X_{k},Y_{k})\). We study the asymptotics of the training of the RNN as the training time (and hence the number of observations made of the input and output sequences) grows with the width of the hidden layer, i.e. as \(N\rightarrow+\infty\). In order to derive a well-defined typical behaviour of the neural network when training our RNN in the limit \(N\rightarrow+\infty\), we choose the learning rate to be \[\alpha^{N}:=\frac{\alpha}{N^{2-2\beta}}, \tag{2.3}\] for some constant \(\alpha>0\). The learning rate is constant in time, and with this we may explicitly state the online SGD with tBPTT; see Algorithm 1.
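For concreteness, here is a minimal NumPy sketch of the update rules of Algorithm 1 below (\(\tau=1\)). The initialisation laws, the constants, and the hard `np.clip` standing in for the smooth clipping function \(\psi^{N}\) are our own illustrative choices; the factor \(X_{k}\) in the \(W\)-update spells out the chain rule (cf. the term \(\nabla h(w)^{\top}X_{k}\) in Lemma 4.1).

```python
import numpy as np

def sgd_tbptt(X, Y, N, T, alpha=0.5, beta=0.75, gamma=0.1, seed=0):
    """Sketch of online SGD with tBPTT (tau = 1); X: (steps, d), Y: (steps,)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    C = rng.uniform(-1.0, 1.0, N)             # illustrative initialisation;
    W = rng.normal(0.0, 1.0, (N, d))          # Assumption 2.5 only fixes moments
    B = rng.uniform(-1.0, 1.0, N)
    S = np.zeros(N)                           # hat S_0^{i,N} = 0
    M = N**gamma                              # clipping threshold
    for k in range(min(int(N * T), len(X))):
        z = W @ X[k] + np.mean(B * S)         # shared recurrent term (columns of B equal)
        S_next = sigmoid(z)                   # truncated forward propagation
        y_hat = np.sum(C * S_next) / N**beta  # network output
        err = np.clip(y_hat, -M, M) - Y[k]    # psi^N(hat Y_k^N) - Y_k
        dS = S_next * (1.0 - S_next)          # sigma'(z) for the sigmoid
        C_new = C - alpha / N**(2 - beta) * err * S_next
        W = W - alpha / N**(2 - beta) * err * (C * dS)[:, None] * X[k][None, :]
        B = B - alpha / N**(3 - beta) * err * np.sum(C * dS)
        C, S = C_new, S_next                  # updates use the step-k parameters
    return C, W, B
```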
```
procedure SGDtBPTT(\(N,\lambda,T\))  \(\triangleright\) network size, initial parameter distribution, running time
    Initialise: parameters \(\theta=(C_{0},W_{0},B_{0})\sim\lambda\); memories \(\hat{S}_{0}^{i,N}=0\) for all \(i\); step \(k=0\)
    while \(k\leq NT\) do
        for all \(i\in\{1,2,...,N\}\) do  \(\triangleright\) Truncated forward propagation
            \(\hat{S}_{k+1}^{i,N}\leftarrow\sigma\left((W_{k}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}\hat{S}_{k}^{j,N}\right)\)  \(\triangleright\) Updating memory
        end for
        \(\hat{Y}_{k}^{N}\leftarrow\frac{1}{N^{\beta}}\sum_{i=1}^{N}C_{k}^{i}\hat{S}_{k+1}^{i,N}\)  \(\triangleright\) Updating output
        \(\hat{L}_{k}=\frac{1}{2}(\hat{Y}_{k}^{N}-Y_{k})^{2}\)  \(\triangleright\) Computing loss
        for all \(i\in\{1,2,...,N\}\) do  \(\triangleright\) Truncated backward propagation on \(\hat{L}_{k}\)
            \(\Delta\hat{S}_{k+1}^{i,N}\leftarrow\sigma^{\prime}\left((W_{k}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}\hat{S}_{k}^{j,N}\right)\)
            \(C_{k+1}^{i}=C_{k}^{i}-\frac{\alpha}{N^{2-\beta}}(\psi^{N}(\hat{Y}_{k}^{N})-Y_{k})\hat{S}_{k+1}^{i,N}\)
            \(W_{k+1}^{i}=W_{k}^{i}-\frac{\alpha C_{k}^{i}}{N^{2-\beta}}(\psi^{N}(\hat{Y}_{k}^{N})-Y_{k})\Delta\hat{S}_{k+1}^{i,N}X_{k}\)
            \(B_{k+1}^{i}=B_{k}^{i}-\frac{\alpha}{N^{3-\beta}}(\psi^{N}(\hat{Y}_{k}^{N})-Y_{k})\sum_{\ell=1}^{N}C_{k}^{\ell}\Delta\hat{S}_{k+1}^{\ell,N}\)
        end for
        \(k\leftarrow k+1\)
    end while
end procedure
```
**Algorithm 1** Online SGD with tBPTT (\(\tau=1\)) Lemma 2.8 yields a bound (uniform in \(i\)) on how much the parameters move. **Lemma 2.8**.: _Fix \(T>0\). If we choose \(\gamma>0\) such that \(\beta+2\gamma<1\), then for all \(k\) such that \(k/N\leq T\) we have the following sure bounds:_ \[|C_{k}^{i}-C_{0}^{i}|+\left\|W_{k}^{i}-W_{0}^{i}\right\|+|B_{k}^{i}-B_{0}^{i}|<\frac{C_{T}}{N^{1-\beta-\gamma}}, \tag{2.4}\] _where \(C_{T}>0\) is a constant that depends on \(T\)._ Proof.: See Section 6.1. _Remark 2.9_.: One can also show, in the absence of the clipping function \(\psi^{N}\), that \[|C^{i}_{k+1}-C^{i}_{k}|+\left\|W^{i}_{k+1}-W^{i}_{k}\right\|+|B^{i}_{k+1}-B^{i}_{k}|\leq\frac{C_{T}}{N}, \tag{2.5}\] and hence, by a telescoping sum argument, \[|C^{i}_{k}-C^{i}_{0}|+\left\|W^{i}_{k}-W^{i}_{0}\right\|+|B^{i}_{k}-B^{i}_{0}|\leq C_{T}.\] The proof of this remark is very similar; see Section 6.1. However, this does not reflect our intuition that the trained parameters stay close to the initialisation. The clipping function is crucial here in justifying the linearisation of the sample outputs with respect to the parameters in our analysis. Define the empirical measure of \((C^{i}_{k},W^{i}_{k},B^{i}_{k})\) as \(\lambda^{N}_{k}\), i.e. \[\lambda^{N}_{k}=\frac{1}{N}\sum_{i=1}^{N}\delta_{C^{i}_{k},W^{i}_{k},B^{i}_{k}}.\] Lemma 2.8 says that the parameters at step \(k\) should be close to the initial parameters on average, so \(\lambda^{N}_{k}\) should be well-approximated by \(\lambda^{N}_{0}\) in some sense. We therefore expect that for \(k\leq\left\lfloor TN\right\rfloor\), \(\lambda^{N}_{k}\) converges to \(\lambda\) (the distribution of the initialisation) in a sense to be specified. However, establishing such convergence using our traditional way of studying its limiting ODE (see e.g. [20]) is difficult. To that end, the evolution of \[\left\langle f,\lambda^{N}_{k}\right\rangle=\int_{\mathbb{R}^{1+d+1}}f(c,w,b)\,\lambda^{N}_{k}(dc,dw,db)=\frac{1}{N}\sum_{i=1}^{N}f(C^{i}_{k},W^{i}_{k},B^{i}_{k}),\] i.e.
the inner-product of the empirical distribution with a smooth test function \(f\in C^{\infty}(\mathbb{R}^{1+d+1})\), does not look like a discretization of a differential equation. A new mathematical approach is therefore required to analyze the infinite-width RNN. Our result is divided into three parts: a result on the dynamics of the sampled memories through the network, the limiting ODE of the process \(\hat{Y}^{N}_{k}\), and an explanation, obtained by studying the limiting ODE, of how tBPTT decreases the loss function. ## 3 Dynamics of the memory units We characterise the dynamics of the memories as a sequence of random functions of the weights. Specifically, we have \(\hat{S}^{i,N}_{k+1}=v^{N}_{k+1}(W^{i}_{k})\), where \(v^{N}_{k+1}(w)\) satisfies the following recursion: \[v^{N}_{k+1}(w)=\sigma\left(w^{\top}X_{k}+\frac{1}{N}\sum_{i=1}^{N}B^{i}_{k}v^{N}_{k}(W^{i}_{k-1})\right),\quad v^{N}_{0}(w)=0. \tag{3.1}\] We will show that the sequence of random functions \(\left\{v^{N}_{k}(\cdot)\right\}_{k\geq 0}\) converges in distribution to some stationary distribution over a specific function space, as a result of the ergodicity of the underlying input and output dynamics and of the fact that the parameters do not move too far from their initial positions. Taking these observations into account, we study the joint process \(V^{N}_{k}=(X_{k},Z_{k},Y_{k},v^{N}_{k}(\cdot))\sim\nu^{N}_{k}\) as an element of the product space \[\mathcal{X}=\overline{B_{1}(0)}\times[-C_{y},C_{y}]\times\overline{\mathcal{H}},\] where \[\overline{B_{1}(0)}=\{(x,z)\,|\,|(x,z)|\leq 1\}, \tag{3.2}\] \(\mathcal{H}\) is the set \[\mathcal{H}=\left\{f(w)=\sigma(w^{\top}a+b)\,:\,a\in\mathbb{R}^{d},\,|a|\leq 1,\,b\in\mathbb{R}\right\},\] and \(\overline{\mathcal{H}}\) is the closure of \(\mathcal{H}\) with respect to the Sobolev norm (taken with respect to the distribution \(\lambda\) of the initialisation of \(w\)): \[\|h\|_{H^{1}(\lambda)}^{2}=\int_{\mathbb{R}^{d}}\left(|h(w)|^{2}+\left|\nabla h(w)\right|^{2}\right)\,\lambda(dw). \tag{3.3}\] We equip the space \(\mathcal{X}\) with the norm \(\left\|\cdot\right\|_{\mathcal{X}}\): \[\left\|(x,z,y,h)\right\|_{\mathcal{X}}^{2}=\left|x\right|^{2}+\left|z\right|^{2}+\left|y\right|^{2}+\left\|h\right\|_{H^{1}(\lambda)}^{2}, \tag{3.4}\] and the usual Borel \(\sigma\)-algebra \(\mathcal{B}(\mathcal{X})\) generated by the open subsets of \(\mathcal{X}\) with respect to this norm. **Lemma 3.1**.: _An element \(v\in\mathcal{H}\) satisfies_ \[\sup_{w}|v(w)|\leq 1,\quad\sup_{w}|\nabla v(w)|\leq C_{\sigma}<1,\quad\max_{i,j}\sup_{w}|\partial_{ij}v(w)|\leq C_{\sigma}<1.\] _As a result, we know that \(v\in H^{1}(\mathbb{R}^{d};\lambda)=:H^{1}(\lambda)\), i.e. \(\mathcal{H}\) is a (proper) subspace of \(H^{1}(\lambda)\)._ Proof.: The first inequality follows immediately from Assumption 2.3.
The second inequality holds since \[\nabla v(w)=\sigma^{\prime}(w^{\top}a+b)a\implies|\nabla v(w)|\leq|\sigma^{\prime}(w^{\top}a+b)||a|\leq C_{\sigma}.\] The third inequality follows by noting that \[\mathsf{Hess}\,v(w)=\sigma^{\prime\prime}(w^{\top}a+b)aa^{\top}\implies|\partial_{ij}v(w)|\leq|\sigma^{\prime\prime}(w^{\top}a+b)||a_{i}||a_{j}|\leq C_{\sigma}.\] Notice that this implies that \(v\in H^{1}(\lambda)\), as the \(\|v\|_{H^{1}(\lambda)}\) norm is well-defined and bounded: \[\|v\|_{H^{1}(\lambda)}^{2}=\int_{\mathbb{R}^{d}}\left(|v(w)|^{2}+|\nabla v(w)|^{2}\right)\,\lambda(dw)=\int_{\mathbb{R}^{d}}\left[\left|\sigma(w^{\top}a+b)\right|^{2}+\left|\sigma^{\prime}(w^{\top}a+b)\right|^{2}|a|^{2}\right]\,\lambda(dw)\leq\int_{\mathbb{R}^{d}}\left(1+C_{\sigma}^{2}\right)\lambda(dw)=1+C_{\sigma}^{2}<+\infty. \tag{3.5}\] In fact this yields the control \(\|v\|_{H^{1}(\lambda)}\leq\sqrt{1+C_{\sigma}^{2}}\), so \(\mathcal{H}\) is bounded in the \(H^{1}(\lambda)\)-norm. As a result, the closure \(\overline{\mathcal{H}}\) is a closed, bounded subset of \(H^{1}(\lambda)\), so \(\mathcal{X}\) is a closed subset of the Polish space \(\mathbb{R}^{d+1+1}\times H^{1}(\lambda)\) in the \(\mathcal{X}\)-norm, and hence a complete normed space as well. As the parameters do not move too much from their initial positions, we would expect \(\lambda_{k}^{N}\) to be close to \(\lambda_{0}^{N}\) in some sense when \(k\leq\left\lfloor NT\right\rfloor\). We also expect \(\lambda_{0}^{N}\) to be close to \(\lambda\) when \(N\) gets large. This motivates us to consider the following auxiliary process: \[H_{k}=(X_{k},Z_{k},Y_{k},h_{k}(\cdot))\sim\mu_{k},\] where \(h_{k}\) is governed by the following random iteration: \[h_{k+1}(w)=\sigma(w^{\top}X_{k}+\langle b^{\prime}h_{k}(w^{\prime}),\lambda\rangle)\in\mathcal{H},\quad h_{0}(w)=0. \tag{3.6}\] We will quantify the distance between \(v_{k}^{N}(w)\) and \(h_{k}(w)\) in the later part of this section; for now, let us focus on this auxiliary process. The process is a Markov chain of the form \(H_{k+1}=F(H_{k},\epsilon_{k},\eta_{k})\), where \[F:(x,z,y,h,\epsilon,\eta) \mapsto(g(x,z)+\epsilon,f(x,z)+\eta,\varsigma(x,z,y,h))\in\mathcal{X} \tag{3.7}\] \[\varsigma:(x,z,y,h) \mapsto\left[w\mapsto\sigma(w^{\top}x+\langle b^{\prime}h(w^{\prime}),\lambda\rangle)\right]\in\mathcal{H}\] As a result, the Markov chain possesses the transition kernel \(P:\mathcal{X}\times\mathcal{B}(\mathcal{X})\to\mathbb{R}_{\geq 0}\), where \[P(H,A)=\mathbb{E}_{\epsilon,\eta}(\mathbb{I}_{A}(F(H,\epsilon,\eta))).\] We note the following: **Lemma 3.2**.: _For any fixed realisation of \((\epsilon,\eta)\), the function \(F\) is \(q_{0}\)-Lipschitz continuous with \(q_{0}\leq\sqrt{L^{2}+8C_{\sigma}^{2}}<1\). Due to Assumptions 2.1 and 2.3 on \(L\) and \(C_{\sigma}\) respectively, we have \(q_{0}<1\)._ Proof.: See Section 6.2. This enables us to show that the Markov chain \((H_{k})_{k\geq 1}\) is ergodic, which is the objective of Subsection 3.1. ### Geometric Ergodicity of the auxiliary process Given a norm \(\left\|\cdot\right\|\) on a vector space \(\mathcal{X}\) such that \((\mathcal{X},\left\|\cdot\right\|)\) is Banach (i.e.
complete) and separable, we can define the associated \(p\)-Wasserstein distance for \(p\geq 1\) [25]: **Definition 3.3** (Wasserstein Metric).: Let \(\rho_{1},\rho_{2}\) be measures in the \(p\)-Wasserstein space of \((\mathcal{X},\left\|\cdot\right\|)\): \[\rho_{1},\rho_{2}\in\mathcal{P}_{p}^{\left\|\cdot\right\|}(\mathcal{X})=\left\{\rho\,:\,\int_{\mathcal{X}}\left\|u\right\|_{\mathcal{X}}^{p}\,\rho(du)<+\infty\right\}\] A measure \(\rho\) on \(\mathcal{X}\times\mathcal{X}\) is a _coupling_ between \(\rho_{1}\) and \(\rho_{2}\) if \[\rho_{1}(\cdot)=\rho(\cdot\times\mathcal{X}),\quad\rho_{2}(\cdot)=\rho(\mathcal{X}\times\cdot).\] The \(p\)-Wasserstein distance between \(\rho_{1}\) and \(\rho_{2}\) with respect to the norm \(\left\|\cdot\right\|\) is defined as the quantity \(\mathsf{Wass}_{p}^{\left\|\cdot\right\|}(\rho_{1},\rho_{2})\) satisfying \[\left[\mathsf{Wass}_{p}^{\left\|\cdot\right\|}(\rho_{1},\rho_{2})\right]^{p}=\inf_{\gamma\in\Gamma(\rho_{1},\rho_{2})}\int_{\mathcal{X}\times\mathcal{X}}\left\|u-v\right\|_{\mathcal{X}}^{p}\,\gamma(du\,dv), \tag{3.8}\] where \(\Gamma(\rho_{1},\rho_{2})\) is the set of all couplings between \(\rho_{1}\) and \(\rho_{2}\). The norm \(\left\|\cdot\right\|\) may be omitted from \(\mathsf{Wass}_{p}^{\left\|\cdot\right\|}(\rho_{1},\rho_{2})\) and \(\mathcal{P}_{p}^{\left\|\cdot\right\|}(\mathcal{X})\) if no confusion is created as a result. _Remark 3.4_.: Note that the infimum in (3.8) is a minimum [25], i.e. there is an optimal coupling \(\gamma^{*}\in\Gamma(\rho_{1},\rho_{2})\) such that \[\left[\mathsf{Wass}_{p}^{\left\|\cdot\right\|}(\rho_{1},\rho_{2})\right]^{p}=\int_{\mathcal{X}\times\mathcal{X}}\left\|u-v\right\|_{\mathcal{X}}^{p}\,\gamma^{*}(du\,dv). \tag{3.9}\] This fact will be used to prove bounds on the Wasserstein metric between successive terms of the sequence of distributions \(\mu_{k}\). Recall that the transition kernel \(P\) induces an operator on the space of all probability measures on \(\mathcal{X}\): \[\mu\stackrel{{ P^{\vee}}}{{\mapsto}}[P^{\vee}(\mu)](\cdot)=\int_{\mathcal{X}}P(\mathsf{H},\cdot)\,\mu(d\mathsf{H}).\] In particular, the sequence of measures \((\mu_{k})_{k\geq 0}\) can now be written as \(\mu_{k}=(P^{\vee})^{k}\mu_{0}\). Thanks to the \(q_{0}\)-Lipschitzness of the function \(F\), we can show the following control: **Lemma 3.5**.: \[\mathsf{Wass}_{2}(P(H,\cdot),P(H^{\prime},\cdot))\leq q_{0}\left\|H-H^{\prime}\right\|_{\mathcal{X}}.\] Proof.: We consider the joint random variable \(\tilde{F}_{H,H^{\prime}}=(F(H,\epsilon,\eta),F(H^{\prime},\epsilon,\eta))\). The first component has the same distribution as \(P(H,\cdot)\), and the second component has the same distribution as \(P(H^{\prime},\cdot)\). Therefore the distribution of \(\tilde{F}_{H,H^{\prime}}\), denoted \(\tilde{\gamma}_{H,H^{\prime}}\), is both a coupling between the measures \(P(H,\cdot)\) and \(P(H^{\prime},\cdot)\) and a valid \((H,H^{\prime})\)-measurable transition kernel. Therefore, by Lemma 3.2 we have \[\mathsf{Wass}_{2}(P(H,\cdot),P(H^{\prime},\cdot))\leq\left[\mathbb{E}_{\epsilon,\eta}\left\|F(H,\epsilon,\eta)-F(H^{\prime},\epsilon,\eta)\right\|_{\mathcal{X}}^{2}\right]^{1/2}\leq q_{0}\|H-H^{\prime}\|_{\mathcal{X}}. \tag{3.10}\] This leads to the following formal result on the convergence of \(\mu_{k}\).
**Theorem 3.6**.: _For all \(\mu,\nu\in\mathcal{P}_{2}(\mathcal{X})\),_ \[\mathsf{Wass}_{2}(P^{\vee}(\mu),P^{\vee}(\nu))\leq q_{0}\mathsf{Wass}_{2}(\mu,\nu),\] _where \(q_{0}<1\), and the 2-Wasserstein distance is taken with respect to the metric induced by the \(\left\|\cdot\right\|_{\mathcal{X}}\) norm defined in (3.4). As a result the operator \(P^{\vee}\) has a unique contractive fixed point \(\mu\), which is the stationary measure of the Markov chain \((H_{k})_{k\geq 0}\)._ Proof.: Let \(\gamma^{*}\) be the optimal coupling between \(\mu\) and \(\nu\), such that \[(\mathsf{Wass}_{2}(\mu,\nu))^{2}=\int_{\mathcal{X}}\left\|\mathsf{H}-\bar{\mathsf{H}}\right\|_{\mathcal{X}}^{2}\gamma^{*}(d\mathsf{H},d\bar{\mathsf{H}}). \tag{3.11}\] We consider the coupling \[\gamma(\cdot)=\int_{\mathcal{X}\times\mathcal{X}}\tilde{\gamma}_{\mathsf{H},\bar{\mathsf{H}}}(\cdot)\,\gamma^{*}(d\mathsf{H},d\bar{\mathsf{H}}).\] The integral is well defined, as \(\tilde{\gamma}_{\mathsf{H},\bar{\mathsf{H}}}(A)\) is a \((\mathsf{H},\bar{\mathsf{H}})\)-measurable function for any set \(A\in\mathcal{B}(\mathcal{X}\times\mathcal{X})\). One can directly check that \(\gamma\) is a measure. In particular, \(\gamma\) is a coupling of \(P^{\vee}(\mu)\) and \(P^{\vee}(\nu)\), as for all \(A,B\in\mathcal{B}(\mathcal{X})\) \[\gamma(A\times\mathcal{X}) =\int_{\mathcal{X}\times\mathcal{X}}\tilde{\gamma}_{\mathsf{H},\bar{\mathsf{H}}}(A\times\mathcal{X})\,\gamma^{*}(d\mathsf{H},d\bar{\mathsf{H}})=\int_{\mathcal{X}\times\mathcal{X}}P(\mathsf{H},A)\,\gamma^{*}(d\mathsf{H},d\bar{\mathsf{H}})=[P^{\vee}(\mu)](A), \tag{3.12}\] \[\gamma(\mathcal{X}\times B) =\int_{\mathcal{X}\times\mathcal{X}}\tilde{\gamma}_{\mathsf{H},\bar{\mathsf{H}}}(\mathcal{X}\times B)\,\gamma^{*}(d\mathsf{H},d\bar{\mathsf{H}})=\int_{\mathcal{X}\times\mathcal{X}}P(\bar{\mathsf{H}},B)\,\gamma^{*}(d\mathsf{H},d\bar{\mathsf{H}})=[P^{\vee}(\nu)](B).\] We therefore have \(\gamma\in\Gamma(P^{\vee}(\mu),P^{\vee}(\nu))\). Finally, since \[\gamma(A\times B)=\int_{\mathcal{X}\times\mathcal{X}}\tilde{\gamma}_{\mathsf{H},\bar{\mathsf{H}}}(A\times B)\,\gamma^{*}(d\mathsf{H},d\bar{\mathsf{H}})=\int_{\mathcal{X}\times\mathcal{X}}\int_{\mathcal{X}\times\mathcal{X}}\mathbb{I}_{A\times B}(H,\bar{H})\,\tilde{\gamma}_{\mathsf{H},\bar{\mathsf{H}}}(dH,d\bar{H})\,\gamma^{*}(d\mathsf{H},d\bar{\mathsf{H}}),\] one can extend the above formula to define the integral of \((H,\bar{H})\)-measurable functions \(c(h,\bar{h})\) with respect to the coupling \(\gamma\): \[\int_{\mathcal{X}\times\mathcal{X}}c(h,\bar{h})\,\gamma(dh,d\bar{h})=\int_{\mathcal{X}\times\mathcal{X}}\int_{\mathcal{X}\times\mathcal{X}}c(H,\bar{H})\,\tilde{\gamma}_{\mathsf{H},\bar{\mathsf{H}}}(dH,d\bar{H})\,\gamma^{*}(d\mathsf{H},d\bar{\mathsf{H}}),\] so we can use this formula with \(c(h,\bar{h})=\|h-\bar{h}\|_{\mathcal{X}}^{2}\) to conclude that \[[\mathsf{Wass}_{2}(P^{\vee}(\mu),P^{\vee}(\nu))]^{2}\leq\int_{\mathcal{X}\times\mathcal{X}}\int_{\mathcal{X}\times\mathcal{X}}\|H-\bar{H}\|_{\mathcal{X}}^{2}\,\tilde{\gamma}_{\mathsf{H},\bar{\mathsf{H}}}(dH,d\bar{H})\,\gamma^{*}(d\mathsf{H},d\bar{\mathsf{H}})\stackrel{{(3.10)}}{{\leq}}\int_{\mathcal{X}\times\mathcal{X}}q_{0}^{2}\|\mathsf{H}-\bar{\mathsf{H}}\|_{\mathcal{X}}^{2}\,\gamma^{*}(d\mathsf{H},d\bar{\mathsf{H}})\stackrel{{(3.11)}}{{=}}q_{0}^{2}[\mathsf{Wass}_{2}(\mu,\nu)]^{2}. \tag{3.13}\] Therefore \(P^{\vee}\) is indeed a contraction in the space \(\mathcal{P}_{2}(\mathcal{X})\).
Since \(\mathcal{X}\) is a complete space with respect to the \(\|\cdot\|_{\mathcal{X}}\) norm, so is \(\mathcal{P}_{2}(\mathcal{X})\) with respect to the \(\mathsf{Wass}_{2}\) metric [25], so by the Banach fixed point theorem \(P^{\vee}\) admits a unique fixed point \(\mu\), such that \[P^{\vee}(\mu)=\mu.\] By the definition of \(P\), we see that \(\mu\) is the unique stationary measure of the Markov chain \((H_{k})_{k\geq 0}\), so that for any initial distribution \(\mu_{0}\) and induced sequence \((\mu_{k})_{k\geq 0}\) we have \[\mathsf{Wass}_{2}(\mu_{k},\mu)\leq q_{0}^{k}\,\mathsf{Wass}_{2}(\mu_{0},\mu).\] This completes the proof of Theorem 3.6. ### Bounds on auxiliary processes We introduce a further auxiliary sequence \(H_{k}^{N}=(X_{k},Z_{k},Y_{k},h_{k}^{N}(\cdot))\) for our proof, where the random sequence \(\left(h_{k}^{N}(\cdot)\right)_{k\geq 0}\) evolves according to the equation \[h_{k+1}^{N}(w)=\sigma(w^{\top}X_{k}+\langle b^{\prime}h_{k}^{N}(w^{\prime}),\lambda^{N}\rangle),\quad h_{0}^{N}(w)\equiv 0. \tag{3.14}\] _Remark 3.7_.: We can now express the hidden memory units in terms of the auxiliary processes. In fact, * The trained hidden memory units satisfy \(\hat{S}_{k}^{i,N}=v_{k}^{N}(W_{k-1}^{i})\) for \(k\geq 1\). The base case holds as \(\hat{S}_{1}^{i,N}=v_{1}^{N}(W_{0}^{i})=\sigma((W_{0}^{i})^{\top}X_{0})\), and we can inductively prove that \[\hat{S}_{k+1}^{i,N}=v_{k+1}^{N}(W_{k}^{i})=\sigma\left((W_{k}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}\hat{S}_{k}^{j,N}\right)=\sigma\left((W_{k}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}v_{k}^{N}(W_{k-1}^{j})\right),\] * The untrained hidden memory units satisfy \(S_{k}^{i,N}(X;\theta_{0})=h_{k}^{N}(W^{i})\) for \(k\geq 1\) and \(\theta_{0}=(C^{i},W^{i},B^{i})\). This is shown by noting that for \(k=1\) we have \(S_{1}^{i,N}(X;\theta)=h_{1}^{N}(W^{i})=\sigma((W^{i})^{\top}X_{0})\), and inductively, \[S_{k+1}^{i,N}(X;\theta) =\sigma\left((W^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B^{j}S_{k}^{j,N}(X;\theta)\right)=\sigma\left((W^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B^{j}h_{k}^{N}(W^{j})\right)=\sigma\left((W^{i})^{\top}X_{k}+\langle b^{\prime}h_{k}^{N}(w^{\prime}),\,\lambda^{N}\rangle\right)=h_{k+1}^{N}(W^{i}).\] Since we already know that the individual parameters \(C_{k}^{i},W_{k}^{i},B_{k}^{i}\) are close to their initial values \(C^{i},W^{i},B^{i}\) by Lemma 2.8, we should expect the random functions \(v_{k}^{N}(\cdot)\) to get close to the functions \(h_{k}^{N}(\cdot)\) as \(N\to+\infty\). In addition, we should also expect \(h_{k}^{N}(\cdot)\) to be close to the random functions \(h_{k}(\cdot)\) as \(N\to+\infty\). This is useful in proving the weak convergence results in the next section.
Let us formalise the above heuristics by studying the difference \[\Gamma_{k}^{N}(w)=v_{k}^{N}(w)-h_{k}(w)=\Gamma_{k}^{N,1}(w)+\Gamma_{k}^{N,2}(w),\] where \[\Gamma_{k}^{N,1}(w)=h_{k}^{N}(w)-h_{k}(w), \tag{3.15}\] \[\Gamma_{k}^{N,2}(w)=v_{k}^{N}(w)-h_{k}^{N}(w).\] We will show the following. **Proposition 3.8**.: _Writing \(a\lor b=\max(a,b)\) and \(a\wedge b=\min(a,b)\), we have for \(k\leq\lfloor NT\rfloor\),_ \[\mathbb{E}\left\|\Gamma_{k}^{N,1}(w)\right\|_{H^{1}(\lambda)}^{2}\vee\max_{i}\mathbb{E}\left[\Gamma_{k}^{N,1}(W^{i})\right]^{2}\leq\frac{C}{N}, \tag{3.16}\] _for some constant \(C>0\), and_ \[\mathbb{E}\left\|\Gamma_{k}^{N,2}(w)\right\|_{H^{1}(\lambda)}^{2}\vee\max_{i}\mathbb{E}\left[\Gamma_{k}^{N,2}(W^{i})\right]^{2}\leq\frac{C_{T}}{N^{2-2\beta-2\gamma}}, \tag{3.17}\] _where \(C_{T}>0\) is another constant depending on \(T\). Therefore for \(k\leq\lfloor NT\rfloor\),_ \[\mathbb{E}\left\|\Gamma_{k}^{N}(w)\right\|_{H^{1}(\lambda)}^{2}\vee\max_{i}\mathbb{E}\left[\Gamma_{k}^{N}(W^{i})\right]^{2}\leq 4C_{\sigma}^{2}\left(\frac{C}{N}+\frac{C_{T}}{N^{2-2\beta-2\gamma}}\right)\leq\frac{C_{T}}{N^{2-2\beta-2\gamma}}. \tag{3.18}\] Proof.: See Section 6.3. As an immediate consequence of the above proposition, we see that the empirical distribution of the untrained memory units converges. This gives a formal statement of the mean-field behaviour of the distribution of the untrained memory units. ### Numerical simulations To illustrate the above results, we simulated 100 paths of the untrained hidden memory units, denoted \(h_{k}^{N,\mathsf{path}}(W^{i})=S_{k}^{i,N}(X^{\mathsf{path}};\theta)\), each based on the common parameters \(\theta=(C^{i},W^{i},B^{i})\) and an independent instance of the input sequence \(X^{\mathsf{path}}\), for \(\mathsf{path}=1,2,...,100\). The actual input sequence is simulated from the following 2D random dynamical system: \[\begin{bmatrix}X_{k+1}\\ Z_{k+1}\end{bmatrix}=\frac{1}{2}\tanh\left(P\begin{bmatrix}1&0\\ 0&1/2\end{bmatrix}P^{-1}\begin{bmatrix}X_{k}\\ Z_{k}\end{bmatrix}\right),\quad P=\begin{bmatrix}\sqrt{3}/2&1/2\\ -1/2&\sqrt{3}/2\end{bmatrix},\;X_{0},Z_{0}\stackrel{{\text{iid}}}{{\sim}}\mathsf{Uniform}[0,1]. \tag{3.19}\] We choose the standard sigmoid function as our activation function, and \((B^{i},W^{i})\) are simulated so that \(B^{i}\stackrel{{\text{iid}}}{{\sim}}\mathsf{Uniform}[0,1]\) and \(W^{i}\stackrel{{\text{iid}}}{{\sim}}\mathsf{Normal}[0,1]\).1 Footnote 1: We use \(\mathsf{Uniform}[0,1]\) and \(\mathsf{Normal}[0,1]\) to denote the uniform distribution on \([0,1]\) and the standard normal distribution, respectively. The mean-field behaviour can be studied by looking at the empirical distribution of the untrained hidden units for each path (represented by the grey lines in Figure 3): \[\nu_{k}^{N,\mathsf{path}}=\frac{1}{N}\sum_{i=1}^{N}\delta_{h_{k}^{N,\mathsf{path}}(W^{i})},\] as well as the overall distribution of the hidden units from all paths, as a Monte Carlo estimate of the expected distribution of \(h_{k}^{N}(W^{i})\) (represented by the red lines in Figure 3): \[\nu_{k}^{N,\mathsf{overall}}=\frac{1}{100}\sum_{\mathsf{path}=1}^{100}\nu_{k}^{N,\mathsf{path}}=\frac{1}{100N}\sum_{\mathsf{path}=1}^{100}\sum_{i=1}^{N}\delta_{h_{k}^{N,\mathsf{path}}(W^{i})}.\] The convergence of the distribution of \(h_{k}^{N}(W^{i})\) as \(N\to\infty\) is illustrated by noting that the above plots are similar to each other for large \(N\). Figure 1 in the introduction is plotted by stacking all overall empirical distributions \(\nu_{k}^{N,\mathsf{overall}}\) on the same plot for easier comparison. The above proposition provides a solid foundation for this numerical experiment.
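The simulation itself reduces to iterating (3.14) alongside the input dynamics (3.19). The following is a compact NumPy sketch for a single path; the sizes, the number of steps, and all variable names are our own choices, and the official simulation code lives at the repository linked below.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 10_000, 5_000                 # hidden units, time steps (small, for speed)
P = np.array([[np.sqrt(3) / 2, 0.5], [-0.5, np.sqrt(3) / 2]])
A = P @ np.diag([1.0, 0.5]) @ np.linalg.inv(P)   # the matrix in (3.19)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

B = rng.uniform(0.0, 1.0, N)         # B^i ~ Uniform[0,1]
W = rng.normal(0.0, 1.0, N)          # W^i ~ Normal[0,1]; the input here is scalar

xz = rng.uniform(0.0, 1.0, 2)        # (X_0, Z_0)
h = np.zeros(N)                      # untrained hidden memory units, h_0^N = 0
time_avg = np.zeros(2)
for k in range(K):
    h = sigmoid(W * xz[0] + np.mean(B * h))  # update (3.14) with the empirical measure
    xz = 0.5 * np.tanh(A @ xz)               # input dynamics (3.19)
    time_avg += np.array([h.mean(), (h**2).mean()])
print(time_avg / K)   # time-averaged first/second moments, cf. timeAvg below
```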
Numerically, the hidden memory units \(h_{k}^{N}(W^{i})=S_{k}^{i,N}(X;\theta)\) are easier to simulate than the function \(h_{k}(\cdot)\), as the former do not involve the exact computation of \(\langle b^{\prime}h_{k}(w^{\prime}),\lambda\rangle\). The above proposition implies that, for sufficiently large \(N\), the untrained memory units \(h_{k}^{N}(W^{i})=S_{k}^{i,N}(X;\theta)\) exhibit similar ergodic behaviour due to the ergodicity of \(h_{k}\). To illustrate this, we compute the time average of the empirical first and second moments of the sampled hidden memory units \((S_{k}^{i,N}(X;\theta))_{k\geq 0}\) for each path, defined as \[\mathsf{timeAvg}_{T}^{N,p,\mathsf{path}}=\frac{1}{NT}\sum_{k=0}^{T}\sum_{i=1}^{N}\left[h_{k}^{N,\mathsf{path}}(W^{i})\right]^{p},\quad p=1,2,\quad\mathsf{path}=1,...,100,\] as well as the overall time average of the empirical first and second moments: \[\mathsf{timeAvg}_{T}^{N,p,\mathsf{overall}}=\frac{1}{100}\sum_{\mathsf{path}=1}^{100}\mathsf{timeAvg}_{T}^{N,p,\mathsf{path}}.\] They are both plotted in Figure 4, with the red line being \(\mathsf{timeAvg}_{T}^{N,p,\mathsf{overall}}\); all realisations of \(\mathsf{timeAvg}_{T}^{N,p,\mathsf{path}}\) lie within the grey band. The shrinking of the grey band towards the red line is further evidence of mean-field behaviour, and the convergence of \(\mathsf{timeAvg}_{T}^{N,p,\mathsf{overall}}\) demonstrates the ergodicity of the untrained hidden memory units. Figure 2 in the introduction is the enlarged version of Figure 4 for \(N=10^{6}\). Please refer to [https://github.com/Samuel-CHLam/rnn_ergodicity](https://github.com/Samuel-CHLam/rnn_ergodicity) for the code of the simulation. Figure 3: Empirical distributions of the untrained hidden memory units \(h_{k}^{N}(W^{i})\) for varying \(N\) and large time step \(k\approx 50000\). The grey lines represent the empirical distribution \(\nu_{k}^{N,\mathsf{path}}\) for a _single_ path of the untrained hidden memory units, and the red line represents the empirical distribution \(\nu_{k}^{N,\mathsf{overall}}\) of all untrained hidden memory units from _all_ paths. Figure 4: The plot of time averages for \(N=10^{k},k=2,3,4,5,6\) and \(p=1,2\). The actual realisations of \(\mathsf{timeAvg}_{T}^{N,p,\mathsf{path}}\) lie in the grey band, and the red line is the overall time average \(\mathsf{timeAvg}_{T}^{N,p,\mathsf{overall}}\), a Monte Carlo estimate of \(\mathbb{E}[\mathsf{timeAvg}_{T}^{N,p}]\). The desired converging behaviour is only exhibited when \(N\) is sufficiently large. ## 4 Dynamics of the RNN output ### Pre-limit evolution To study the dynamics of the sequence \(\hat{Y}_{k}^{N}\), we first study the auxiliary sequence of functions \[g_{k}^{N}:h\in\mathcal{H}\mapsto g^{N}(h;\theta_{k}):=\frac{1}{N^{\beta}}\sum_{i=1}^{N}C_{k}^{i}h(W_{k}^{i})=N^{1-\beta}\left\langle h(w)c,\,\lambda_{k}^{N}\right\rangle.\] Notice that \(g_{k}^{N}(\cdot)\) is a sequence of linear functionals on \(\mathcal{H}\) that evolves with the time step \(k\). We further note that \(\hat{S}_{k+1}^{i,N}=v_{k+1}^{N}(W_{k}^{i})\), so \[\hat{Y}_{k}^{N}=g_{k}^{N}(v_{k+1}^{N}(\cdot)).\] Using a Taylor expansion, we derive the following evolution equation for \(g_{k}^{N}(h)\) for a fixed test function \(h\):
**Lemma 4.1**.: _For all \(h\in\mathcal{H}\), let_ \[\triangle g_{k}^{N}(h) =g_{k+1}^{N}(h)-g_{k}^{N}(h),\] \[\delta^{(1)}g_{k}^{N}(h) =-\frac{\alpha}{N^{2}}(\psi^{N}(\hat{Y}_{k}^{N})-Y_{k})\sum_{i=1}^{N}\left[\hat{S}_{k+1}^{i,N}h(W_{k}^{i})+(C_{k}^{i})^{2}\Delta\hat{S}_{k+1}^{i,N}\nabla h(W_{k}^{i})^{\top}X_{k}\right]\] \[=-\frac{\alpha}{N}(\psi^{N}(\hat{Y}_{k}^{N})-Y_{k})\left\langle v_{k+1}^{N}(w)h(w)+c^{2}\sigma^{\prime}\left(w^{\top}X_{k}+\langle b\,v_{k}^{N}(w),\,\lambda_{k}^{N}\rangle\right)\nabla h(w)^{\top}X_{k},\,\lambda_{k}^{N}\right\rangle. \tag{4.1}\] _Then for all \(k\leq\lfloor NT\rfloor\),_ \[\left|\triangle g_{k}^{N}(h)-\delta^{(1)}g_{k}^{N}(h)\right|\leq\frac{C_{T}}{N^{3-\beta-2\gamma}}.\] Proof.: See Section 6.4. We will prove that the RNN converges to a continuous-time limit equation on \([0,T]\) as the number of hidden units \(N\to\infty\), where the number of time steps is \(k=0,1,\ldots,\lfloor TN\rfloor\). The following time-rescaling is defined to embed (4.1) in continuous time: \(g_{t}^{N}(h)=g_{\lfloor Nt\rfloor}^{N}(h;\theta_{\lfloor Nt\rfloor})\). Using a telescoping series applied to (4.1) and re-writing in terms of the empirical measure \(\lambda_{k}^{N}\), one can conclude that \[\left|g_{t}^{N}(h)-g_{0}^{N}(h)-\sum_{k=0}^{\lfloor Nt\rfloor-1}\delta^{(1)}g_{k}^{N}(h)\right|=\left|\sum_{k=0}^{\lfloor Nt\rfloor-1}(\triangle g_{k}^{N}(h)-\delta^{(1)}g_{k}^{N}(h))\right|\leq\sum_{k=0}^{\lfloor Nt\rfloor-1}\left|\triangle g_{k}^{N}(h)-\delta^{(1)}g_{k}^{N}(h)\right|\leq Nt\times\frac{C_{t}}{N^{3-\beta-2\gamma}}\leq\frac{C_{t}}{N^{2-\beta-2\gamma}}. \tag{4.2}\] The sum in (4.2) is an approximation to a Riemann time integral with error \(\mathcal{O}(N^{-1})\), so one would expect that, for \(t\in[0,T]\), the evolution of \(g_{t}^{N}(h)\) for fixed \(h\) converges in distribution to the solution of a linear infinite-dimensional ODE. Formally, we have the following: **Theorem 4.2**.: _Let \(T<\infty\) be given and \(t\leq T\). Define the infinite-dimensional equation for \(g_{t}\in(H^{1}(\lambda))^{*}\), the dual space of \(H^{1}(\lambda)\), such that for the test function \(h\in H^{1}(\lambda)\):_ \[g_{t}(h) =-\alpha\int_{0}^{t}\left[\int_{\mathcal{X}}(g_{s}(\varsigma(\mathsf{H}))-\mathsf{y})\,\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\,\mu(d\mathsf{H})\right]ds,\quad g_{0}(h)=0,\] \[\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h}) =\left\langle\sigma(w^{\top}\mathsf{x}+\langle b^{\prime}\mathsf{h}(w^{\prime}),\lambda\rangle)h(w)+c^{2}\sigma^{\prime}(w^{\top}\mathsf{x}+\langle b^{\prime}\mathsf{h}(w^{\prime}),\lambda\rangle)\nabla h(w)^{\top}\mathsf{x},\,\lambda\right\rangle,\] \[[\varsigma(\mathsf{H})](w) =\sigma(w^{\top}\mathsf{x}+\langle b^{\prime}\mathsf{h}(w^{\prime}),\,\lambda\rangle) \tag{4.3}\] _where \(\mathsf{H}=(\mathsf{x},\mathsf{z},\mathsf{y},\mathsf{h})\), and \(\mu\) is the stationary distribution of the Markov chain \((H_{k})_{k\geq 0}\) obtained in Theorem 3.6. Then, for \(\gamma>0\) sufficiently small and any \(T<+\infty\), we have_ \[\sup_{t\in[0,T]}\sup_{h\in\mathcal{H}}\mathbb{E}\left[\left|g_{t}^{N}(h)-g_{t}(h)\right|\right]\leq\frac{C_{T}}{N^{\epsilon}},\] _where \(\epsilon=(1-\beta-2\gamma)\wedge\gamma\wedge(\beta-1/2)\wedge 1/2>0\)._ Proof.: The well-posedness of the ODE is discussed in Section 6.5, and the actual proof of weak convergence is deferred to Section 7. ## 5 Limit RNN minimises the average loss Let us analyse the deterministic limiting ODE.
Again we let \(\mathsf{H}=(\mathsf{x},\mathsf{z},\mathsf{y},\mathsf{h})\) and \(\tilde{\mathsf{H}}=(\tilde{\mathsf{x}},\tilde{\mathsf{z}},\tilde{\mathsf{y}},\tilde{\mathsf{h}})\). If we integrate the limit equation (4.3) over \(\mathcal{X}\) against the limiting measure \(\mu\), we obtain \[\int_{\mathcal{X}}g_{t}(\varsigma(\tilde{\mathsf{H}}))\,\mu(d\tilde{\mathsf{H}}) =\int_{\mathcal{X}}\,\left[-\alpha\int_{0}^{t}\int_{\mathcal{X}}(g_{s}(\varsigma(\mathsf{H}))-\mathsf{y})\mathcal{K}_{\mathsf{x},\lambda}(\varsigma(\tilde{\mathsf{H}}),\mathsf{h})\,\mu(d\mathsf{H})\,ds\right]\mu(d\tilde{\mathsf{H}})\overset{(\text{Fubini})}{=}-\alpha\int_{0}^{t}\int_{\mathcal{X}}\int_{\mathcal{X}}(g_{s}(\varsigma(\mathsf{H}))-\mathsf{y})\mathcal{K}_{\mathsf{x},\lambda}(\varsigma(\tilde{\mathsf{H}}),\mathsf{h})\,\mu(d\mathsf{H})\,\mu(d\tilde{\mathsf{H}})\,ds, \tag{5.1}\] which yields \[\frac{d}{dt}\int_{\mathcal{X}}g_{t}(\varsigma(\tilde{\mathsf{H}}))\,\mu(d\tilde{\mathsf{H}})=-\alpha\int_{\mathcal{X}}\int_{\mathcal{X}}(g_{t}(\varsigma(\mathsf{H}))-\mathsf{y})\mathcal{K}_{\mathsf{x},\lambda}(\varsigma(\tilde{\mathsf{H}}),\mathsf{h})\,\mu(d\mathsf{H})\,\mu(d\tilde{\mathsf{H}}).\] Notice that the integrand on the left-hand side of (5.1) is \(g_{t}(\varsigma(\tilde{\mathsf{H}}))\) and not \(g_{t}(\tilde{\mathsf{h}})\). This is because \([\varsigma(\tilde{\mathsf{H}})](w)=\sigma(w^{\top}\tilde{\mathsf{x}}+\langle b^{\prime}\tilde{\mathsf{h}}(w^{\prime}),\,\lambda\rangle)\) mimics the limiting behaviour of the pre-limit memory state (1.1). Now, we can further compute that \[\frac{d}{dt}\left[\frac{1}{2}\int_{\mathcal{X}}(g_{t}(\varsigma(\tilde{\mathsf{H}}))-\tilde{\mathsf{y}})^{2}\,\mu(d\tilde{\mathsf{H}})\right]=\int_{\mathcal{X}}\left[(g_{t}(\varsigma(\tilde{\mathsf{H}}))-\tilde{\mathsf{y}})\frac{dg_{t}(\varsigma(\tilde{\mathsf{H}}))}{dt}\right]\,\mu(d\tilde{\mathsf{H}})=-\alpha\int_{\mathcal{X}}\int_{\mathcal{X}}\mathcal{K}_{\mathsf{x},\lambda}(\varsigma(\tilde{\mathsf{H}}),\mathsf{h})(g_{t}(\varsigma(\tilde{\mathsf{H}}))-\tilde{\mathsf{y}})(g_{t}(\varsigma(\mathsf{H}))-\mathsf{y})\,\mu(d\mathsf{H})\,\mu(d\tilde{\mathsf{H}}) \tag{5.2}\] For convenience we define the kernel on \(\mathcal{X}\): \[\tilde{\mathcal{K}}(\tilde{\mathsf{H}},\mathsf{H}) =\mathcal{K}_{\mathsf{x},\lambda}(\varsigma(\tilde{\mathsf{H}}),\mathsf{h})=\left\langle\sigma(w^{\top}\mathsf{x}+\langle b^{\prime}\mathsf{h}(w^{\prime}),\,\lambda\rangle)[\varsigma(\tilde{\mathsf{H}})](w)+c^{2}\sigma^{\prime}(w^{\top}\mathsf{x}+\langle b^{\prime}\mathsf{h}(w^{\prime}),\,\lambda\rangle)\nabla[\varsigma(\tilde{\mathsf{H}})](w)^{\top}\mathsf{x},\,\lambda\right\rangle=\left\langle\sigma(w^{\top}\mathsf{x}+\langle b^{\prime}\mathsf{h}(w^{\prime}),\,\lambda\rangle)\sigma(w^{\top}\tilde{\mathsf{x}}+\langle b^{\prime}\tilde{\mathsf{h}}(w^{\prime}),\,\lambda\rangle)+c^{2}\sigma^{\prime}(w^{\top}\mathsf{x}+\langle b^{\prime}\mathsf{h}(w^{\prime}),\,\lambda\rangle)\sigma^{\prime}(w^{\top}\tilde{\mathsf{x}}+\langle b^{\prime}\tilde{\mathsf{h}}(w^{\prime}),\,\lambda\rangle)\mathsf{x}^{\top}\tilde{\mathsf{x}},\,\lambda\right\rangle\] We can now prove that the loss is monotonically decreasing during training.
**Proposition 5.1** (Minimisation of averaged square loss).: \[\frac{d}{dt}\left[\frac{1}{2}\int_{\mathcal{X}}(g_{t}(\varsigma(\tilde{\mathsf{H}}))-\tilde{\mathsf{y}})^{2}\,\mu(d\tilde{\mathsf{H}})\right]\leq 0. \tag{5.3}\] Proof.: By straightforward computation, we have \[\frac{d}{dt}\left[\frac{1}{2}\int_{\mathcal{X}}(g_{t}(\varsigma(\tilde{\mathsf{H}}))-\tilde{\mathsf{y}})^{2}\,\mu(d\tilde{\mathsf{H}})\right]=-\alpha\int_{\mathcal{X}}\int_{\mathcal{X}}\tilde{\mathcal{K}}(\tilde{\mathsf{H}},\mathsf{H})(g_{t}(\varsigma(\tilde{\mathsf{H}}))-\tilde{\mathsf{y}})(g_{t}(\varsigma(\mathsf{H}))-\mathsf{y})\,\mu(d\mathsf{H})\,\mu(d\tilde{\mathsf{H}})=-\alpha\int_{\mathcal{X}}\int_{\mathcal{X}}\left[\left\langle\sigma(w^{\top}\mathsf{x}+\langle b^{\prime}\mathsf{h}(w^{\prime}),\,\lambda\rangle)\sigma(w^{\top}\tilde{\mathsf{x}}+\langle b^{\prime}\tilde{\mathsf{h}}(w^{\prime}),\,\lambda\rangle),\,\lambda\right\rangle+\left\langle c^{2}\sigma^{\prime}(w^{\top}\mathsf{x}+\langle b^{\prime}\mathsf{h}(w^{\prime}),\,\lambda\rangle)\sigma^{\prime}(w^{\top}\tilde{\mathsf{x}}+\langle b^{\prime}\tilde{\mathsf{h}}(w^{\prime}),\,\lambda\rangle)\mathsf{x}^{\top}\tilde{\mathsf{x}},\,\lambda\right\rangle\right](g_{t}(\varsigma(\tilde{\mathsf{H}}))-\tilde{\mathsf{y}})(g_{t}(\varsigma(\mathsf{H}))-\mathsf{y})\,\mu(d\mathsf{H})\,\mu(d\tilde{\mathsf{H}})=-\alpha\Bigg{\langle}\left(\int_{\mathcal{X}}\sigma(w^{\top}\mathsf{x}+\langle b^{\prime}\mathsf{h}(w^{\prime}),\,\lambda\rangle)(g_{t}(\varsigma(\mathsf{H}))-\mathsf{y})\,\mu(d\mathsf{H})\right)^{2}+c^{2}\left|\int_{\mathcal{X}}\sigma^{\prime}(w^{\top}\mathsf{x}+\langle b^{\prime}\mathsf{h}(w^{\prime}),\,\lambda\rangle)(g_{t}(\varsigma(\mathsf{H}))-\mathsf{y})\mathsf{x}\,\mu(d\mathsf{H})\right|^{2},\,\lambda\Bigg{\rangle}\leq 0,\] proving the claim. Therefore, the function \(t\mapsto\frac{1}{2}\int_{\mathcal{X}}(g_{t}(\varsigma(\tilde{\mathsf{H}}))-\tilde{\mathsf{y}})^{2}\,\mu(d\tilde{\mathsf{H}})\) is decreasing. This is a useful theoretical guarantee that emerges from the limit analysis. The pre-limit training algorithm (tBPTT) truncates the chain rule (see Algorithm 1), and therefore it is not guaranteed that parameter updates are in a descent direction for the loss function. That is, in principle, the loss (for the long-run distribution of the data sequence) may actually increase when the parameters are updated. Proposition 5.1 proves that, as \(N,k\to\infty\), the RNN model will be updated in a descent direction for the loss function. ## 6 Technical Lemmas The unimportant constants \(C,C_{T}>0\) may change from line to line, and we allow the constants \(C_{T}\) to depend on \(T\). ### A-priori control of the parameter increments **Proof of Lemma 2.8.** Assumption 2.3 on the activation function \(\sigma\) implies that for all \(k\in\mathbb{N}\) \[\hat{S}^{i,N}_{k+1} =\sigma\left((W^{i}_{k})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B^{j}_{k}\hat{S}^{j,N}_{k}\right)\leq 1 \tag{6.1}\] \[\Delta\hat{S}^{i,N}_{k+1} =\sigma^{\prime}\left((W^{i}_{k})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B^{j}_{k}\hat{S}^{j,N}_{k}\right)\leq C_{\sigma}, \tag{6.2}\] where \((\hat{S}^{i,N}_{k})_{k\geq 0}\) and \((\Delta\hat{S}^{i,N}_{k})_{k\geq 1}\) are as specified in Algorithm 1.
Combining (6.1)-(6.2) with the update rule of Algorithm 1, we have \[\left|C^{i}_{k+1}-C^{i}_{k}\right|\leq\frac{\alpha}{N^{2-\beta}}\left|\hat{S}^{i,N}_{k+1}\right|\left(|\psi^{N}(\hat{Y}^{N}_{k})|+|Y_{k}|\right)\leq\frac{\alpha}{N^{2-\beta}}(2N^{\gamma}+C_{y})\leq\frac{C}{N^{2-\beta-\gamma}}, \tag{6.3}\] so by a telescopic sum argument we have for all \(k\leq\lfloor NT\rfloor\) \[\left|C^{i}_{k}-C^{i}_{0}\right|\leq\sum_{j=0}^{k-1}\left|C^{i}_{j+1}-C^{i}_{j}\right|\leq\frac{CNT}{N^{2-\beta-\gamma}}=\frac{CT}{N^{1-\beta-\gamma}},\] \[\left|C_{k}^{i}\right|\leq\left|C_{0}^{i}\right|+\left|C_{k}^{i}-C_{0}^{i}\right|\leq 1+\frac{CT}{N^{1-\beta-\gamma}}\leq C_{T}.\] We may further conclude that \[\left\|W_{k+1}^{i}-W_{k}^{i}\right\| \leq\frac{\alpha}{N^{2-\beta}}\left|C_{k}^{i}\right|\left[\left|\psi^{N}(\hat{Y}_{k}^{N})\right|+\left|Y_{k}\right|\right]\left|\Delta\hat{S}_{k+1}^{i,N}\right|\leq\frac{\alpha C_{\sigma}C_{T}}{N^{2-\beta}}\left[2N^{\gamma}+C_{y}\right]\leq\frac{C_{T}}{N^{2-\beta-\gamma}},\] \[\left|B_{k+1}^{i}-B_{k}^{i}\right| \leq\frac{\alpha}{N^{3-\beta}}\left(\left|\psi^{N}(\hat{Y}_{k}^{N})\right|+\left|Y_{k}\right|\right)\sum_{\ell=1}^{N}\left|C_{k}^{\ell}\right|\left|\Delta\hat{S}_{k+1}^{\ell,N}\right|\leq\frac{\alpha C_{\sigma}C_{T}}{N^{2-\beta}}\left[2N^{\gamma}+C_{y}\right]\leq\frac{C_{T}}{N^{2-\beta-\gamma}},\] so by following the telescoping sum argument as above we conclude, as desired, \[\left|C_{k}^{i}-C_{0}^{i}\right|+\left\|W_{k}^{i}-W_{0}^{i}\right\|+\left|B_{k}^{i}-B_{0}^{i}\right|\leq\frac{C_{T}}{N^{1-\beta-\gamma}}.\]

Proof of Remark 2.9. The proof relies on the observation that \[\left|C_{k+1}^{i}\right|-\left|C_{k}^{i}\right|\leq\left|C_{k+1}^{i}-C_{k}^{i}\right|\leq\frac{\alpha}{N^{2-\beta}}\left|\hat{S}_{k+1}^{i,N}\right|\left(\left|\hat{Y}_{k}^{N}\right|+\left|Y_{k}\right|\right)\leq\frac{\alpha C_{\sigma}C_{y}}{N^{2-\beta}}+\frac{\alpha C_{\sigma}^{2}}{N^{2}}\sum_{i=1}^{N}|C_{k}^{i}|, \tag{6.4}\] so by letting \(\bar{C}_{k}:=\frac{\sum_{i=1}^{N}\left|C_{k}^{i}\right|}{N}\), one has \[\bar{C}_{k+1}-\bar{C}_{k}\leq\frac{\alpha C_{\sigma}C_{y}}{N^{2-\beta}}+\frac{\alpha C_{\sigma}^{2}}{N}\bar{C}_{k}\implies\bar{C}_{k+1}\leq\left(1+\frac{\alpha C_{\sigma}^{2}}{N}\right)\bar{C}_{k}+\frac{\alpha C_{\sigma}C_{y}}{N^{2-\beta}},\] and by Lemma A.1, for all \(k\leq\left\lfloor NT\right\rfloor\), one has \[\bar{C}_{k}\leq\left(1+\frac{\alpha C_{\sigma}^{2}}{N}\right)^{\left\lfloor TN\right\rfloor}\left(1+\frac{\alpha C_{\sigma}C_{y}/N^{2-\beta}}{\alpha C_{\sigma}^{2}/N}\right)\leq C\exp(\alpha C_{\sigma}^{2}T)=:C_{T}.\] One can then make use of equation (6.4) to conclude that \[\left|C_{k+1}^{i}-C_{k}^{i}\right|\leq\frac{\alpha C_{\sigma}C_{y}}{N^{2-\beta}}+\frac{\alpha C_{\sigma}\bar{C}_{k}}{N}\leq\frac{C_{T}}{N},\] and that \(\left|C_{k}^{i}\right|\leq NTC_{T}/N=C_{T}\) is bounded whenever \(k\leq\left\lfloor NT\right\rfloor\) by a telescoping sum argument.
This enables us to show further that \[\left\|W_{k+1}^{i}-W_{k}^{i}\right\| \leq\frac{\alpha}{N^{2-\beta}}\left|C_{k}^{i}\right|\left[\left|\hat{Y}_{k}^{N}\right|+\left|Y_{k}\right|\right]\left|\Delta\hat{S}_{k+1}^{i,N}\right|\leq\frac{\alpha C_{\sigma}\left|C_{k}^{i}\right|}{N^{2-\beta}}\left[\left|Y_{k}\right|+\frac{1}{N^{\beta}}\sum_{\bullet=1}^{N}\left|C_{k}^{\bullet}\right|\right]\leq\frac{C_{T}}{N^{2-\beta}}\left[\left|C_{y}\right|+C_{T}N^{1-\beta}\right]\leq\frac{C_{T}}{N},\] \[\left|B_{k+1}^{i}-B_{k}^{i}\right| \leq\frac{\alpha}{N^{3-\beta}}\left(\left|\hat{Y}_{k}^{N}\right|+\left|Y_{k}\right|\right)\sum_{\ell=1}^{N}\left|C_{k}^{\ell}\right|\left|\Delta\hat{S}_{k+1}^{\ell,N}\right|\leq\frac{\alpha C_{\sigma}}{N^{3-\beta}}\left(C_{y}+\frac{1}{N^{\beta}}\sum_{\bullet=1}^{N}\left|C_{k}^{\bullet}\right|\left|\hat{S}_{k+1}^{\bullet,N}\right|\right)\left(\sum_{\ell=1}^{N}\left|C_{k}^{\ell}\right|\right)\leq\frac{\alpha C_{\sigma}}{N^{3-\beta}}\left(C_{y}+N^{1-\beta}C_{\sigma}C_{T}\right)NC_{T}\leq\frac{C_{T}}{N}.\]

### Proof of Lemma 3.2

\[\left\|F(x,z,y,h,\epsilon,\eta)-F(\tilde{x},\tilde{z},\tilde{y},\tilde{h},\epsilon,\eta)\right\|_{\mathcal{X}}^{2}=|g(x,z)-g(\tilde{x},\tilde{z})|^{2}+|f(x,z)-f(\tilde{x},\tilde{z})|^{2}+\left\|[\varsigma(x,z,y,h)](w)-[\varsigma(\tilde{x},\tilde{z},\tilde{y},\tilde{h})](w)\right\|_{H^{1}(\lambda)}^{2}\leq L^{2}\left(|x-\tilde{x}|^{2}+|z-\tilde{z}|^{2}\right)+\left\|[\varsigma(x,z,y,h)](w)-[\varsigma(\tilde{x},\tilde{z},\tilde{y},\tilde{h})](w)\right\|_{H^{1}(\lambda)}^{2}.\] Note \[\left|[\varsigma(x,z,y,h)](w)-[\varsigma(\tilde{x},\tilde{z},\tilde{y},\tilde{h})](w)\right| =\left|\sigma(w^{\top}x+\langle b^{\prime}h(w^{\prime}),\lambda\rangle)-\sigma(w^{\top}\tilde{x}+\langle b^{\prime}\tilde{h}(w^{\prime}),\lambda\rangle)\right|\leq C_{\sigma}\left|w^{\top}(x-\tilde{x})+\langle b^{\prime}(h(w^{\prime})-\tilde{h}(w^{\prime})),\lambda\rangle\right|\leq C_{\sigma}\left(|w|\,|x-\tilde{x}|+\int_{\mathbb{R}^{d}}\left|h(w^{\prime})-\tilde{h}(w^{\prime})\right|\,\lambda(dw^{\prime})\right)\leq C_{\sigma}\left(|w|\,|x-\tilde{x}|+\|h-\tilde{h}\|_{H^{1}(\lambda)}\right),\] \[\implies\left|\sigma(w^{\top}x+\langle b^{\prime}h(w^{\prime}),\lambda\rangle)-\sigma(w^{\top}\tilde{x}+\langle b^{\prime}\tilde{h}(w^{\prime}),\lambda\rangle)\right|^{2}\leq 2C_{\sigma}^{2}\left(|w|^{2}\,|x-\tilde{x}|^{2}+\|h-\tilde{h}\|_{H^{1}(\lambda)}^{2}\right).\] Moreover, \[\left|\nabla[\varsigma(x,z,y,h)](w)-[\nabla\varsigma(\tilde{x},\tilde{z},\tilde{y},\tilde{h})](w)\right| =\left|\sigma^{\prime}(w^{\top}x+\langle b^{\prime}h(w^{\prime}),\lambda\rangle)x-\sigma^{\prime}(w^{\top}\tilde{x}+\langle b^{\prime}\tilde{h}(w^{\prime}),\lambda\rangle)\tilde{x}\right|\] \[\leq\left|\sigma^{\prime}(w^{\top}x+\langle b^{\prime}h(w^{\prime}),\lambda\rangle)-\sigma^{\prime}(w^{\top}\tilde{x}+\langle b^{\prime}\tilde{h}(w^{\prime}),\lambda\rangle)\right|\left|x\right|+\left|\sigma^{\prime}(w^{\top}\tilde{x}+\langle b^{\prime}\tilde{h}(w^{\prime}),\lambda\rangle)\right|\left|x-\tilde{x}\right|\] \[\leq C_{\sigma}\left|w^{\top}(x-\tilde{x})+\langle b^{\prime}(h(w^{\prime})-\tilde{h}(w^{\prime})),\lambda\rangle\right|+C_{\sigma}\left|x-\tilde{x}\right|\leq C_{\sigma}\left(|w|\,|x-\tilde{x}|+\|h-\tilde{h}\|_{H^{1}(\lambda)}\right)+C_{\sigma}\left|x-\tilde{x}\right|,\] \[\implies \left|\nabla\sigma(w^{\top}x+\langle b^{\prime}h(w^{\prime}),\lambda\rangle)-\nabla\sigma(w^{\top}\tilde{x}+\langle b^{\prime}\tilde{h}(w^{\prime}),\lambda\rangle)\right|^{2}\leq 3C_{\sigma}^{2}\left((1+|w|^{2})|x-\tilde{x}|^{2}+\|h-\tilde{h}\|^{2}\right),\] so, using Assumptions 2.1 and 2.5 we have the bound \[\left\|[\varsigma(x,z,y,h)]-[\varsigma(\tilde{x},\tilde{z},\tilde{y},\tilde{h})]\right\|_{H^{1}(\lambda)}^{2} =\left\|\sigma(w^{\top}x+\langle b^{\prime}h(w^{\prime}),\lambda\rangle)-\sigma(w^{\top}\tilde{x}+\langle b^{\prime}\tilde{h}(w^{\prime}),\lambda\rangle)\right\|_{H^{1}(\lambda)}^{2}\leq C_{\sigma}^{2}|x-\tilde{x}|^{2}\int_{\mathbb{R}^{d}}(5|w|^{2}+3)\,\lambda(dw)+5C_{\sigma}^{2}\left\|h-\tilde{h}\right\|_{H^{1}(\lambda)}^{2}\leq 8C_{\sigma}^{2}\left|x-\tilde{x}\right|^{2}+5C_{\sigma}^{2}\left\|h-\tilde{h}\right\|_{H^{1}(\lambda)}^{2}, \tag{6.5}\] and \[\left\|F(x,z,y,h,\epsilon,\eta)-F(\tilde{x},\tilde{z},\tilde{y},\tilde{h},\epsilon,\eta)\right\|_{\mathcal{X}}^{2}\leq(L^{2}+8C_{\sigma}^{2})\left(|x-\tilde{x}|^{2}+|z-\tilde{z}|^{2}+|y-\tilde{y}|^{2}+\left\|h-\tilde{h}\right\|^{2}\right).\] By Assumption 2.3 we have that the Lipschitz constant is bounded by \[q_{0}^{2}:=L^{2}+8C_{\sigma}^{2}<1,\] and that \(F(\cdot,\epsilon,\eta)\) is a continuous contraction for any realisations of \(\epsilon\) and \(\eta\).

### Analysis of the auxiliary processes (proof of Proposition 3.8)

Preliminary bounds for the auxiliary processes. Let us begin by establishing some preliminary bounds for \(h_{k}(w)\), \(h_{k}^{N}(w)\) and \(v_{k}^{N}(w)\).

**Lemma 6.1**.: _All of \(h_{k}(w)\), \(h_{k}^{N}(w)\) and \(v_{k}^{N}(w)\) are elements of \(\mathcal{H}\). As a result, the following holds:_ \[\sup_{w}|h_{k}(w)|\vee\sup_{w}|h_{k}^{N}(w)|\vee\sup_{w}|v_{k}^{N}(w)| \leq 1, \tag{6.6}\] \[\sup_{w}|\nabla h_{k}(w)|\vee\sup_{w}|\nabla h_{k}^{N}(w)|\vee\sup_{w}|\nabla v_{k}^{N}(w)| \leq C_{\sigma}<1, \tag{6.7}\] \[\max_{i,j}\sup_{w}\left[|\partial_{ij}h_{k}(w)|\vee|\partial_{ij}h_{k}^{N}(w)|\vee|\partial_{ij}v_{k}^{N}(w)|\right] \leq C_{\sigma}<1.\]

Proof.: To see this, we record the following recursive formulae for the gradients of the above random sequences of functions: \[\nabla v_{k+1}^{N}(w) =\sigma^{\prime}\left(w^{\top}X_{k}+\frac{1}{N}\sum_{i=1}^{N}B_{k}^{i}v_{k}^{N}(W_{k-1}^{i})\right)X_{k},\quad\nabla v_{0}^{N}(w)\equiv 0,\] \[\nabla h_{k+1}^{N}(w) =\sigma^{\prime}(w^{\top}X_{k}+\langle b^{\prime}h_{k}^{N}(w^{\prime}),\lambda^{N}\rangle)X_{k},\quad\nabla h_{0}^{N}(w)\equiv 0,\] \[\nabla h_{k+1}(w) =\sigma^{\prime}(w^{\top}X_{k}+\langle b^{\prime}h_{k}(w^{\prime}),\lambda\rangle)X_{k},\quad\nabla h_{0}(w)\equiv 0.\] We further note that \[\left|\frac{1}{N}\sum_{i=1}^{N}B_{k}^{i}v_{k}^{N}(W_{k-1}^{i})\right|\leq\frac{1}{N}\sum_{i=1}^{N}\left|B_{k}^{i}\right|\left|v_{k}^{N}(W_{k-1}^{i})\right|\leq C_{T},\] where \(C_{T}\) depends on the size of the increments as derived in Lemma 2.8.
Similarly, we have \[\left|\langle b^{\prime}h_{k}^{N}(w^{\prime}),\lambda^{N}\rangle\right| =\left|\frac{1}{N}\sum_{i=1}^{N}B^{i}h_{k}^{N}(W^{i})\right|\leq\frac{1}{N}\sum_{i=1}^{N}\left|B^{i}\right|\left|h_{k}^{N}(W^{i})\right|\leq 1, \tag{6.8}\] \[\left|\langle b^{\prime}h_{k}(w^{\prime}),\lambda\rangle\right| =\left|\int_{\mathbb{R}\times\mathbb{R}^{d}}b^{\prime}h_{k}(w^{\prime})\,\lambda(db^{\prime},dw^{\prime})\right|\leq\int_{\mathbb{R}\times\mathbb{R}^{d}}\left|b^{\prime}\right|\left|h_{k}(w^{\prime})\right|\,\lambda(db^{\prime},dw^{\prime})\leq 1.\] So we know that for finite \(k,N\), all of \(h_{k}(w)\), \(h_{k}^{N}(w)\) and \(v_{k}^{N}(w)\) are elements of \(H^{1}(\lambda)\).

Bounds for the differences between the auxiliary processes. Recall that \[\Gamma_{k}^{N}(w)=v_{k}^{N}(w)-h_{k}(w)=\Gamma_{k}^{N,1}(w)+\Gamma_{k}^{N,2}(w),\] where \[\Gamma_{k}^{N,1}(w)=h_{k}^{N}(w)-h_{k}(w),\qquad\Gamma_{k}^{N,2}(w)=v_{k}^{N}(w)-h_{k}^{N}(w). \tag{6.9}\] Since the activation function is regular, we know that \[\left|\Gamma_{k}^{N,1}(w)\right|^{2}+\left|\nabla\Gamma_{k}^{N,1}(w)\right|^{2} \leq\left|\sigma(w^{\top}X_{k}+\left\langle b^{\prime}h_{k}^{N},\lambda^{N}\right\rangle)-\sigma(w^{\top}X_{k}+\left\langle b^{\prime}h_{k},\lambda\right\rangle)\right|^{2}+\left|\sigma^{\prime}(w^{\top}X_{k}+\left\langle b^{\prime}h_{k}^{N},\lambda^{N}\right\rangle)-\sigma^{\prime}(w^{\top}X_{k}+\left\langle b^{\prime}h_{k},\lambda\right\rangle)\right|^{2}|X_{k}|^{2}\] \[\leq(1+|X_{k}|^{2})C_{\sigma}^{2}\left(\left\langle b^{\prime}h_{k}^{N}(w^{\prime}),\lambda^{N}\right\rangle-\left\langle b^{\prime}h_{k}(w^{\prime}),\lambda\right\rangle\right)^{2}\leq 2C_{\sigma}^{2}\left(E_{k}^{N,1}\right)^{2},\] where \[E_{k}^{N,1}:=\left\langle b^{\prime}h_{k}^{N}(w^{\prime}),\lambda^{N}\right\rangle-\left\langle b^{\prime}h_{k}(w^{\prime}),\lambda\right\rangle.\] Similarly, \[\left|\Gamma_{k}^{N,2}(w)\right|^{2}+\left|\nabla\Gamma_{k}^{N,2}(w)\right|^{2}\leq 2C_{\sigma}^{2}\left(E_{k}^{N,2}\right)^{2},\] where for \(k\geq 1\) we define \[E_{k}^{N,2}=\frac{1}{N}\sum_{i=1}^{N}B_{k}^{i}v_{k}^{N}(W_{k-1}^{i})-\left\langle b^{\prime}h_{k}^{N}(w^{\prime}),\lambda^{N}\right\rangle,\] and \(E_{0}^{N,2}=0\) (see Footnote 2). Finally, we have that

Footnote 2: We note that when \(k=0\) then \(v_{k}^{N}(w)=h_{k}^{N}(w)\equiv 0\), so the value of \(E_{0}^{N,2}=0\) regardless of what value we set for \(W_{-1}^{i}\).

\[\left|\Gamma_{k}^{N}(w)\right|^{2}+\left|\nabla\Gamma_{k}^{N}(w)\right|^{2}\leq 2C_{\sigma}^{2}\left(E_{k}^{N,1}+E_{k}^{N,2}\right)^{2}\leq 4C_{\sigma}^{2}\left(\left(E_{k}^{N,1}\right)^{2}+\left(E_{k}^{N,2}\right)^{2}\right).\] This yields the following lemma:

**Lemma 6.2**.: _For all \(k\geq 0\),_ \[\mathbb{E}\left\|\Gamma_{k}^{N,1}(w)\right\|_{H^{1}(\lambda)}^{2}\vee\max_{i}\mathbb{E}\left[\Gamma_{k}^{N,1}(W^{i})\right]^{2} \leq 2C_{\sigma}^{2}\mathbb{E}\left[E_{k}^{N,1}\right]^{2} \tag{6.10}\] \[\mathbb{E}\left\|\Gamma_{k}^{N,2}(w)\right\|_{H^{1}(\lambda)}^{2}\vee\max_{i}\mathbb{E}\left[\Gamma_{k}^{N,2}(W^{i})\right]^{2} \leq 2C_{\sigma}^{2}\mathbb{E}\left[E_{k}^{N,2}\right]^{2} \tag{6.11}\] \[\mathbb{E}\left\|\Gamma_{k}^{N}(w)\right\|_{H^{1}(\lambda)}^{2}\vee\max_{i}\mathbb{E}\left[\Gamma_{k}^{N}(W^{i})\right]^{2} \leq 4C_{\sigma}^{2}\left(\mathbb{E}\left[E_{k}^{N,1}\right]^{2}+\mathbb{E}\left[E_{k}^{N,2}\right]^{2}\right) \tag{6.12}\] _where \(a\lor b=\max(a,b)\)._

Proof.: The proofs of (6.10)-(6.12) are similar, so we will only prove (6.10) here.
To that end, we note that the bound \(E_{k}^{N,1}\) is uniform in \(w\), so for all \(i\), \[\mathbb{E}\left[\Gamma_{k}^{1,N}(W_{0}^{i})\right]^{2}\leq 2C_{\sigma}^{2} \mathbb{E}\left[E_{k}^{N,1}\right]^{2}.\] Moreover, \[\mathbb{E}\left\|\Gamma_{k}^{N,1}(w)\right\|_{H^{1}(\lambda)}^{2} =\mathbb{E}\left[\int_{\mathbb{R}^{d}}\left[\left(\Gamma_{k}^{N, 1}(w)\right)^{2}+\left(\nabla\Gamma_{k}^{N,1}(w)\right)^{2}\right]\,\lambda( dw)\right]\leq 2C_{\sigma}^{2}\mathbb{E}\left[\int_{\mathbb{R}^{d}}\left(E_{k} ^{N,1}\right)^{2}\,\lambda(dw)\right]\] \[=2C_{\sigma}^{2}\mathbb{E}\left[E_{k}^{N,1}\right]^{2}.\] This completes the proof. It remains for us to control \(\mathbb{E}\left[E_{k}^{N,1}\right]^{2}\) and \(\mathbb{E}\left[E_{k}^{N,2}\right]^{2}\). **Lemma 6.3**.: _For all \(k\geq 0\),_ \[\mathbb{E}\left[E_{k}^{N,1}\right]^{2}\leq\frac{C}{N},\] _for an unimportant constant \(C<\infty\)._ Proof.: This is trivial for \(k=0\) since \(h_{0}^{N}(w)=h_{0}(w)\equiv 0\), so \(E_{0}^{N,1}=0\). For general \(k\), we note by Young's inequality that \[\left(E_{k}^{N,1}\right)^{2} =\left(\langle b^{\prime}(h_{k}^{N}(w^{\prime})-h_{k}(w^{\prime}) ),\lambda\rangle+\langle b^{\prime}h_{k}^{N}(w^{\prime}),\lambda^{N}-\lambda \rangle\right)^{2}\] \[\leq 2\left(\langle b^{\prime}(h_{k}^{N}(w^{\prime})-h_{k}(w^{ \prime})),\lambda\rangle\right)^{2}+2\left(\langle b^{\prime}h_{k}^{N}(w^{ \prime}),\lambda^{N}-\lambda\rangle\right)^{2}.\] Now note that \[\mathbb{E}\left(\langle b^{\prime}h_{k}^{N}(w^{\prime}),\lambda^ {N}-\lambda\rangle^{2}\right) =\mathbb{E}\left[\mathbb{E}\left[\langle b^{\prime}h_{k}^{N}(w^{ \prime}),\lambda^{N}-\lambda\rangle^{2}\,\mid\,h_{k}^{N}\right]\right]\] \[=\mathbb{E}\left[\mathbb{E}\left[\left(\frac{1}{N}\sum_{i=1}^{N} \left(B^{i}h_{k}^{N}(W^{i})-\int_{\mathbb{R}^{1+d}}b^{\prime}h_{k}^{N}(w^{ \prime})\,\lambda(db^{\prime}\,dw^{\prime})\right)\right)^{2}\,\mid\,h_{k}^{N }\right]\right].\] Since \(B^{i}h_{k}^{N}(W^{i})\,\mid\,h_{k}^{N}\) is conditionally mutually iid for \(i=1,\cdots,N\), \[\mathbb{E}\left(\langle b^{\prime}h_{k}^{N}(w^{\prime}),\lambda^ {N}-\lambda\rangle\right)^{2} =\frac{1}{N}\mathbb{E}\left[\mathbb{E}\left[\left(B^{1}h_{k}^{N} (W^{1})-\int_{\mathbb{R}^{1+d}}b^{\prime}h_{k}^{N}(w^{\prime})\,\lambda(db^{ \prime}\,dw^{\prime})\right)^{2}\,\mid\,h_{k}^{N}\right]\right]\] \[=\frac{1}{N}\mathbb{E}\left[\mathbb{E}\left[\left(B^{1}h_{k}^{N} (W^{1})\right)^{2}\,\mid\,h_{k}^{N}\right]-\left(\int_{\mathbb{R}^{1+d}}b^{ \prime}h_{k}^{N}(w^{\prime})\,\lambda(db^{\prime}\,dw^{\prime})\right)^{2}\right]\] \[\leq\frac{1}{N}\mathbb{E}\left[\mathbb{E}\left[\left(B^{1}h_{k}^{ N}(W^{1})\right)^{2}\,\mid\,h_{k}^{N}\right]\right]+\frac{1}{N}\mathbb{E}\left(\int_{ \mathbb{R}^{1+d}}b^{\prime}h_{k}^{N}(w^{\prime})\,\lambda(db^{\prime}\,dw^{ \prime})\right)^{2}\] \[\leq\frac{1}{N}\mathbb{E}\left[\left(B^{1}h_{k}^{N}(W^{1})\right) ^{2}\right]+\frac{1}{N}\mathbb{E}\left(\int_{\mathbb{R}^{1+d}}b^{\prime}h_{k} ^{N}(w^{\prime})\,\lambda(db^{\prime}\,dw^{\prime})\right)^{2}\] \[\leq\frac{C}{N},\] for an unimportant constant \(C<\infty\). 
In addition, we have \[\mathbb{E}\left(\langle b^{\prime}(h_{k}^{N}(w^{\prime})-h_{k}(w^{\prime})),\lambda\rangle\right)^{2} \leq\int_{\mathbb{R}^{1+d}}(b^{\prime})^{2}\,\lambda(db^{\prime}\,dw^{\prime})\;\mathbb{E}\int_{\mathbb{R}^{1+d}}|h_{k}^{N}(w^{\prime})-h_{k}(w^{\prime})|^{2}\,\lambda(db^{\prime}\,dw^{\prime})\] \[\leq\mathbb{E}\int_{\mathbb{R}^{1+d}}\left(\sigma(w^{\prime\top}X_{k-1}+\langle b^{\prime\prime}h_{k-1}^{N}(w^{\prime\prime}),\lambda^{N}\rangle)-\sigma(w^{\prime\top}X_{k-1}+\langle b^{\prime\prime}h_{k-1}(w^{\prime\prime}),\lambda\rangle)\right)^{2}\lambda(db^{\prime}\,dw^{\prime})\] \[\leq\mathbb{E}\int_{\mathbb{R}^{1+d}}C_{\sigma}^{2}\left(E_{k-1}^{N,1}\right)^{2}\,\lambda(db^{\prime}\,dw^{\prime})=C_{\sigma}^{2}\mathbb{E}\left(E_{k-1}^{N,1}\right)^{2}.\] Therefore, we have \[\mathbb{E}\left(E_{k}^{N,1}\right)^{2}\leq\frac{2C}{N}+2C_{\sigma}^{2}\mathbb{E}\left(E_{k-1}^{N,1}\right)^{2}. \tag{6.13}\] By Lemma A.1, for all \(k\geq 0\) we have \[\mathbb{E}\left(E_{k}^{N,1}\right)^{2}\leq\frac{2C}{N(1-2C_{\sigma}^{2})}=\frac{C}{N},\] noting that \(2C_{\sigma}^{2}<1\). The unimportant constant \(C\) changes from line to line. This completes the proof. 

Similarly,

**Lemma 6.4**.: _For all \(k\leq\lfloor NT\rfloor\),_ \[\mathbb{E}\left(E_{k}^{N,2}\right)^{2}\leq\frac{C_{T}}{N^{2-2\beta-2\gamma}}.\]

Proof.: Again, we have \(v_{0}^{N}(w)=h_{0}^{N}(w)\equiv 0\), so \(E_{0}^{N,2}=0\). For general \(k\), we conclude by Young's inequality: \[\left(E_{k}^{N,2}\right)^{2} =\left(\left\langle b^{\prime}(v_{k}^{N}(w^{\prime})-h_{k}^{N}(w^{\prime})),\lambda^{N}\right\rangle+\frac{1}{N}\sum_{i=1}^{N}\left(B_{k}^{i}v_{k}^{N}(W_{k-1}^{i})-B_{0}^{i}v_{k}^{N}(W_{0}^{i})\right)\right)^{2}\] \[\leq 2\left(\left\langle b^{\prime}(v_{k}^{N}(w^{\prime})-h_{k}^{N}(w^{\prime})),\lambda^{N}\right\rangle\right)^{2}+\frac{2}{N}\sum_{i=1}^{N}\left(B_{k}^{i}v_{k}^{N}(W_{k-1}^{i})-B_{0}^{i}v_{k}^{N}(W_{0}^{i})\right)^{2}.\] We first analyse the second term, and we emphasise that this term vanishes at \(k=0\) as \(v_{0}^{N}\) is defined to be zero. For \(k\geq 1\), we have \[\left|B_{0}^{i}v_{k}^{N}(W_{0}^{i})-B_{k}^{i}v_{k}^{N}(W_{k-1}^{i})\right| =\left|B_{0}^{i}(v_{k}^{N}(W_{0}^{i})-v_{k}^{N}(W_{k-1}^{i}))+v_{k}^{N}(W_{k-1}^{i})(B_{0}^{i}-B_{k}^{i})\right|\] \[\leq\left|B_{0}^{i}\right|\left|v_{k}^{N}(W_{0}^{i})-v_{k}^{N}(W_{k-1}^{i})\right|+\left|v_{k}^{N}(W_{k-1}^{i})\right|\left|B_{0}^{i}-B_{k}^{i}\right|\] \[\leq C_{\sigma}\left|B_{0}^{i}\right|\left|X_{k-1}^{\top}(W_{0}^{i}-W_{k-1}^{i})\right|+\left|B_{0}^{i}-B_{k}^{i}\right|\leq C_{\sigma}\left|W_{0}^{i}-W_{k-1}^{i}\right|+\left|B_{0}^{i}-B_{k}^{i}\right|,\] \[\implies\mathbb{E}\left|B_{0}^{i}v_{k}^{N}(W_{0}^{i})-B_{k}^{i}v_{k}^{N}(W_{k-1}^{i})\right|^{2}\leq 2\,\mathbb{E}\left(C_{\sigma}^{2}\left|W_{0}^{i}-W_{k-1}^{i}\right|^{2}+\left|B_{0}^{i}-B_{k}^{i}\right|^{2}\right)\leq\frac{C_{T}}{N^{2-2\beta-2\gamma}},\] where in the last inequality we used Lemma 2.8.
In addition, for \(k\geq 1\) we have \[\left\langle b^{\prime}(v_{k}^{N}(w^{\prime})-h_{k}^{N}(w^{\prime})),\lambda^{N}\right\rangle \leq\frac{1}{N}\sum_{i=1}^{N}\left|B^{i}\right|\left|v_{k}^{N}(W^{i})-h_{k}^{N}(W^{i})\right|\leq\frac{C_{\sigma}}{N}\sum_{i=1}^{N}\left|\frac{1}{N}\sum_{j=1}^{N}B_{k-1}^{j}v_{k-1}^{N}(W_{k-2}^{j})-\left\langle b^{\prime}h_{k-1}^{N}(w^{\prime}),\lambda^{N}\right\rangle\right|=C_{\sigma}\left|E_{k-1}^{N,2}\right|.\] Therefore, we have \[\mathbb{E}\left(E_{k}^{N,2}\right)^{2}\leq\frac{2C_{T}}{N^{2-2\beta-2\gamma}}+2C_{\sigma}^{2}\mathbb{E}\left(E_{k-1}^{N,2}\right)^{2}. \tag{6.14}\] By Lemma A.1, for all \(k\geq 0\) we have \[\mathbb{E}\left(E_{k}^{N,2}\right)^{2}\leq\frac{2C_{T}}{N^{2-2\beta-2\gamma}(1-2C_{\sigma}^{2})}=\frac{C_{T}}{N^{2-2\beta-2\gamma}},\] noting that \(2C_{\sigma}^{2}<1\). 

Proof of Proposition 3.8.: Lemmas 6.2 and 6.3 imply that for any \(k\geq 0\), there is a constant \(C>0\) such that \[\mathbb{E}\left\|\Gamma_{k}^{N,1}(w)\right\|_{H^{1}(\lambda)}^{2}\vee\max_{i}\mathbb{E}\left[\Gamma_{k}^{N,1}(W^{i})\right]^{2}\leq\frac{C}{N}.\] Similarly, Lemmas 6.2 and 6.4 imply that for all \(k\leq\lfloor NT\rfloor\), there is another constant \(C_{T}>0\), depending on \(T\), such that \[\mathbb{E}\left\|\Gamma_{k}^{N,2}(w)\right\|_{H^{1}(\lambda)}^{2}\vee\max_{i}\mathbb{E}\left[\Gamma_{k}^{N,2}(W^{i})\right]^{2}\leq\frac{C_{T}}{N^{2-2\beta-2\gamma}}.\] This completes the proof.

### Proof of Pre-Limit Evolution (Lemma 4.1)

We expand \(h(W_{k+1}^{i})-h(W_{k}^{i})\) using a Taylor series as follows: \[h(W_{k+1}^{i})-h(W_{k}^{i}) =\nabla h(W_{k}^{i,*})^{\top}(W_{k+1}^{i}-W_{k}^{i})=\nabla h(W_{k}^{i})^{\top}(W_{k+1}^{i}-W_{k}^{i})+(W_{k+1}^{i}-W_{k}^{i})^{\top}\mathsf{Hess}\,h(W_{k}^{i,**})(W_{k+1}^{i}-W_{k}^{i}),\] where \(W_{k}^{i,*},W_{k}^{i,**}\) are points on the line segment connecting \(W_{k}^{i}\) and \(W_{k+1}^{i}\). The remainder terms in the Taylor expansion are small by Lemma 2.8 (specifically by (6.3)): \[\left|(C_{k+1}^{i}-C_{k}^{i})\nabla h(W_{k}^{i,*})^{\top}(W_{k+1}^{i}-W_{k}^{i})\right| \leq\frac{C_{T}}{N^{4-2\beta-2\gamma}}, \tag{6.15}\] \[\left|C_{k}^{i}(W_{k+1}^{i}-W_{k}^{i})^{\top}\mathsf{Hess}\,h(W_{k}^{i,**})(W_{k+1}^{i}-W_{k}^{i})\right| \leq C_{T}(Cd^{2})\left|W_{k+1}^{i}-W_{k}^{i}\right|^{2}\leq\frac{C_{T}}{N^{4-2\beta-2\gamma}},\] noting that the derivatives of \(h\) are bounded.
From this, we obtain the following evolution equation \[\triangle g_{k}^{N}(h) =g_{k+1}^{N}(h)-g_{k}^{N}(h)=\frac{1}{N^{\beta}}\sum_{i=1}^{N}(C_{k+1}^{i}h(W_{k+1}^{i})-C_{k}^{i}h(W_{k}^{i}))\] \[=\frac{1}{N^{\beta}}\sum_{i=1}^{N}[(C_{k+1}^{i}-C_{k}^{i})h(W_{k}^{i})+C_{k}^{i}(h(W_{k+1}^{i})-h(W_{k}^{i}))+(C_{k+1}^{i}-C_{k}^{i})(h(W_{k+1}^{i})-h(W_{k}^{i}))]\] \[=\frac{1}{N^{\beta}}\sum_{i=1}^{N}[(C_{k+1}^{i}-C_{k}^{i})h(W_{k}^{i})+C_{k}^{i}\nabla h(W_{k}^{i})^{\top}(W_{k+1}^{i}-W_{k}^{i})+C_{k}^{i}(W_{k+1}^{i}-W_{k}^{i})^{\top}\mathsf{Hess}\,h(W_{k}^{i,**})(W_{k+1}^{i}-W_{k}^{i})+(C_{k+1}^{i}-C_{k}^{i})\nabla h(W_{k}^{i,*})^{\top}(W_{k+1}^{i}-W_{k}^{i})].\] Notice that, with the specific choice of learning rate \(\alpha^{N}=\frac{\alpha}{N^{2-2\beta}}\) from (2.3), once the training update is taken into account we get \[\frac{1}{N^{\beta}}\sum_{i=1}^{N}[(C_{k+1}^{i}-C_{k}^{i})h(W_{k}^{i})+C_{k}^{i}\nabla h(W_{k}^{i})^{\top}(W_{k+1}^{i}-W_{k}^{i})]=-\frac{\alpha}{N^{2}}(\psi^{N}(\hat{Y}_{k}^{N})-Y_{k})\sum_{i=1}^{N}\left[\hat{S}_{k+1}^{i,N}h(W_{k}^{i})+(C_{k}^{i})^{2}\Delta\hat{S}_{k+1}^{i,N}\nabla h(W_{k}^{i})^{\top}X_{k}\right]=\delta^{(1)}g_{k}^{N}(h).\] Therefore, for all \(k\leq\lfloor NT\rfloor\), \[\left|\triangle g_{k}^{N}(h)-\delta^{(1)}g_{k}^{N}(h)\right| =\left|\frac{1}{N^{\beta}}\sum_{i=1}^{N}\left[C_{k}^{i}(W_{k+1}^{i}-W_{k}^{i})^{\top}\mathsf{Hess}\,h(W_{k}^{i,**})(W_{k+1}^{i}-W_{k}^{i})+(C_{k+1}^{i}-C_{k}^{i})\nabla h(W_{k}^{i,*})^{\top}(W_{k+1}^{i}-W_{k}^{i})\right]\right|\] \[\leq\frac{1}{N^{\beta}}\sum_{i=1}^{N}\left[\left|C_{k}^{i}(W_{k+1}^{i}-W_{k}^{i})^{\top}\mathsf{Hess}\,h(W_{k}^{i,**})(W_{k+1}^{i}-W_{k}^{i})\right|+\left|(C_{k+1}^{i}-C_{k}^{i})\nabla h(W_{k}^{i,*})^{\top}(W_{k+1}^{i}-W_{k}^{i})\right|\right]\leq\frac{N}{N^{\beta}}\times\frac{C_{T}}{N^{4-2\beta-2\gamma}}=\frac{C_{T}}{N^{3-\beta-2\gamma}}.\] It is important to note that \(3-\beta-2\gamma=2+(1-\beta-2\gamma)>2\), so \(\left|\triangle g_{k}^{N}(h)-\delta^{(1)}g_{k}^{N}(h)\right|=o(N^{-2})\) uniformly for \(k\leq\left\lfloor NT\right\rfloor\) and \(h\in\mathcal{H}\).

### Well-posedness of the limit ODE

The affine ODE on the dual space \((H^{1}(\lambda))^{*}\) (which is a Banach space) can be written as \[\frac{d}{dt}g_{t}=\mathcal{A}(g_{t})+b,\quad g_{0}=0,\] where \(\mathcal{A}\) is the linear operator from \((H^{1}(\lambda))^{*}\) to \((H^{1}(\lambda))^{*}\) defined by \[\mathcal{A}:g\in(H^{1}(\lambda))^{*}\mapsto\left[\mathcal{A}(g):h\mapsto-\int_{\mathcal{X}}g(\varsigma(\mathsf{H}))\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\,\mu(d\mathsf{H})\right];\] and \(b\) is the linear functional \[b:h\mapsto\int\mathsf{y}\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\,\mu(d\mathsf{H}).\] Therefore the ODE admits a unique solution if \(\mathcal{A}\) is bounded; a finite-dimensional illustration of this affine structure is sketched below.
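For intuition, the affine ODE can be reproduced in a finite-dimensional caricature: replace \((H^{1}(\lambda))^{*}\) by \(\mathbb{R}^{n}\) and \(\mathcal{A}\) by \(-K\) for a positive semidefinite matrix \(K\), mirroring the sign structure of \(\mathcal{A}\). The sketch below compares the closed-form solution \(g_{t}=\mathcal{A}^{-1}(e^{t\mathcal{A}}-I)b\) with a crude time-stepping solve; all concrete choices (dimension, random kernel, step size) are our illustrative assumptions.

```python
# Finite-dimensional caricature of dg/dt = A g + b, g_0 = 0.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 8
F = rng.normal(size=(n, 2 * n))
A = -(F @ F.T) / (2 * n)                # bounded, negative definite "kernel" operator
b = rng.normal(size=n)

t, steps = 2.0, 20000
g_exact = np.linalg.solve(A, (expm(t * A) - np.eye(n)) @ b)   # A^{-1}(e^{tA} - I) b

g, dt = np.zeros(n), t / steps          # explicit Euler on dg/dt = A g + b
for _ in range(steps):
    g = g + dt * (A @ g + b)

print(np.max(np.abs(g - g_exact)))      # small (O(dt)): the two solutions agree
```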
To verify boundedness, note that for \(\mathsf{H}\in\mathcal{X}\) we have \[\left|\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\right|\leq\int_{\mathbb{R}^{d}}\left[\left|h(w)\right|+\left|\nabla h(w)\right|\right]\,\lambda(dw)\leq\left\|h\right\|_{H^{1}(\lambda)},\] so \[\left|[\mathcal{A}(g)](h)\right| \leq\int_{\mathcal{X}}\left|g(\mathsf{h})\right|\left|\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\right|\mu(d\mathsf{H})\leq\int_{\mathcal{X}}\left\|g\right\|_{(H^{1}(\lambda))^{*}}\left\|\varsigma(\mathsf{H})\right\|_{H^{1}(\lambda)}\left\|h\right\|_{H^{1}(\lambda)}\,\mu(d\mathsf{H})\leq\sqrt{1+C_{\sigma}^{2}}\left\|g\right\|_{(H^{1}(\lambda))^{*}}\left\|h\right\|_{H^{1}(\lambda)}, \tag{6.16}\] thanks to the fact that \(\varsigma(\mathsf{H})\) is an element of \(\mathcal{H}\), so the control (3.5) holds. Therefore, we have \[\left\|\mathcal{A}(g)\right\|_{(H^{1}(\lambda))^{*}}\leq\sqrt{1+C_{\sigma}^{2}}\left\|g\right\|_{(H^{1}(\lambda))^{*}}\implies\left\|\mathcal{A}\right\|\leq\sqrt{1+C_{\sigma}^{2}}.\] Moreover, \(b\) is bounded in the following sense: \[\left|b(h)\right|\leq\int\left|\mathsf{y}\right|\left|\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\right|\mu(d\mathsf{H})\leq C_{y}\left\|h\right\|_{H^{1}(\lambda)}\implies\left\|b\right\|_{(H^{1}(\lambda))^{*}}\leq C_{y}.\] Therefore, the following exponential operator is well defined for any \(t>0\): \[\exp(t\mathcal{A})=\sum_{i=0}^{\infty}\frac{t^{i}\mathcal{A}^{\circ i}}{i!},\quad\mathcal{A}^{\circ i}=\underbrace{\mathcal{A}\circ...\circ\mathcal{A}}_{i\text{ times}},\] and therefore

**Proposition 6.5**.: _The ODE (4.3) admits the following classical solution:_ \[g_{t}=\int_{0}^{t}\exp((t-s)\mathcal{A})b\,ds, \tag{6.17}\] _and when acting on \(h\in H^{1}(\lambda)\) we have_ \[g_{t}(h)=\int_{0}^{t}\exp((t-s)\mathcal{A})b(h)\,ds.\]

In particular, we have the following control over the operator norm \[\|g_{t}\|_{(H^{1}(\lambda))^{*}}\leq T\exp(T\|\mathcal{A}\|)\|b\|_{(H^{1}(\lambda))^{*}}\leq C_{y}Te^{\sqrt{1+C_{\sigma}^{2}}T}=:C_{T} \tag{6.18}\] where \(\|\mathcal{A}\|\) is the operator norm of the operator \(\mathcal{A}\), which is proven to be bounded by (6.16). So for all \(h\) we have \(|g_{t}(h)|\leq C_{T}\left\|h\right\|_{H^{1}(\lambda)}\) for any finite \(T>0\).

Proof.: We follow [26] for our discussion. Firstly, Theorem 1.2-3 of [26] states that \(\mathcal{A}\) induces a unique uniformly continuous semigroup \(\exp(t\mathcal{A})\). Indeed, consider the sequence of operators from \((H^{1}(\lambda))^{*}\) to \((H^{1}(\lambda))^{*}\): \[S^{N}(t;\mathcal{A})=\sum_{i=0}^{N}\frac{t^{i}\mathcal{A}^{\circ i}}{i!}.\] By the triangle inequality we have the operator norm control \(\|S^{N}(t;\mathcal{A})\|\leq\exp(t\|\mathcal{A}\|)<+\infty\), uniformly in \(N\), so the series converges absolutely. Since the space of bounded operators from \((H^{1}(\lambda))^{*}\) to \((H^{1}(\lambda))^{*}\) is a Banach space, the partial sums \((S^{N}(t,\mathcal{A}))_{N\geq 0}\) converge in operator norm to a limit, which we define to be \(\exp(t\mathcal{A})\). Since \(b\in(H^{1}(\lambda))^{*}\) is constant in \(t\), Corollary 2.5 of [26] shows that the ODE (4.3) admits a classical solution, given by the formula (6.17) according to Corollary 2.2 of [26]. 

## 7 Proof of weak convergence

The proof of convergence is broken into three main steps.
Firstly, we show in subsection 7.1 that one can modify the increments \(\delta^{(1)}g_{k}^{N}(h)\) by replacing the current parameters \((C_{k}^{i},W_{k}^{i},B_{k}^{i})\) with the initial parameters \((C^{i},W^{i},B^{i})\) without introducing too much error, thanks to the a-priori estimates provided in Lemma 2.8 and Proposition 3.8. Then we remove the clipping in the evolution in subsection 7.2 by noticing that the initial value \(g_{0}^{N}(h)\) is \(L^{2}\) integrable for all \(h\in\mathcal{H}\). Finally, we establish in subsection 7.3 the weak convergence by analysing the corresponding Poisson equation [27] of the Markov chain \((H_{k})_{k\geq 0}\). The proof of the main result of this paper, Theorem 4.2, then follows in subsection 7.4.

### Replacing the trained memory units

Recall that \(g_{t}^{N}(h)=g_{\lfloor Nt\rfloor}^{N}(h;\theta_{\lfloor Nt\rfloor})\), and that we have now established \[\left|g_{t}^{N}(h)-g_{0}^{N}(h)-\sum_{k=0}^{\lfloor Nt\rfloor-1}\delta^{(1)}g_{k}^{N}(h)\right|\leq\frac{C_{t}}{N^{2-\beta-2\gamma}},\] where \[\delta^{(1)}g_{k}^{N}(h)=-\frac{\alpha}{N^{2}}(\psi^{N}(\hat{Y}_{k}^{N})-Y_{k})\sum_{i=1}^{N}\left[\hat{S}_{k+1}^{i,N}h(W_{k}^{i})+(C_{k}^{i})^{2}\Delta\hat{S}_{k+1}^{i,N}\nabla h(W_{k}^{i})^{\top}X_{k}\right]. \tag{7.1}\] In this section, we show that one can study a simpler increment for the evolution of \(g_{t}^{N}(h)\), in light of the a-priori bounds for the increments of the parameters (Lemma 2.8) and our analysis of the trained hidden memory units in Section 3. We begin by showing that \(\psi^{N}(\hat{Y}_{k}^{N})\) can be replaced by \(\psi^{N}(g_{k}^{N}(h_{k+1}))\). This relies on the following lemma:

**Lemma 7.1**.: _If \(u,v\in\mathcal{H}\), then for all \(k\leq\lfloor TN\rfloor\), there are constants \(C_{T}\) depending on \(T\) such that_ \[\mathbb{E}|g_{k}^{N}(u)-g_{k}^{N}(v)|^{2}\leq C_{T}N^{2\gamma}\left\|u-v\right\|_{H^{1}(\lambda)}^{2}+\frac{C_{T}}{N^{2-2\beta-4\gamma}}.\]

Proof.: We start with \[\left|(g_{k+1}^{N}(u)-g_{k+1}^{N}(v))-(g_{k}^{N}(u)-g_{k}^{N}(v))\right| =\left|(g_{k+1}^{N}(u)-g_{k}^{N}(u))-(g_{k+1}^{N}(v)-g_{k}^{N}(v))\right|=\left|\triangle g_{k}^{N}(u)-\triangle g_{k}^{N}(v)\right|\] \[\leq\left|\delta^{(1)}g_{k}^{N}(u)-\delta^{(1)}g_{k}^{N}(v)\right|+\left|\triangle g_{k}^{N}(u)-\delta^{(1)}g_{k}^{N}(u)\right|+\left|\delta^{(1)}g_{k}^{N}(v)-\triangle g_{k}^{N}(v)\right|,\] so by Young's inequality, \[\mathbb{E}\left|(g_{k+1}^{N}(u)-g_{k+1}^{N}(v))-(g_{k}^{N}(u)-g_{k}^{N}(v))\right|^{2} \leq 3\left[\mathbb{E}\left|\delta^{(1)}g_{k}^{N}(u)-\delta^{(1)}g_{k}^{N}(v)\right|^{2}+\mathbb{E}\left|\triangle g_{k}^{N}(u)-\delta^{(1)}g_{k}^{N}(u)\right|^{2}+\mathbb{E}\left|\delta^{(1)}g_{k}^{N}(v)-\triangle g_{k}^{N}(v)\right|^{2}\right]\] \[\leq 3\mathbb{E}\left|\delta^{(1)}g_{k}^{N}(u)-\delta^{(1)}g_{k}^{N}(v)\right|^{2}+\frac{C_{T}}{N^{6-2\beta-4\gamma}}, \tag{7.2}\] where the constant \(C_{T}\) depends on the constant in the pre-limit equation in Lemma 4.1.
We further note from the definition of \(\delta^{(1)}g_{k}^{N}(\cdot)\) in equation (7.1) that \[\left|\delta^{(1)}g_{k}^{N}(u)-\delta^{(1)}g_{k}^{N}(v)\right|\] \[\leq\frac{\alpha}{N^{2}}\left|\psi^{N}(\hat{Y}_{k}^{N})-Y_{k} \right|\left|\sum_{i=1}^{N}\left[\hat{S}_{k+1}^{i,N}(u(W_{k}^{i})-v(W_{k}^{i}) )+C_{k}^{i}\Delta\hat{S}_{k+1}^{i,N}(\nabla u(W_{k}^{i})-\nabla v(W_{k}^{i}))^ {\top}X_{k}\right]\right|,\] so \[\mathbb{E}\left|\delta^{(1)}g_{k}^{N}(u)-\delta^{(1)}g_{k}^{N}(v )\right|^{2}\] \[\leq\frac{\alpha^{2}}{N^{4}}\mathbb{E}\left[\left|\psi^{N}(\hat {Y}_{k}^{N})-Y_{k}\right|\left|\sum_{i=1}^{N}\left[\hat{S}_{k+1}^{i,N}(u(W_{k} ^{i})-v(W_{k}^{i}))+C_{k}^{i}\Delta\hat{S}_{k+1}^{i,N}(\nabla u(W_{k}^{i})- \nabla v(W_{k}^{i}))^{\top}X_{k}\right]\right|\right]^{2}\] \[\leq\frac{C}{N^{4-2\gamma}}\mathbb{E}\left[\sum_{i=1}^{N}\left[ \hat{S}_{k+1}^{i,N}(u(W_{k}^{i})-v(W_{k}^{i}))+C_{k}^{i}\Delta\hat{S}_{k+1}^{i,N}(\nabla u(W_{k}^{i})-\nabla v(W_{k}^{i}))^{\top}X_{k}\right]\right]^{2}\] \[\leq\frac{C_{T}}{N^{3-2\gamma}}\mathbb{E}\left[\sum_{i=1}^{N} \left[\left(u(W_{k}^{i})-v(W_{k}^{i})\right)^{2}+\left|\nabla u(W_{k}^{i})- \nabla v(W_{k}^{i})\right|^{2}\right]\right]\] \[\leq\frac{C_{T}}{N^{3-2\gamma}}\mathbb{E}\bigg{[}\sum_{i=1}^{N} \left[\left.\left(u(W_{k}^{i})-u(W_{0}^{i})\right)^{2}+\left(v(W_{k}^{i})-v(W_ {0}^{i})\right)^{2}+\left(u(W_{0}^{i})-v(W_{0}^{i})\right)^{2}\right.\right.\] \[\left.\left.+\left|\nabla u(W_{k}^{i})-\nabla u(W_{0}^{i})\right|^ {2}+\left|\nabla v(W_{k}^{i})-\nabla v(W_{0}^{i})\right|^{2}+\left|\nabla u(W_ {0}^{i})-\nabla v(W_{0}^{i})\right|^{2}\right.\right]\right]\] \[\leq\frac{C_{T}}{N^{3-2\gamma}}\mathbb{E}\bigg{[}\sum_{i=1}^{N} \left[\left.\left|W_{k}^{i}-W_{0}^{i}\right|^{2}+\left(u(W_{0}^{i})-v(W_{0}^{i}) \right)^{2}+\left|\nabla u(W_{0}^{i})-\nabla v(W_{0}^{i})\right|^{2}\right. \right]\right]\] \[\leq\frac{C_{T}}{N^{2-2\gamma}}\left[\frac{C_{T}}{N^{2-2\beta-2 \gamma}}+\left\|u-v\right\|_{H^{1}(\lambda)}^{2}\right]=\frac{C_{T}}{N^{2-2 \gamma}}\left\|u-v\right\|_{H^{1}(\lambda)}^{2}+\frac{C_{T}}{N^{4-2\beta-4 \gamma}}\] Combining the above with equation (7.2) yields \[\mathbb{E}\left|(g_{k+1}^{N}(u)-g_{k}^{N}(u))-(g_{k+1}^{N}(v)-g_{k}^{N}(v)) \right|\leq\frac{C_{T}}{N^{2-2\gamma}}\left\|u-v\right\|_{H^{1}(\lambda)}^{2}+ \frac{C_{T}}{N^{4-2\beta-4\gamma}},\] so by summing the telescoping sums: \[\mathbb{E}\left|g_{k}^{N}(u)-g_{k}^{N}(v)\right|^{2} =\mathbb{E}\left|\sum_{j=0}^{k-1}\left(\triangle g_{j}^{N}(u)- \triangle g_{j}^{N}(v)\right)\right|^{2}\leq N\mathbb{E}\left[\sum_{j=0}^{k-1} \left(\triangle g_{j}^{N}(u)-\triangle g_{j}^{N}(v)\right)^{2}\right]\] \[\leq C_{T}N^{2\gamma}\left\|u-v\right\|_{H^{1}(\lambda)}^{2}+ \frac{C_{T}}{N^{2-2\beta-4\gamma}}.\] This leads us to the following lemma. 
**Lemma 7.2**.: _Define_ \[\delta^{(2)}g_{k}^{N}(h) =-\frac{\alpha}{N^{2}}(\psi^{N}(g_{k}^{N}(h_{k+1}))-Y_{k})\sum_{i=1}^{N}\left[\hat{S}_{k+1}^{i,N}h(W_{k}^{i})+(C_{k}^{i})^{2}\Delta\hat{S}_{k+1}^{i,N}\nabla h(W_{k}^{i})^{\top}X_{k}\right]\] \[=-\frac{\alpha}{N^{2}}(\psi^{N}(g_{k}^{N}(\varsigma(X_{k},Z_{k},Y_{k},h_{k})))-Y_{k})\sum_{i=1}^{N}\left[\hat{S}_{k+1}^{i,N}h(W_{k}^{i})+(C_{k}^{i})^{2}\Delta\hat{S}_{k+1}^{i,N}\nabla h(W_{k}^{i})^{\top}X_{k}\right],\] _then for all \(h\in\mathcal{H}\), the following holds for all \(k\leq\left\lfloor NT\right\rfloor-1\)_ \[\mathbb{E}\left|\delta^{(1)}g_{k}^{N}(h)-\delta^{(2)}g_{k}^{N}(h)\right|^{2}\leq\frac{C_{T}}{N^{4-2\beta-4\gamma}}.\]

Proof.: We first note the following bound, uniform in \(i\) for \(k\leq\left\lfloor NT\right\rfloor\): thanks to the boundedness of the test function \(h\), the boundedness of the activation function (see Assumption 2.3 and equations (6.1)-(6.2)) and the boundedness of \(C_{k}^{i}\) (see Assumption 2.5 and Lemma 2.8), one has \[\hat{S}_{k+1}^{i,N}h(W_{k}^{i})+(C_{k}^{i})^{2}\Delta\hat{S}_{k+1}^{i,N}\nabla h(W_{k}^{i})^{\top}X_{k}\leq C+C_{T}C_{\sigma}C=:C_{T},\] where the constant \(C_{T}\) may change from line to line. By the boundedness of the derivative of \(\psi^{N}\), \[\left|\delta^{(1)}g_{k}^{N}(h)-\delta^{(2)}g_{k}^{N}(h)\right|\leq\frac{C_{T}}{N}\left|\psi^{N}(\hat{Y}_{k}^{N})-\psi^{N}(g_{k}^{N}(h_{k+1}))\right|\leq\frac{C_{T}}{N}\left|\hat{Y}_{k}^{N}-g_{k}^{N}(h_{k+1})\right|. \tag{7.3}\] Observing that \(\hat{Y}_{k}^{N}=g_{k}^{N}(v_{k+1}^{N})\), and that we know a priori that \(h_{k}\) and \(v_{k}^{N}\) have outputs bounded by \(1\) and partial derivatives bounded by \(C_{\sigma}\), we may invoke Lemma 7.1 together with the a-priori bound (7.3) and Proposition 3.8 (see also equation (3.18)) to bound the required expectation: \[\mathbb{E}\left|\delta^{(1)}g_{k}^{N}(h)-\delta^{(2)}g_{k}^{N}(h)\right|^{2} \stackrel{(7.3)}{\leq}\frac{C_{T}}{N^{2}}\mathbb{E}\left[g_{k}^{N}(v_{k+1}^{N})-g_{k}^{N}(h_{k+1})\right]^{2}=\frac{C_{T}}{N^{2}}\mathbb{E}\left[\mathbb{E}\left[\left(g_{k}^{N}(v_{k+1}^{N})-g_{k}^{N}(h_{k+1})\right)^{2}\,\middle|\,v_{k+1}^{N},h_{k+1}\right]\right]\] \[\stackrel{\text{Lem.~7.1}}{\leq}\frac{C_{T}}{N^{2}}\mathbb{E}\left[C_{T}N^{2\gamma}\left\|v_{k+1}^{N}-h_{k+1}\right\|_{H^{1}(\lambda)}^{2}+\frac{C_{T}}{N^{2-2\beta-4\gamma}}\right]\stackrel{(3.18)}{\leq}\frac{C_{T}}{N^{4-2\beta-4\gamma}}. \tag{7.4}\] The result follows by recalling that \(h_{k+1}=\varsigma(X_{k},Z_{k},Y_{k},h_{k})\).
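The argument above used only generic properties of the clipping function \(\psi^{N}\): it is the identity on \([-N^{\gamma},N^{\gamma}]\), satisfies \(|\psi^{N}(x)|\leq|x|\wedge 2N^{\gamma}\), and has a derivative bounded by one. One concrete function with these properties, for illustration only (the paper does not fix a specific \(\psi^{N}\)):

```python
# A smooth clip with the properties used for psi^N: identity on [-M, M],
# |psi(x)| <= min(|x|, 2M), and derivative in (0, 1], continuous at x = M.
import numpy as np

def psi(x, M):
    """psi(x) = x for |x| <= M, saturating towards +-2M beyond."""
    core = np.clip(x, -M, M)
    excess = x - core                  # zero on the core region
    return core + M * np.tanh(excess / M)

M = 10.0                               # stands for N^gamma
xs = np.linspace(-50, 50, 9)
print(psi(xs, M))                                  # equals xs on |xs| <= 10
print(np.all(np.abs(psi(xs, M)) <= np.abs(xs)))    # True: |psi(x)| <= |x|
print(np.all(np.abs(psi(xs, M)) <= 2 * M))         # True: |psi(x)| <= 2M
```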
We could then sum up all the differences between the increments to conclude that \[\mathbb{E}\left|g_{t}^{N}(h)-g_{0}^{N}(h)-\sum_{k=0}^{\lfloor Nt \rfloor-1}\delta^{(2)}g_{k}^{N}(h)\right|^{2}\] \[\leq 2\mathbb{E}\left[\left(g_{t}^{N}(h)-g_{0}^{N}(h)-\sum_{k=0}^{ \lfloor Nt\rfloor-1}\delta^{(1)}g_{k}^{N}(h)\right)^{2}+\left(\sum_{k=0}^{ \lfloor Nt\rfloor-1}\left(\delta^{(1)}g_{k}^{N}(h)-\delta^{(2)}g_{k}^{N}(h) \right)\right)^{2}\right]\] \[\leq\frac{C_{T}}{N^{4-2\beta-4\gamma}}+2N\sum_{k=0}^{\lfloor Nt \rfloor-1}\mathbb{E}\left[\delta^{(1)}g_{k}^{N}(h)-\delta^{(2)}g_{k}^{N}(h) \right]^{2}\leq\frac{C_{T}}{N^{2-2\beta-4\gamma}}.\] To understand the next step, we revisit the formula for \(\delta^{(2)}g_{k}^{N}(h)\): \[\delta^{(2)}g_{k}^{N}(h) =-\frac{\alpha}{N^{2}}(\psi^{N}(g_{k}^{N}(h_{k+1}))-Y_{k})\sum_{i =1}^{N}\left[\hat{S}_{k+1}^{i,N}h(W_{k}^{i})+(C_{k}^{i})^{2}\Delta\hat{S}_{k+1 }^{i}\nabla h(W_{k}^{i})^{\top}X_{k}\right]\] \[=-\frac{\alpha}{N^{2}}(\psi^{N}(g_{k}^{N}(h_{k+1}))-Y_{k})\sum_{i =1}^{N}\left[v_{k+1}^{N}(W_{k}^{i})h(W_{k}^{i})+(C_{k}^{i})^{2}\sigma^{\prime} \left((W_{k}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}v_{k}^{N}(W_{k -1}^{i})\right)\nabla h(W_{k}^{i})^{\top}X_{k}\right]\] \[=-\frac{\alpha}{N}(\psi^{N}(g_{k}^{N}(h_{k+1}))-Y_{k})\left\langle v _{k+1}^{N}(w)h(w)+c^{2}\sigma^{\prime}\left(w^{\top}X_{k}+\frac{1}{N}\sum_{j=1 }^{N}B_{k}^{j}v_{k}^{N}(W_{k-1}^{i})\right)\nabla h(w)^{\top}X_{k},\,\lambda_{ k}^{N}\right\rangle\] In light of the results proven regarding the increments of parameters (Lemma 2.8) and the trained memory units (Section 3), we would expect that one could study the new increments with the average of empirical distribution of updated parameters \(\lambda_{k}^{N}\) replaced by an average with respect to the initial parameter distribution \(\lambda\). 
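This replacement involves two separate error sources, which the next lemma controls: the drift of the trained parameters away from their initialisation (\(\lambda_{k}^{N}\) versus \(\lambda^{N}\)) and the Monte Carlo sampling error (\(\lambda^{N}\) versus \(\lambda\)). The toy computation below separates the two for a fixed smooth test function; the drift scale \(N^{-(1-\beta-\gamma)}\) is put in by hand following Lemma 2.8, and all other choices (the test function, the distributions) are illustrative assumptions.

```python
# Two error sources in replacing lambda_k^N by lambda, on a toy test function.
import numpy as np

rng = np.random.default_rng(3)
beta, gamma = 0.75, 0.05
f = lambda c, w: c**2 * np.tanh(w.sum(axis=-1))

# reference value <f, lambda> via a very large sample
cL, wL = rng.uniform(-1, 1, 10**6), rng.normal(size=(10**6, 3))
ref = f(cL, wL).mean()

for N in [100, 1000, 10000]:
    c0, w0 = rng.uniform(-1, 1, N), rng.normal(size=(N, 3))
    drift = N ** (-(1 - beta - gamma))          # increment scale from Lemma 2.8
    ck = c0 + drift * rng.normal(size=N)        # stand-in for trained parameters
    wk = w0 + drift * rng.normal(size=(N, 3))
    mc_err = abs(f(c0, w0).mean() - ref)                   # <f, lambda^N - lambda>
    drift_err = abs(f(ck, wk).mean() - f(c0, w0).mean())   # <f, lambda_k^N - lambda^N>
    print(N, round(mc_err, 5), round(drift_err, 5))        # both shrink with N
```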
This is formalised by the following technical lemma:

**Lemma 7.3**.: _Define_ \[\delta^{(3)}g_{k}^{N}(h) =-\frac{\alpha}{N}(\psi^{N}(g_{k}^{N}(h_{k+1}))-Y_{k})\left\langle h_{k+1}(w)h(w)+c^{2}\sigma^{\prime}\left(w^{\top}X_{k}+\left\langle b^{\prime}h_{k}(w^{\prime}),\lambda\right\rangle\right)\nabla h(w)^{\top}X_{k},\,\lambda\right\rangle\] \[=-\frac{\alpha}{N}(\psi^{N}(g_{k}^{N}(\varsigma(X_{k},Z_{k},Y_{k},h_{k})))-Y_{k})\left\langle h_{k+1}(w)h(w)+c^{2}\sigma^{\prime}\left(w^{\top}X_{k}+\left\langle b^{\prime}h_{k}(w^{\prime}),\lambda\right\rangle\right)\nabla h(w)^{\top}X_{k},\,\lambda\right\rangle,\] _then for all \(h\in\mathcal{H}\), the following holds for all \(k\leq\lfloor NT\rfloor-1\)_ \[\mathbb{E}\left|\delta^{(2)}g_{k}^{N}(h)-\delta^{(3)}g_{k}^{N}(h)\right|^{2}\leq\frac{C_{T}}{N^{4-2\beta-4\gamma}}.\]

Proof.: We break the fluctuation term into different components \[\delta^{(2)}g_{k}^{N}(h)-\delta^{(3)}g_{k}^{N}(h)=-\frac{\alpha}{N}(\psi^{N}(g_{k}^{N}(h_{k+1}))-Y_{k})\sum_{\bullet=1}^{6}M_{k}^{\bullet,N},\] where \[M_{k}^{1,N} =\left\langle v_{k+1}^{N}(w)h(w),\,\lambda_{k}^{N}-\lambda^{N}\right\rangle\] \[M_{k}^{2,N} =\left\langle c^{2}\sigma^{\prime}\left(w^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}v_{k}^{N}(W_{k-1}^{j})\right)\nabla h(w)^{\top}X_{k},\,\lambda_{k}^{N}-\lambda^{N}\right\rangle\] \[M_{k}^{3,N} =\left\langle(v_{k+1}^{N}(w)-h_{k+1}(w))h(w),\,\lambda^{N}\right\rangle\] \[M_{k}^{4,N} =\left\langle c^{2}\left(\sigma^{\prime}\left(w^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}v_{k}^{N}(W_{k-1}^{j})\right)-\sigma^{\prime}\left(w^{\top}X_{k}+\langle b^{\prime}h_{k}(w^{\prime}),\,\lambda\rangle\right)\right)\nabla h(w)^{\top}X_{k},\,\lambda^{N}\right\rangle\] \[M_{k}^{5,N} =\left\langle h_{k+1}(w)h(w),\lambda^{N}-\lambda\right\rangle\] \[M_{k}^{6,N} =\left\langle c^{2}\sigma^{\prime}\left(w^{\top}X_{k}+\langle b^{\prime}h_{k}(w^{\prime}),\,\lambda\rangle\right)\nabla h(w)^{\top}X_{k},\lambda^{N}-\lambda\right\rangle\] This implies \[\mathbb{E}\left|\delta^{(2)}g_{k}^{N}(h)-\delta^{(3)}g_{k}^{N}(h)\right|^{2}\leq\frac{\alpha^{2}}{N^{2}}\mathbb{E}\left[\left|\psi^{N}(g_{k}^{N}(h_{k+1}))-Y_{k}\right|^{2}\left(\sum_{\bullet=1}^{6}M_{k}^{\bullet,N}\right)^{2}\right]\leq\frac{6C}{N^{2-2\gamma}}\left(\sum_{\bullet=1}^{6}\mathbb{E}\left[M_{k}^{\bullet,N}\right]^{2}\right).\] Now note that \[\left|M_{k}^{1,N}\right| =\left|\frac{1}{N}\sum_{i=1}^{N}\left(v_{k+1}^{N}(W_{k}^{i})h(W_{k}^{i})-v_{k+1}^{N}(W_{0}^{i})h(W_{0}^{i})\right)\right|\leq\frac{1}{N}\sum_{i=1}^{N}\left[\left|v_{k+1}^{N}(W_{k}^{i})(h(W_{k}^{i})-h(W_{0}^{i}))\right|+\left|(v_{k+1}^{N}(W_{k}^{i})-v_{k+1}^{N}(W_{0}^{i}))h(W_{0}^{i})\right|\right]\leq\frac{C}{N}\sum_{i=1}^{N}\left|W_{k}^{i}-W_{0}^{i}\right|\stackrel{(2.4)}{\leq}\frac{C_{T}}{N^{1-\beta-\gamma}}, \tag{7.5}\] and \[\left|M_{k}^{2,N}\right| =\left|\frac{1}{N}\sum_{i=1}^{N}\left((C_{k}^{i})^{2}\sigma^{\prime}\left((W_{k}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}v_{k}^{N}(W_{k-1}^{j})\right)\nabla h(W_{k}^{i})^{\top}X_{k}-(C_{0}^{i})^{2}\sigma^{\prime}\left((W_{0}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}v_{k}^{N}(W_{k-1}^{j})\right)\nabla h(W_{0}^{i})^{\top}X_{k}\right)\right|\] \[\leq\frac{1}{N}\sum_{i=1}^{N}\Bigg{[}\left|(C_{k}^{i})^{2}-(C_{0}^{i})^{2}\right|\underbrace{\left|\sigma^{\prime}\left((W_{k}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}v_{k}^{N}(W_{k-1}^{j})\right)\nabla h(W_{k}^{i})^{\top}X_{k}\right|}_{\leq C}+\underbrace{(C_{0}^{i})^{2}}_{\leq 1}\left|\sigma^{\prime}\left((W_{k}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}v_{k}^{N}(W_{k-1}^{j})\right)\nabla h(W_{k}^{i})^{\top}X_{k}-\sigma^{\prime}\left((W_{0}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}v_{k}^{N}(W_{k-1}^{j})\right)\nabla h(W_{0}^{i})^{\top}X_{k}\right|\Bigg{]}\] \[\leq\frac{1}{N}\sum_{i=1}^{N}\Bigg{[}C\left|C_{k}^{i}-C_{0}^{i}\right|\underbrace{\left|C_{k}^{i}+C_{0}^{i}\right|}_{\leq C_{T}}+\underbrace{\left|\sigma^{\prime}\left((W_{k}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}v_{k}^{N}(W_{k-1}^{j})\right)-\sigma^{\prime}\left((W_{0}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}v_{k}^{N}(W_{k-1}^{j})\right)\right|}_{\leq C_{\sigma}|W_{k}^{i}-W_{0}^{i}|}\underbrace{\left|\nabla h(W_{k}^{i})^{\top}X_{k}\right|}_{\leq C}+\underbrace{\left|\sigma^{\prime}\left((W_{0}^{i})^{\top}X_{k}+\frac{1}{N}\sum_{j=1}^{N}B_{k}^{j}v_{k}^{N}(W_{k-1}^{j})\right)\right|\left|X_{k}\right|}_{\leq C_{\sigma}}\underbrace{\left|\nabla h(W_{k}^{i})-\nabla h(W_{0}^{i})\right|}_{\leq Cd^{2}|W_{k}^{i}-W_{0}^{i}|}\Bigg{]}\] \[\leq\frac{C_{T}}{N}\sum_{i=1}^{N}\left[|C_{k}^{i}-C_{0}^{i}|+|W_{k}^{i}-W_{0}^{i}|\right]\stackrel{(2.4)}{\leq}\frac{C_{T}}{N^{1-\beta-\gamma}}. \tag{7.6}\] Furthermore, \[\left|M_{k}^{3,N}\right|\leq\frac{1}{N}\sum_{i=1}^{N}\left|v_{k+1}^{N}(W_{0}^{i})-h_{k+1}(W_{0}^{i})\right|\left|h(W_{0}^{i})\right|\leq\frac{C}{N}\sum_{i=1}^{N}\left|\Gamma_{k+1}^{N}(W_{0}^{i})\right|,\] where \(\Gamma_{k}^{N}(w)=v_{k}^{N}(w)-h_{k}(w)\). By Proposition 3.8, and recalling the definitions of \(E^{N,1},E^{N,2}\), we have \[\mathbb{E}\left|M_{k}^{3,N}\right|^{2} \leq\frac{C}{N}\sum_{i=1}^{N}\mathbb{E}\left|\Gamma_{k+1}^{N}(W_{0}^{i})\right|^{2}\stackrel{(6.12)}{\leq}C\left[\mathbb{E}\left[E_{k+1}^{N,1}\right]^{2}+\mathbb{E}\left[E_{k+1}^{N,2}\right]^{2}\right]\stackrel{(6.13),(6.14)}{\leq}C\left[\frac{2}{N}+2C_{\sigma}^{2}\mathbb{E}\left(E_{k}^{N,1}\right)^{2}+\frac{2C_{T}}{N^{2-2\beta-2\gamma}}+2C_{\sigma}^{2}\mathbb{E}\left(E_{k}^{N,2}\right)^{2}\right]\leq\frac{C_{T}}{N^{2-2\beta-2\gamma}},\] where the last step uses Lemmas 6.3 and 6.4. Next, we have \[\left|M_{k}^{4,N}\right|\leq C\left[\left|E_{k}^{N,1}\right|+\left|E_{k}^{N,2}\right|\right]\implies\mathbb{E}\left[M_{k}^{4,N}\right]^{2}\leq\frac{C_{T}}{N^{2-2\beta-2\gamma}}, \tag{7.7}\] again by Lemmas 6.3 and 6.4. The remaining two terms can be bounded using arguments similar to those establishing equation (6.13).
Formally, we note that \(h_{k+1}(W_{0}^{i})h(W_{0}^{i})\,\mid\,h_{k+1}\) is conditionally mutually iid for \(i=1,\cdots,N\), so \[\mathbb{E}\left|M_{k}^{5,N}\right|^{2} =\mathbb{E}\left(\left\langle h_{k+1}(w)h(w),\lambda^{N}-\lambda\right\rangle^{2}\right)=\mathbb{E}\left[\mathbb{E}\left[\left\langle h_{k+1}(w)h(w),\lambda^{N}-\lambda\right\rangle^{2}\,\middle|\,h_{k+1}\right]\right]\] \[=\mathbb{E}\left[\mathbb{E}\left[\left(\frac{1}{N}\sum_{i=1}^{N}\left(h_{k+1}(W_{0}^{i})h(W_{0}^{i})-\int h_{k+1}(w^{\prime})h(w^{\prime})\,\lambda(db^{\prime}\,dw^{\prime})\right)\right)^{2}\,\middle|\,h_{k+1}\right]\right]\] \[=\frac{1}{N}\mathbb{E}\left[\mathbb{E}\left[\left(h_{k+1}(W^{1})h(W^{1})\right)^{2}\,\middle|\,h_{k+1}\right]-\left(\int h_{k+1}(w^{\prime})h(w^{\prime})\,\lambda(db^{\prime}\,dw^{\prime})\right)^{2}\right]\leq\frac{C}{N}.\] Noting also that \((C_{0}^{i})^{2}\sigma^{\prime}\left((W_{0}^{i})^{\top}X_{k}+\langle b^{\prime}h_{k}(w^{\prime}),\,\lambda\rangle\right)\nabla h(W_{0}^{i})^{\top}X_{k}\,|\,h_{k},X_{k}\) is conditionally mutually iid, we can apply the exact same argument to show that \[\mathbb{E}\left|M_{k}^{6,N}\right|^{2}\leq\frac{\mathbb{E}\left[\left((C_{0}^{1})^{2}\sigma^{\prime}\left((W_{0}^{1})^{\top}X_{k}+\langle b^{\prime}h_{k}(w^{\prime}),\,\lambda\rangle\right)\nabla h(W_{0}^{1})^{\top}X_{k}\right)^{2}\right]}{N}\leq\frac{C}{N}.\] Finally, since \(\beta>1/2\) and \(\gamma>0\), we have \(2-2\beta-2\gamma<1-2\gamma<1\), so \(C_{T}N^{-1}=o(N^{-(2-2\beta-2\gamma)})\). Therefore, one can sum up all the fluctuation terms to obtain \[\mathbb{E}\left|\delta^{(2)}g_{k}^{N}(h)-\delta^{(3)}g_{k}^{N}(h)\right|^{2}\leq\frac{C}{N^{2-2\gamma}}\frac{C_{T}}{N^{2-2\beta-2\gamma}}=\frac{C_{T}}{N^{4-2\beta-4\gamma}}.\] Arguing as before, one can now conclude that \[\mathbb{E}\left|g_{t}^{N}(h)-g_{0}^{N}(h)-\sum_{k=0}^{\lfloor Nt\rfloor-1}\delta^{(3)}g_{k}^{N}(h)\right|^{2}\leq\frac{C_{T}}{N^{2-2\beta-4\gamma}}. \tag{7.8}\]

### Removal of the clipping function

So far, we have shown that one can study the process \(g_{m}^{N}(h)\) by looking at the increment \(\delta^{(3)}g_{k}^{N}\). To simplify the computation, let us recall that \[h_{k+1}(w)=[\varsigma(X_{k},Z_{k},Y_{k},h_{k})](w)=\sigma(w^{\top}X_{k}+\langle b^{\prime}h_{k}(w^{\prime}),\,\lambda\rangle),\] and define the kernel \(\mathcal{K}_{x,\lambda}(\cdot,\cdot):\mathcal{H}\times\mathcal{H}\to\mathbb{R}\) for all \(x\): \[\mathcal{K}_{x,\lambda}(h,\mathsf{h})=\left\langle\sigma(w^{\top}x+\langle b^{\prime}\mathsf{h}(w^{\prime}),\,\lambda\rangle)h(w)+c^{2}\sigma^{\prime}(w^{\top}x+\langle b^{\prime}\mathsf{h}(w^{\prime}),\lambda\rangle)\nabla h(w)^{\top}x,\,\lambda\right\rangle,\] so that, by the definition of \(\mathcal{K}_{x,\lambda}\), \[\delta^{(3)}g_{k}^{N}(h) =-\frac{\alpha}{N}(\psi^{N}(g_{k}^{N}(h_{k+1}))-Y_{k})\left\langle h_{k+1}(w)h(w)+c^{2}\sigma^{\prime}\left(w^{\top}X_{k}+\langle b^{\prime}h_{k}(w^{\prime}),\lambda\rangle\right)\nabla h(w)^{\top}X_{k},\,\lambda\right\rangle\] \[=-\frac{\alpha}{N}(\psi^{N}(g_{k}^{N}(h_{k+1}))-Y_{k})\left\langle\sigma(w^{\top}X_{k}+\langle b^{\prime}h_{k}(w^{\prime}),\,\lambda\rangle)h(w)+c^{2}\sigma^{\prime}\left(w^{\top}X_{k}+\langle b^{\prime}h_{k}(w^{\prime}),\lambda\rangle\right)\nabla h(w)^{\top}X_{k},\,\lambda\right\rangle\] \[=-\frac{\alpha}{N}(\psi^{N}(g_{k}^{N}(\varsigma(X_{k},Z_{k},Y_{k},h_{k})))-Y_{k})\,\mathcal{K}_{X_{k},\lambda}(h,h_{k}).
\tag{7.9}\] We have assumed in Assumption 2.1 that \(|(X_{k},Z_{k})|\leq 1\), which implies \(|X_{k}|\leq 1\), so we may restrict our discussion of the kernel \(\mathcal{K}_{x,\lambda}\) to \(|x|\leq 1\). By Assumption 2.3 on the activation function \(\sigma(\cdot)\) and the space \(\mathcal{H}\), we have \[|\mathcal{K}_{x,\lambda}(h,\mathsf{h})|\leq C.\] The main objective of this section is to remove the clipping function from \(\delta^{(3)}g_{k}^{N}(h)\) and to set the initial value of the output evolution to zero, without introducing too much error.

**Lemma 7.4** (Uniform integrability of \(g_{m}^{N}(h_{m+1})\)).: _For \(m\leq\lfloor NT\rfloor-1\), we have \(\mathbb{E}\left[g_{m}^{N}(h_{m+1})\right]^{2}\leq C_{T}\)._

Proof.: Using the independence and identical distribution of \((C_{0}^{i},W_{0}^{i})\) for \(i=1,\cdots,N\), and the fact that \(C_{0}^{i}\) and \(W_{0}^{i}\) are independent of each other with \(C_{0}^{i}\) having zero mean, we have \[\mathbb{E}|g_{0}^{N}(h_{1})|^{2} =\mathbb{E}\left[\mathbb{E}\left[\left(\frac{1}{N^{\beta}}\sum_{i=1}^{N}C_{0}^{i}\sigma((W_{0}^{i})^{\top}X_{0})\right)^{2}\,\bigg{|}\,X_{0}\right]\right]=\mathbb{E}\left[\frac{1}{N^{2\beta}}\sum_{i=1}^{N}\mathbb{E}\left[\left(C_{0}^{i}\sigma((W_{0}^{i})^{\top}X_{0})\right)^{2}\,\bigg{|}\,X_{0}\right]\right]\leq\mathbb{E}\left[\frac{1}{N^{2\beta}}\sum_{i=1}^{N}1\right]=\frac{1}{N^{2\beta-1}}.\] Therefore by (7.8) we have \[\mathbb{E}|g_{m}^{N}(h_{m+1})|^{2} \leq 2\mathbb{E}\left|\sum_{k=0}^{m-1}\delta^{(3)}g_{k}^{N}(h_{m+1})\right|^{2}+\frac{C_{T}}{N^{2-2\beta-4\gamma}}\leq 2m\sum_{k=0}^{m-1}\mathbb{E}\left[\delta^{(3)}g_{k}^{N}(h_{m+1})\right]^{2}+\frac{C_{T}}{N^{2-2\beta-4\gamma}}\] \[\leq\frac{2CT}{N}\sum_{k=0}^{m-1}2\left[\mathbb{E}\left|\psi^{N}(g_{k}^{N}(h_{k+1}))\right|^{2}+\mathbb{E}\left|Y_{k}\right|^{2}\right]+\frac{C_{T}}{N^{2-2\beta-4\gamma}}\leq\frac{4CT}{N}\sum_{k=0}^{m-1}\left[\mathbb{E}\left|g_{k}^{N}(h_{k+1})\right|^{2}+C_{y}^{2}\right]+\frac{C_{T}}{N^{2-2\beta-4\gamma}}\] \[\leq\frac{4CT}{N}\sum_{k=0}^{m-1}\mathbb{E}\left|g_{k}^{N}(h_{k+1})\right|^{2}+4CC_{y}^{2}T+\frac{C_{T}}{N^{2-2\beta-4\gamma}}\leq\frac{4CT}{N}\sum_{k=0}^{m-1}\mathbb{E}\left|g_{k}^{N}(h_{k+1})\right|^{2}+C_{T}.\] We may then use the discrete Gronwall inequality to conclude that \[\mathbb{E}|g_{m}^{N}(h_{m+1})|^{2}\leq C_{T}\exp\left(\frac{4CmT}{N}\right)\leq C_{T}.\] Because the expectation \(\mathbb{E}|g_{m}^{N}(h_{m+1})|^{2}\) is uniformly bounded for \(m\leq\lfloor NT\rfloor\), we may invoke Markov's inequality to prove the following:

**Lemma 7.5** (Removal of clipping function in \(\delta^{(3)}\)).: _Define_ \[\delta^{(4)}g_{k}^{N}(h)=-\frac{\alpha}{N}(g_{k}^{N}(h_{k+1})-Y_{k})\mathcal{K}_{X_{k},\lambda}(h,h_{k}),\] _then for all \(h\in\mathcal{H}\), the following holds for all \(k\leq\lfloor NT\rfloor\)_ \[\mathbb{E}\left|\delta^{(3)}g_{k}^{N}(h)-\delta^{(4)}g_{k}^{N}(h)\right|\leq\frac{C_{T}}{N^{1+\gamma}}.\]

Proof.: Note that \[\left|\delta^{(3)}g_{k}^{N}(h)-\delta^{(4)}g_{k}^{N}(h)\right|=\frac{\alpha}{N}\left|\psi^{N}(g_{k}^{N}(h_{k+1}))-g_{k}^{N}(h_{k+1})\right|\left|\mathcal{K}_{X_{k},\lambda}(h,h_{k})\right|\leq\frac{C}{N}\left|\psi^{N}(g_{k}^{N}(h_{k+1}))-g_{k}^{N}(h_{k+1})\right|.\] Recall that when \(|g_{k}^{N}(h_{k+1})|\leq N^{\gamma}\) then \(\psi^{N}(g_{k}^{N}(h_{k+1}))=g_{k}^{N}(h_{k+1})\), so \[\psi^{N}(g_{k}^{N}(h_{k+1}))-g_{k}^{N}(h_{k+1}) =(\psi^{N}(g_{k}^{N}(h_{k+1}))-g_{k}^{N}(h_{k+1}))\left(\mathbf{1}
_{\{|g_{k}^{N}(h_{k+1})|>N^{\gamma}\}}+\mathbf{1}_{\{|g_{k}^{N}(h_{k+1})|\leq N^{\gamma}\}}\right)\] \[=(\psi^{N}(g_{k}^{N}(h_{k+1}))-g_{k}^{N}(h_{k+1}))\mathbf{1}_{\{|g_{k}^{N}(h_{k+1})|>N^{\gamma}\}}.\] Combining with \(|\psi^{N}(x)|\leq|x|\), we have \[\mathbb{E}\left|\delta^{(3)}g_{k}^{N}(h)-\delta^{(4)}g_{k}^{N}(h)\right| \leq\frac{C}{N}\mathbb{E}\left|\psi^{N}(g_{k}^{N}(h_{k+1}))-g_{k}^{N}(h_{k+1})\right|\mathbf{1}_{\{|g_{k}^{N}(h_{k+1})|>N^{\gamma}\}}\leq\frac{C}{N}\mathbb{E}\left[\left|\psi^{N}(g_{k}^{N}(h_{k+1}))\right|+\left|g_{k}^{N}(h_{k+1})\right|\right]\mathbf{1}_{\{|g_{k}^{N}(h_{k+1})|>N^{\gamma}\}}\] \[\leq\frac{2C}{N}\mathbb{E}\left|g_{k}^{N}(h_{k+1})\right|\mathbf{1}_{\{|g_{k}^{N}(h_{k+1})|>N^{\gamma}\}}\leq\frac{2C}{N}\sqrt{\mathbb{E}\left|g_{k}^{N}(h_{k+1})\right|^{2}}\sqrt{\mathbb{E}\left[\mathbf{1}_{\{|g_{k}^{N}(h_{k+1})|>N^{\gamma}\}}\right]}\leq\frac{C_{T}}{N}\sqrt{\frac{\mathbb{E}\left|g_{k}^{N}(h_{k+1})\right|^{2}}{N^{2\gamma}}}=\frac{C_{T}}{N^{1+\gamma}}.\] Arguing as before, we have \[\mathbb{E}\left|g_{t}^{N}(h)-g_{0}^{N}(h)-\sum_{k=0}^{\lfloor Nt\rfloor-1}\delta^{(4)}g_{k}^{N}(h)\right|\leq\frac{C_{T}}{N^{(1-\beta-2\gamma)\wedge\gamma}}. \tag{7.10}\] We now record a simple observation regarding the boundedness of the second moment of \(g_{0}^{N}(h)\).

**Lemma 7.6**.: _For all \(h\in\mathcal{H}\),_ \[\mathbb{E}\left[g_{0}^{N}(h)\right]^{2}\leq\frac{1}{N^{2\beta-1}}.\]

Proof.: Using the independence and identical distribution of \((C_{0}^{i},W_{0}^{i})\) for \(i=1,\cdots,N\), and the fact that \(C_{0}^{i}\) and \(W_{0}^{i}\) are independent of each other with \(C_{0}^{i}\) having zero mean, we have \[\mathbb{E}\left[g_{0}^{N}(h)\right]^{2}=\frac{1}{N^{2\beta}}\mathbb{E}\left[\sum_{i=1}^{N}C_{0}^{i}h(W_{0}^{i})\right]^{2}=\frac{\mathbb{E}\left[C_{0}^{1}h(W_{0}^{1})\right]^{2}}{N^{2\beta-1}}\leq\frac{1}{N^{2\beta-1}}.\]

_Remark 7.7_.: The above remains true if we replace the fixed \(h\) with elements of the random sequence \((h_{k})_{k\geq 0}\), as the sequence \(h_{k}\) is independent of the initialisation \((C_{0}^{i},W_{0}^{i})_{i=1}^{N}\).

This allows us to consider the new evolution \(\varphi_{k}^{N}(h)\) satisfying \[\varphi_{m}^{N}(h) =\sum_{k=0}^{m-1}\triangle\varphi_{k}^{N}(h),\quad\varphi_{0}^{N}(h)=0 \tag{7.11}\] \[\triangle\varphi_{k}^{N}(h) =-\frac{\alpha}{N}(\varphi_{k}^{N}(h_{k+1})-Y_{k})\mathcal{K}_{X_{k},\lambda}(h,h_{k})=-\frac{\alpha}{N}(\varphi_{k}^{N}(\varsigma(X_{k},Z_{k},Y_{k},h_{k}))-Y_{k})\mathcal{K}_{X_{k},\lambda}(h,h_{k}).\]

**Lemma 7.8** (\(\varphi_{m}^{N}(h)\) approximates \(g_{m}^{N}(h)\)).: _For all \(m\leq\lfloor NT\rfloor\) and \(h\in\mathcal{H}\),_ \[\sup_{h\in\mathcal{H}}\mathbb{E}\left|g_{m}^{N}(h)-\varphi_{m}^{N}(h)\right|\leq\frac{C_{T}}{N^{(1-\beta-2\gamma)\wedge\gamma\wedge(\beta-1/2)}}. \tag{7.12}\]

Proof.: In light of Lemma 7.6, we know that \[\mathbb{E}\left|g_{m}^{N}(h)-\varphi_{m}^{N}(h)\right| \leq\mathbb{E}\left|g_{m}^{N}(h)-g_{0}^{N}(h)-\varphi_{m}^{N}(h)\right|+\mathbb{E}\left|g_{0}^{N}(h)\right|\] \[\leq\mathbb{E}\left|\sum_{k=0}^{m-1}(\delta^{(4)}g_{k}^{N}(h)-\triangle\varphi_{k}^{N}(h))\right|+\frac{1}{N^{\beta-1/2}}+\frac{C_{T}}{N^{1-\beta-2\gamma}}\] \[\leq\frac{C}{N}\sum_{k=0}^{m-1}\mathbb{E}\left|g_{k}^{N}(h_{k+1})-\varphi_{k}^{N}(h_{k+1})\right|+\frac{1}{N^{\beta-1/2}}+\frac{C_{T}}{N^{(1-\beta-2\gamma)\wedge\gamma}}.
\tag{7.13}\] The above inequality is also true when replacing \(h=h_{m+1}\), in particular \[\mathbb{E}\left|g_{m}^{N}(h_{m+1})-\varphi_{m}^{N}(h_{m+1})\right| \leq\frac{C}{N}\sum_{k=0}^{m-1}\mathbb{E}\left|g_{k}^{N}(h_{k+1})- \varphi_{k}^{N}(h_{k+1})\right|+\frac{1}{N^{\beta-1/2}}+\frac{C_{T}}{N^{(1- \beta-2\gamma)\wedge\gamma}}; \tag{7.14}\] so by discrete Gronwall's inequality, \[\mathbb{E}\left|g_{m}^{N}(h_{m+1})-\varphi_{m}^{N}(h_{m+1})\right| \leq\frac{C_{T}\exp(Cm/N)}{N^{(1-\beta-2\gamma)\wedge\gamma\wedge(\beta-1/2)} }\leq\frac{C_{T}}{N^{(1-\beta-2\gamma)\wedge\gamma\wedge(\beta-1/2)}}. \tag{7.15}\] We can then plug in (7.15) into (7.13) to obtain our desired result. With the absence of initialisation in the sequence \(\varphi_{m}^{N}(h)\), one can prove that \(\varphi_{m}^{N}(h)\) is surely bounded for any \(h\in\mathcal{H}\). **Lemma 7.9**.: _For \(m\leq\lfloor NT\rfloor\) and \(h\in\mathcal{H}\), \(|\varphi_{m}^{N}(h)|\leq C_{T}\) surely. Moreover, \(\varphi_{m}^{N}(h)\) is \(C_{T}\) globally Lipschitz with respect to the \(H^{1}(\lambda)\) norm whenever \(m\leq\lfloor NT\rfloor\). Finally, consider the map for \(\mathsf{H}=(\mathsf{x},\mathsf{z},\mathsf{y},\mathsf{h})\)_ \[G^{m,h}:\mathsf{H}\in\mathcal{X}\mapsto-\alpha(\varphi_{m}^{N}(\varsigma( \mathsf{H}))-\mathsf{y})\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h}),\] _so that_ \[N\times\triangle\varphi_{m}^{N}(h)=G^{m,h}(X_{m},Z_{m},Y_{m},h_{m}). \tag{7.16}\] _Then for fixed \(h\in\mathcal{H}\) and \(m\leq\lfloor NT\rfloor\),_ * \(\int|G^{m,h}|\,d\mu<+\infty\)_, where_ \(\mu\) _is the invariant measure of chain_ \((H_{k})\) _as induced in Theorem_ 3.6_, and_ * \(G^{m,h}\) _is_ \(C_{T}\)_-Lipschitz in_ \(\mathcal{X}\)_._ Proof.: From the evolution equation, one sees that \[\left|\varphi_{m}^{N}(h)\right|\leq\frac{C}{N}\sum_{k=0}^{m-1}(\left|\varphi_ {k}^{N}(h_{k+1})\right|+C), \tag{7.17}\] The above equation is true for \(h=h_{k+1}\), so by discrete Gronwall's inequality one has \[\left|\varphi_{m}^{N}(h_{m+1})\right|\leq C\exp(Cm/N)\leq Ce^{CT}=:C_{T}.\] Substituting this into (7.17) yields our estimate \(|\varphi_{m}^{N}(h)|\leq C\). To show the global Lipschitzness of \(\varphi_{m}^{N}(\cdot)\), we note that for any \(h,\tilde{h}\in H^{1}(\lambda)\) \[|\varphi_{m}^{N}(h)-\varphi_{m}^{N}(\tilde{h})| \leq\sum_{k=0}^{m}\frac{\alpha}{N}|\varphi_{k}^{N}(h_{k+1})-Y_{k} |\int\left[|h(w)-\tilde{h}(w)|+|\nabla h(w)-\nabla\tilde{h}(w)|\right]\, \lambda(dw)\] \[\leq C_{T}\left\|h-\tilde{h}\right\|_{H^{1}(\lambda)}. \tag{7.18}\] It is then immediately true that \(\int|G^{m,h}(\mathsf{x},\mathsf{z},\mathsf{y},\mathsf{h})|\,\mu(d\mathsf{x},d \mathsf{z},d\mathsf{y},d\mathsf{h})<+\infty\). Finally we recall that if \((x,z,y,h)\in\mathcal{X}\) then \(|x|\leq 1\) and \(|y|\leq C_{y}\), and that \(|\mathcal{K}_{x,\lambda}(h,\mathsf{h})|\leq C\). 
Therefore, for any \(\mathsf{H}=(\mathsf{x},\mathsf{z},\mathsf{y},\mathsf{h}),\tilde{\mathsf{H}}=( \tilde{\mathsf{x}},\tilde{\mathsf{z}},\tilde{\mathsf{y}},\tilde{\mathsf{h}})\) with \(|\mathsf{x}|\vee|\tilde{\mathsf{x}}|\leq 1\) and \(|\mathsf{y}|\vee|\tilde{\mathsf{y}}|\leq C_{y}\), \[|\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})-\mathcal{K}_{ \mathsf{x},\lambda}(h,\tilde{\mathsf{h}})| \leq\alpha\Big{[}\left<\left|h(w)\right|\sigma(w^{\top}\mathsf{x }+\langle b^{\prime}\mathsf{h}(w^{\prime}),\lambda\rangle)-\sigma(w^{\top} \tilde{\mathsf{x}}+\langle b^{\prime}\tilde{\mathsf{h}}(w^{\prime}),\,\lambda \rangle)|,\,\lambda\right>\] \[\quad+\left<c^{2}|\nabla h(w)||\sigma^{\prime}(w^{\top}\mathsf{x }+\langle b^{\prime}\mathsf{h}(w^{\prime}),\lambda\rangle)-\sigma^{\prime}(w^ {\top}\tilde{\mathsf{x}}+\langle b^{\prime}\tilde{\mathsf{h}}(w^{\prime}),\, \lambda\rangle)|,\,\lambda\right>\Big{]}\] \[\leq 4\alpha C_{\sigma}\left[\int w^{\top}(\mathsf{x}-\tilde{ \mathsf{x}})\,\lambda(dw)+\int|\mathsf{h}(w^{\prime})-\tilde{\mathsf{h}}(w^{ \prime})|\,\lambda(dw^{\prime})\right]\] \[\leq 4\alpha C_{\sigma}\left[|\mathsf{x}-\tilde{\mathsf{x}}|+\| \mathsf{h}-\tilde{\mathsf{h}}\|_{H^{1}(\lambda)}\right].\] We also recall the controls from inequality (6.5) \[\|[\varsigma(\mathsf{H})](w)-\varsigma(\tilde{\mathsf{H}})](w)\| ^{2}_{H^{1}(\lambda)} =\left\|\sigma(w^{\top}\mathsf{x}+\langle b^{\prime}\mathsf{h}( w^{\prime}),\lambda\rangle)-\sigma(w^{\top}\tilde{\mathsf{x}}+\langle b^{ \prime}\tilde{\mathsf{h}}(w^{\prime}),\lambda\rangle)\right\|^{2}_{H^{1}( \lambda)}\] \[\leq 8C_{\sigma}^{2}\left[|\mathsf{x}-\tilde{\mathsf{x}}|^{2}+\| \mathsf{h}-\tilde{\mathsf{h}}\|^{2}_{H^{1}(\lambda)}\right],\] \[\leq 8C_{\sigma}^{2}\|\mathsf{H}-\tilde{\mathsf{H}}\|^{2} \tag{7.19}\] so by (7.18) we have \[\left|\varphi_{m}^{N}(\varsigma(\mathsf{H}))-\varphi_{m}^{N}(\varsigma(\tilde{ \mathsf{H}}))\right|^{2}\leq C_{T}\|\mathsf{H}-\tilde{\mathsf{H}}\|^{2}_{ \mathcal{X}},\] This yields the following control \[\implies\,|G^{m,h}(\mathsf{H})-G^{m,h}(\tilde{\mathsf{H}})| \leq\alpha|\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})|\left[ \left|\varphi_{m}^{N}(\varsigma(\mathsf{H}))-\varphi_{m}^{N}(\varsigma(\tilde {\mathsf{H}}))\right|+|\mathsf{y}-\tilde{\mathsf{y}}|\right]\] \[\leq C_{T}\|\mathsf{H}-\tilde{\mathsf{H}}\|_{\mathcal{X}},\] which completes our proof. In particular, the Lipschitz constant for \(G^{0,h}\) is independent from \(T\) as \(\varphi_{0}^{N}(\cdot)\) is set to zero. ### Weak convergence analysis We study the difference between the random evolution \[\varphi_{m}^{N}(h)=\sum_{k=0}^{m-1}\triangle\varphi_{k}^{N}(h)\] and the evolution \[\tilde{\varphi}_{m}^{N}(h)=\sum_{k=0}^{m-1}\delta\varphi_{k}^{N}(h),\] where \[\delta\varphi_{k}^{N}(h)=-\frac{\alpha}{N}\int_{\mathcal{X}}(\varphi_{k}^{N}( \varsigma(\mathsf{H}))-\mathsf{y})\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{ h})\,\mu(d\mathsf{H}).\] We study this difference by first constructing the associated Poisson equation of the chain of memories [27]. We write the transition kernel of the Markov chain \((H_{k})_{k\geq 1}\) as \(P\), and recall that the Markov chain admits a limiting invariant measure \(\mu\). 
**Definition 7.10**.: Given \(G:\mathcal{X}\to\mathbb{R}\) a measurable function with \(\int|G|d\mu<+\infty\), the Poisson equation is a functional equation on \(\hat{G}\) \[\hat{G}(H)-\int_{\mathcal{X}}\hat{G}(\bar{H})\,P(H,d\bar{H})=G(H)-\int_{ \mathcal{X}}G(\bar{H})\,\mu(d\bar{H}), \tag{7.20}\] where \(H=(x,z,y,h)\), \(\bar{H}=(\bar{x},\bar{z},\bar{y},\bar{h})\), and \(P\) is the transitional kernel of the Markov Chain \((H_{k})\). In our case, the Poisson equation is used to replace the integral with respect to the invariant measure \(\mu\) with an integral with respect to the transition kernel \(P\). We note that the following expansion converges for any \(H=(x,z,y,h)\in\mathcal{X}\) if \(G\) is globally \(C\)-Lipschitz: \[\hat{G}(H)=\sum_{k=0}^{\infty}\left(\int_{\mathcal{X}}G(\bar{H})\,P^{k}(H,d \bar{H})-\int_{\mathcal{X}}G(\bar{H})\,\mu(d\bar{H})\right), \tag{7.21}\] where \(P^{k}=\underbrace{P\circ...\circ P}_{k\text{ times}}\). This is because \[\left|\int_{\mathcal{X}}G(\bar{H})P^{k}(H,d\bar{H})-\int_{\mathcal{X}}G(\bar {H})\mu(d\bar{H})\right|\leq C\mathsf{Wass}_{2}(P^{k}(H,\cdot),\mu)\leq Cq_{0} ^{k}. \tag{7.22}\] As a result, for all \(H\) \[|\hat{G}(H)|=\left|\sum_{k=0}^{\infty}\left(\int_{\mathcal{X}}G(\bar{H})\,P^{ k}(H,d\bar{H})-\int_{\mathcal{X}}G(\bar{H})\,\mu(d\bar{H})\right)\right| \leq\sum_{k=0}^{\infty}Cq_{0}^{k}=\frac{C}{1-q_{0}}<+\infty. \tag{7.23}\] **Lemma 7.11**.: _Let \(G\) be a globally \(C\)-Lipschitz function on \(\mathcal{X}\), and define \(\hat{G}\) as in (7.21). Then \(\hat{G}\) is the solution to the Poisson equation (7.20). Moreover \(\hat{G}\) is \(C/(1-q_{0})\)-Lipschitz, where \(q_{0}\) is as defined in Theorem 3.6_ Proof.: We observe that \[\int_{\mathcal{X}}\hat{G}(\bar{H})\,P(H,d\bar{H})=\int_{\mathcal{X}}\left[\sum _{k=0}^{\infty}\left(\int_{\mathcal{X}}G(\mathsf{H})\,P^{k}(\bar{H},d\mathsf{ H})-\int_{\mathcal{X}}G(\mathsf{H})\,\mu(d\mathsf{H})\right)\right]\,P(H,d \bar{H}).\] We can exchange the sum and the integral as the integrand is summable (hence integrable). Noting that \(\mu\) is invariant, we have \[\int_{\mathcal{X}}\hat{G}(\bar{H})\,P(H,d\bar{H})=\sum_{k=0}^{\infty}\left( \int_{\mathcal{X}}G(\bar{H})P^{k+1}(H,d\bar{H})-\int_{\mathcal{X}}G(\bar{H}) \mu(d\bar{H})\right).\] Subtracting this from \(\hat{G}(H)\) yields \[\hat{G}(H)-\int_{\mathcal{X}}\hat{G}(\bar{H})\,P(H,d\bar{H})=\int_{\mathcal{X} }\hat{G}(\bar{H})\,P^{0}(H,d\bar{H})-\int_{\mathcal{X}}G(\bar{H})\,\mu(d\bar{ H})=G(H)-\int_{\mathcal{X}}G(\bar{H})\,\mu(d\bar{H})\] as desired. We further note that if \(H^{\prime}=(x^{\prime},z^{\prime},y^{\prime},h^{\prime})\), then \[|\hat{G}(H)-\hat{G}(H^{\prime})| \leq\sum_{k=0}^{\infty}\left|\int_{\mathcal{X}}G(\bar{H})\,P^{k} (H,d\bar{H})-\int_{\mathcal{X}}G(\bar{H})\,P^{k}(H^{\prime},d\bar{H})\right|\] \[\leq C\sum_{k=0}^{\infty}\mathsf{Wass}_{2}(P^{k}(H,\cdot),P^{k}(H ^{\prime},\cdot))\] \[\leq C\|H-H^{\prime}\|_{\mathcal{X}}\sum_{k=0}^{\infty}q_{0}^{k} \leq\frac{C}{1-q_{0}}\|H-H^{\prime}\|_{\mathcal{X}},\] completing the proof. We now consider the Poisson equations with \(G(\mathsf{H})=G^{m,h}(\mathsf{H})\) as defined in lemma 7.9. These Poisson equations admit a solution for each \(m\leq\lfloor NT\rfloor\) using the expansion in (7.21), for which we will call them \(\hat{G}^{m,h}(\mathsf{H})\). In summary we have \[\hat{G}^{m,h}(\mathsf{H})-\int_{\mathcal{X}}\hat{G}^{m,h}(\bar{\mathsf{H}})\,P( \mathsf{H},d\bar{\mathsf{H}})=G^{m,h}(\mathsf{H})-\int_{\mathcal{X}}G^{m,h}( \bar{\mathsf{H}})\,\mu(d\bar{\mathsf{H}}). 
\tag{7.24}\] Since there is a constant \(C_{T}>0\) such that \(G^{m,h}(\mathsf{H})\) is \(C_{T}\)-Lipschitz (Lemma 7.9) for \(m\leq\lfloor NT\rfloor\) and \(h\in\mathcal{H}\), the above analysis shows that Corollary 7.12 below holds. **Corollary 7.12**.: \[\sup_{h,\mathsf{H}}|\hat{G}^{m,h}(\mathsf{H})|\leq C_{T},\] _and \(\hat{G}^{m,h}(\cdot)\) is \(C_{T}/(1-q_{0})\)-Lipschitz._ With this, we can study the difference between \(\varphi_{m}^{N}(h)\) and \(\tilde{\varphi}_{m}^{N}(h)\) for any \(h\in\mathcal{H}\) and \(m\leq\lfloor NT\rfloor\). Recalling that \(H_{k}=(X_{k},Z_{k},Y_{k},h_{k})\) and letting \(\bar{\mathsf{H}}=(\bar{\mathsf{x}},\bar{\mathsf{z}},\bar{\mathsf{y}},\bar{\mathsf{h}})\), we have \[\varphi_{m}^{N}(h)-\tilde{\varphi}_{m}^{N}(h)=\sum_{k=0}^{m-1}(\triangle\varphi_{k}^{N}(h)-\delta\varphi_{k}^{N}(h))\] \[=\frac{1}{N}\sum_{k=0}^{m-1}\left(G^{k,h}(H_{k})-\int_{\mathcal{X}}G^{k,h}(\bar{\mathsf{H}})\,\mu(d\bar{\mathsf{H}})\right)\] \[=\frac{1}{N}\sum_{k=0}^{m-1}\left(\hat{G}^{k,h}(H_{k})-\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{\mathsf{H}})\,P(H_{k},d\bar{\mathsf{H}})\right)\] \[=Q_{m}^{N,1}(h)+Q_{m}^{N,2}(h)+R^{N,1}(h), \tag{7.25}\] where \[Q_{m}^{N,1}(h) =\frac{1}{N}\sum_{k=1}^{m-1}\left(\hat{G}^{k,h}(H_{k})-\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{\mathsf{H}})\,P(H_{k-1},d\bar{\mathsf{H}})\right)\] \[Q_{m}^{N,2}(h) =\frac{1}{N}\sum_{k=1}^{m-1}\left(\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{\mathsf{H}})\,P(H_{k-1},d\bar{\mathsf{H}})-\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{\mathsf{H}})\,P(H_{k},d\bar{\mathsf{H}})\right)\] \[R^{N,1}(h) =\frac{1}{N}\left(\hat{G}^{0,h}(H_{0})-\int_{\mathcal{X}}\hat{G}^{0,h}(\bar{\mathsf{H}})\,P(H_{0},d\bar{\mathsf{H}})\right)\] Corollary 7.12 immediately leads to \[\sup_{h\in\mathcal{H}}|R^{N,1}(h)|\leq\frac{C_{T}}{N}\overset{N\to\infty}{\rightarrow}0.\] The second term on the RHS of (7.25), i.e. \(Q_{m}^{N,2}\), can be analysed by observing that \[Q_{m}^{N,2}(h) =\frac{\alpha}{N}\sum_{k=1}^{m-1}\left(\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{H})\,P(H_{k-1},d\bar{H})-\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{H})\,P(H_{k},d\bar{H})\right)\] \[=\frac{\alpha}{N}\left(\int_{\mathcal{X}}\hat{G}^{1,h}(\bar{H})P(H_{0},d\bar{H})+\sum_{k=2}^{m-1}\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{H})\,P(H_{k-1},d\bar{H})-\sum_{k=1}^{m-1}\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{H})\,P(H_{k},d\bar{H})\right)\] \[=\frac{\alpha}{N}\left(\int_{\mathcal{X}}\hat{G}^{1,h}(\bar{H})P(H_{0},d\bar{H})+\sum_{k=1}^{m-2}\int_{\mathcal{X}}\hat{G}^{k+1,h}(\bar{H})\,P(H_{k},d\bar{H})-\sum_{k=1}^{m-1}\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{H})\,P(H_{k},d\bar{H})\right)\] \[=\frac{\alpha}{N}\sum_{k=1}^{m-2}\int_{\mathcal{X}}(\hat{G}^{k+1,h}(\bar{H})-\hat{G}^{k,h}(\bar{H}))\,P(H_{k},d\bar{H})+R_{m}^{N,2}(h),\] where \[R_{m}^{N,2}(h)=\frac{\alpha}{N}\bigg(\int_{\mathcal{X}}\hat{G}^{1,h}(\bar{H})P(H_{0},d\bar{H})-\int_{\mathcal{X}}\hat{G}^{m-1,h}(\bar{H})\,P(H_{m-1},d\bar{H})\bigg)\] We therefore naturally break our analysis of \(Q_{m}^{N,2}\) into two parts: **Lemma 7.13**.: _If \(m\leq\lfloor NT\rfloor\), then \(\sup_{h\in\mathcal{H}}|R_{m}^{N,2}(h)|\leq C_{T}/N\)._ Proof.: We note that \(\hat{G}^{1,h}\) is \(C\)-Lipschitz by Lemma 7.11. Moreover, since \(G^{m,h}\) is \(C_{T}\)-Lipschitz whenever \(m\leq\lfloor NT\rfloor\), we know from Corollary 7.12 that \(\hat{G}^{m,h}(\cdot)\) is bounded by some constant \(C_{T}>0\) as well. \[|R_{m}^{N,2}(h)| \leq\frac{\alpha}{N}\bigg[\bigg|\int_{\mathcal{X}}\hat{G}^{1,h}(\bar{H})\,P(H_{0},d\bar{H})-\int_{\mathcal{X}}\hat{G}^{1,h}(\bar{H})\,P(H_{m-1},d\bar{H})\bigg|\] \[\quad+\bigg|\int_{\mathcal{X}}\hat{G}^{1,h}(\bar{H})\,P(H_{m-1},d\bar{H})-\int_{\mathcal{X}}\hat{G}^{m-1,h}(\bar{H})\,P(H_{m-1},d\bar{H})\bigg|\,\bigg]\] \[\leq\frac{\alpha}{N}\left(C\,\mathsf{Wass}_{2}(P(H_{0},\cdot),P(H_{m-1},\cdot))+C_{T}+C_{T}\right)\leq\frac{C_{T}}{N},\] for a constant \(C_{T}\) that depends on \(T\) but changes from line to line. **Lemma 7.14**.: _There is a constant \(C_{T}>0\) such that for all \(m\leq\lfloor NT\rfloor\),_ \[\sup_{\mathsf{H}\in\mathcal{X}}|\Delta\hat{G}^{m,h}(\mathsf{H})|\leq\frac{C_{T}}{N},\] _where \(\Delta\hat{G}^{m,h}(\mathsf{H})=\hat{G}^{m+1,h}(\mathsf{H})-\hat{G}^{m,h}(\mathsf{H})\)._ Proof.: Define \(\Delta G^{m,h}(\mathsf{H})=G^{m+1,h}(\mathsf{H})-G^{m,h}(\mathsf{H})\); then we see that \(\Delta\hat{G}^{m,h}(\mathsf{H})\) is a solution to the Poisson equation for \(\Delta G^{m,h}(\mathsf{H})\), i.e. \[\Delta\hat{G}^{m,h}(\mathsf{H})-\int_{\mathcal{X}}\Delta\hat{G}^{m,h}(\bar{H})\,P(\mathsf{H},d\bar{H})=\Delta G^{m,h}(\mathsf{H})-\int_{\mathcal{X}}\Delta G^{m,h}(\bar{\mathsf{H}})\,\mu(d\bar{\mathsf{H}}).\] As seen in (7.22)-(7.23), the boundedness of \(\Delta\hat{G}^{m,h}(\mathsf{H})\) depends on the Lipschitz constant of \(\Delta G^{m,h}(\mathsf{H})\). The proof is completed if we prove that \(\Delta G^{m,h}(\mathsf{H})\) is \(C_{T}/N\)-Lipschitz. To begin, we note that \[\Delta G^{m,h}(\mathsf{H}) =G^{m+1,h}(\mathsf{H})-G^{m,h}(\mathsf{H})\] \[=\alpha(\varphi_{m+1}^{N}(\varsigma(\mathsf{H}))-\varphi_{m}^{N}(\varsigma(\mathsf{H})))\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\] \[\stackrel{(7.16)}{=}\frac{\alpha}{N}G^{m,\varsigma(\mathsf{H})}(H_{m})\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h}),\] where \(H_{m}=(X_{m},Z_{m},Y_{m},h_{m})\). Since we know a priori that \(|(X_{m},Z_{m})|\leq 1\), \(|Y_{m}|\leq C_{y}\) and \(\|h_{m}\|_{C_{b}^{2}}\leq C\), we have the following control \[|\mathcal{K}_{X_{m},\lambda}(\varsigma(\mathsf{H}),h_{m})-\mathcal{K}_{X_{m},\lambda}(\varsigma(\tilde{\mathsf{H}}),h_{m})|\] \[\leq\langle|\sigma(w^{\top}X_{m}+\langle b^{\prime}h_{m}(w^{\prime}),\lambda\rangle)([\varsigma(\mathsf{H})](w)-[\varsigma(\tilde{\mathsf{H}})](w))|\] \[\quad+c^{2}|\sigma^{\prime}(w^{\top}X_{m}+\langle b^{\prime}h_{m}(w^{\prime}),\lambda\rangle)(\nabla[\varsigma(\mathsf{H})](w)-\nabla[\varsigma(\tilde{\mathsf{H}})](w))^{\top}X_{m}|,\lambda\rangle\] \[\leq\int_{\mathbb{R}^{d}}|[\varsigma(\mathsf{H})](w)-[\varsigma(\tilde{\mathsf{H}})](w)|\,\lambda(dw)+C_{\sigma}\int_{\mathbb{R}^{d}}|\nabla[\varsigma(\mathsf{H})](w)-\nabla[\varsigma(\tilde{\mathsf{H}})](w)||X_{m}|\,\lambda(dw)\] \[\leq(1+C_{\sigma})\|\varsigma(\mathsf{H})-\varsigma(\tilde{\mathsf{H}})\|_{H^{1}(\lambda)}\stackrel{(7.19)}{\leq}CC_{\sigma}(1+C_{\sigma})\|\mathsf{H}-\tilde{\mathsf{H}}\|_{\mathcal{X}}\] \[\implies |G^{m,\varsigma(\mathsf{H})}(H_{m})-G^{m,\varsigma(\tilde{\mathsf{H}})}(H_{m})|\] \[\leq\alpha|\varphi_{m}^{N}(\sigma(w^{\top}X_{m}+\langle b^{\prime}h_{m}(w^{\prime}),\lambda\rangle))-Y_{m}||\mathcal{K}_{X_{m},\lambda}(\varsigma(\mathsf{H}),h_{m})-\mathcal{K}_{X_{m},\lambda}(\varsigma(\tilde{\mathsf{H}}),h_{m})|\] \[\leq C_{T}\|\mathsf{H}-\tilde{\mathsf{H}}\|_{\mathcal{X}}.\] \[\implies |\Delta G^{m,h}(\mathsf{H})-\Delta G^{m,h}(\tilde{\mathsf{H}})|\] \[=\frac{\alpha}{N}|\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})(G^{m,\varsigma(\mathsf{H})}(H_{m})-G^{m,\varsigma(\tilde{\mathsf{H}})}(H_{m}))+G^{m,\varsigma(\tilde{\mathsf{H}})}(H_{m})(\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})-\mathcal{K}_{\tilde{\mathsf{x}},\lambda}(h,\tilde{\mathsf{h}}))|\] \[\leq\frac{C_{T}}{N}\|\mathsf{H}-\tilde{\mathsf{H}}\|_{\mathcal{X}}.\] So indeed \(\Delta G^{m,h}(\mathsf{H})\) is \(C_{T}/N\)-Lipschitz whenever \(m\leq\lfloor NT\rfloor\), and therefore there is another \(C_{T}>0\) such that \(\sup_{\mathsf{H}\in\mathcal{X}}|\Delta\hat{G}^{m,h}|\leq C_{T}/N\). The above lemmas lead to the control \[\sup_{h\in\mathcal{H}}|Q_{m}^{N,2}(h)|\leq\frac{\alpha}{N}\sum_{k=1}^{m-2}\frac{C_{T}}{N}+\sup_{h\in\mathcal{H}}|R_{m}^{N,2}(h)|\leq\frac{C_{T}}{N}.\] Finally, let us analyse the term \(Q_{m}^{N,1}(h)\). Define \[\tilde{g}_{k}^{N}(h)=\hat{G}^{k,h}(H_{k})-\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{\mathsf{H}})\,P(H_{k-1},d\bar{\mathsf{H}}),\quad k\geq 1,\] so that \(Q_{m}^{N,1}(h)=(\alpha\sum_{k=1}^{m-1}\tilde{g}_{k}^{N}(h))/N\). We define the increasing filtration \(\{\mathcal{F}_{m}\}\), with \(\mathcal{F}_{m}\) the smallest \(\sigma\)-algebra induced by the random variables \((H_{k})_{k\leq m}\). Then \(\{\varphi_{k}^{N}(h)\}_{k\leq m}\) are \(\mathcal{F}_{m-1}\)-measurable for any \(h\in\mathcal{H}\), as they are continuous functions of \((H_{k})_{k\leq m-1}\). Therefore, by the Markov property, \[\mathbb{E}[\tilde{g}_{k}^{N}(h)\,|\,\mathcal{F}_{k-1}]=\mathbb{E}[\hat{G}^{k,h}(H_{k})\,|\,H_{k-1}]-\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{\mathsf{H}})\,P(H_{k-1},d\bar{\mathsf{H}})=0.\] If we define \(H_{k:m}\) to be the sequence \((H_{k},...,H_{m})\), and \(\sigma(\varphi_{k:m}^{N}(h))\) to be the smallest \(\sigma\)-algebra with respect to which each of the random variables \(\varphi_{k:m}^{N}(h)\) is measurable, then for \(m>k\) we can prove recursively \[\mathbb{E}[\tilde{g}_{k}^{N}(h)\tilde{g}_{m}^{N}(h)\,|\,\mathcal{F}_{k-1}] =\mathbb{E}[\mathbb{E}[\tilde{g}_{k}^{N}(h)\tilde{g}_{m}^{N}(h)\,|\,H_{m-1},\sigma(\varphi_{m-1}^{N}(h))]\,|\,\mathcal{F}_{k-1}]\] \[=\mathbb{E}[\mathbb{E}[\tilde{g}_{k}^{N}(h)\tilde{g}_{m}^{N}(h)\,|\,H_{k-1:m-1},\sigma(\varphi_{k-1:m-1}^{N}(h))]\,|\,\mathcal{F}_{k-1}]\] \[=\mathbb{E}[\tilde{g}_{k}^{N}(h)\mathbb{E}[\tilde{g}_{m}^{N}(h)\,|\,H_{k-1:m-1},\sigma(\varphi_{k-1:m-1}^{N}(h))]\,|\,\mathcal{F}_{k-1}]=0.\] Finally, noticing that \(\hat{G}^{k,h}(\cdot)\) is uniformly bounded (see Corollary 7.12), we have \[\mathbb{E}\left[(\tilde{g}_{k}^{N}(h))^{2}\right] =\mathbb{E}\left[\hat{G}^{k,h}(H_{k})-\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{\mathsf{H}})\,P(H_{k-1},d\bar{\mathsf{H}})\right]^{2}\] \[\leq 2\mathbb{E}\left[\hat{G}^{k,h}(H_{k})\right]^{2}+2\mathbb{E}\left[\int_{\mathcal{X}}\hat{G}^{k,h}(\bar{\mathsf{H}})\,P(H_{k-1},d\bar{\mathsf{H}})\right]^{2}\leq C_{T}.\] Therefore \[\mathbb{E}[Q_{m}^{N,1}(h)]^{2}=\frac{\alpha^{2}}{N^{2}}\sum_{j,k=1}^{m-1}\mathbb{E}[\tilde{g}_{j}^{N}(h)\tilde{g}_{k}^{N}(h)]=\frac{\alpha^{2}}{N^{2}}\sum_{k=1}^{m-1}\mathbb{E}[(\tilde{g}_{k}^{N}(h))^{2}]\leq\frac{C_{T}}{N}.\] The above estimates are uniform in \(h\in\mathcal{H}\), so by combining all of them we have **Lemma 7.15**.: _For all \(m\leq\lfloor NT\rfloor\),_ \[\sup_{h}\mathbb{E}\left[\varphi_{m}^{N}(h)-\tilde{\varphi}_{m}^{N}(h)\right]^{2}\leq\frac{C_{T}}{N}.\] Proof.: We collect the above estimates to show that \[\sup_{h}\mathbb{E}\left[\varphi_{m}^{N}(h)-\tilde{\varphi}_{m}^{N}(h)\right]^{2} \leq 3\sup_{h}\mathbb{E}\left[Q_{m}^{N,1}(h)\right]^{2}+3\sup_{h}\mathbb{E}\left[Q_{m}^{N,2}(h)\right]^{2}+3\sup_{h}\mathbb{E}\left[R^{N,1}(h)\right]^{2}\] \[\leq\frac{C_{T}}{N}+\frac{C_{T}}{N^{2}}+\frac{C_{T}}{N^{2}}\leq\frac{C_{T}}{N}.\] Let us then define a new process \(\phi_{m}^{N}(h)\) that satisfies the recursion \[\phi_{m}^{N}(h) =\sum_{k=0}^{m-1}\delta\phi_{k}^{N}(h),\quad\phi_{0}^{N}(h)=0,\] \[\delta\phi_{k}^{N}(h) =-\frac{\alpha}{N}\int_{\mathcal{X}}(\phi_{k}^{N}(\varsigma(\mathsf{H}))-\mathsf{y})\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\,\mu(d\mathsf{H}),\] and study the difference \(\Gamma_{m}^{N}(h)=\varphi_{m}^{N}(h)-\phi_{m}^{N}(h)\).
Then we have \[\Gamma_{m}^{N}(h) =\varphi_{m}^{N}(h)-\tilde{\varphi}_{m}^{N}(h)+\tilde{\varphi}_{m}^{N}(h)-\phi_{m}^{N}(h)\] \[=\varphi_{m}^{N}(h)-\tilde{\varphi}_{m}^{N}(h)-\frac{\alpha}{N}\sum_{k=0}^{m-1}\int_{\mathcal{X}}\Gamma_{k}^{N}(\varsigma(\mathsf{H}))\,\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\,\mu(d\mathsf{H}).\] So \[\mathbb{E}\left[\Gamma_{m}^{N}(h)\right]^{2} \leq 2\mathbb{E}[\varphi_{m}^{N}(h)-\tilde{\varphi}_{m}^{N}(h)]^{2}+\frac{2\alpha}{N^{2}}\mathbb{E}\left[\sum_{k=0}^{m-1}\int_{\mathcal{X}}\Gamma_{k}^{N}(\varsigma(\mathsf{H}))\,\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\,\mu(d\mathsf{H})\right]^{2}\] \[\leq\frac{C_{T}}{N}+\frac{2\alpha}{N}\sum_{k=0}^{m-1}\mathbb{E}\left[\int_{\mathcal{X}}\left(\Gamma_{k}^{N}(\varsigma(\mathsf{H}))\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\right)^{2}\,\mu(d\mathsf{H})\right]\] \[\stackrel{\text{(Tonelli)}}{=}\frac{C_{T}}{N}+\frac{2\alpha}{N}\sum_{k=0}^{m-1}\int_{\mathcal{X}}\mathbb{E}\left[\Gamma_{k}^{N}(\varsigma(\mathsf{H}))\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\right]^{2}\,\mu(d\mathsf{x},d\mathsf{z},d\mathsf{y},d\mathsf{h})\] \[\leq\frac{C_{T}}{N}+\frac{C}{N}\sum_{k=0}^{m-1}\int_{\mathcal{X}}\sup_{\mathsf{h}\in\mathcal{H}}\mathbb{E}\left[\Gamma_{k}^{N}(\mathsf{h})\right]^{2}\,\mu(d\mathsf{x},d\mathsf{z},d\mathsf{y},d\mathsf{h}).\] Defining further \(\tilde{\Gamma}_{m}^{N}=\sup_{\mathsf{h}\in\mathcal{H}}\mathbb{E}[\Gamma_{m}^{N}(\mathsf{h})]^{2}\), we have \[\tilde{\Gamma}_{m}^{N}\leq\frac{C_{T}}{N}+\frac{C_{T}}{N}\sum_{k=0}^{m-1}\tilde{\Gamma}_{k}^{N}.\] So by the discrete Gronwall inequality we have **Lemma 7.16**.: _For all \(m\leq\lfloor TN\rfloor\) we have_ \[\tilde{\Gamma}_{m}^{N}=\sup_{\mathsf{h}\in\mathcal{H}}\mathbb{E}[\Gamma_{m}^{N}(\mathsf{h})]^{2}=\sup_{\mathsf{h}\in\mathcal{H}}\mathbb{E}[\varphi_{m}^{N}(\mathsf{h})-\phi_{m}^{N}(\mathsf{h})]^{2}\leq\frac{C_{T}}{N}\exp\left(\frac{mC_{T}}{N}\right)\leq\frac{C_{T}\exp(TC_{T})}{N}=\frac{C_{T}}{N}.\] Finally, define \(\tilde{\phi}_{t}^{N}(h)=\phi_{\lfloor Nt\rfloor}^{N}(h)\) to be a time-rescaled version of \(\phi_{m}^{N}(h)\), and recall that our desired limiting equation is \[g_{t}(h)=-\alpha\int_{0}^{t}\left[\int_{\mathcal{X}}(g_{s}(\varsigma(\mathsf{H}))-\mathsf{y})\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\,\mu(d\mathsf{H})\right]\,ds,\quad g_{0}(h)=0.\] **Lemma 7.17**.: _As \(N\to+\infty\), we have_ \[\sup_{t\in[0,T]}\sup_{h\in\mathcal{H}}|\tilde{\phi}_{t}^{N}(h)-g_{t}(h)|\leq\frac{C_{T}}{N}.\] Proof.: Let \[\Upsilon_{t}^{N}(h)=\tilde{\phi}_{t}^{N}(h)-g_{t}(h),\quad\tilde{\Upsilon}_{t}^{N}=\sup_{h}|\Upsilon_{t}^{N}(h)|;\] then for \(m\leq\lfloor NT\rfloor\), \[\left|\Upsilon_{(m+1)/N}^{N}(h)\right| =|\tilde{\phi}_{(m+1)/N}^{N}(h)-\tilde{\phi}_{m/N}^{N}(h)-g_{(m+1)/N}(h)+g_{m/N}(h)+\Upsilon_{m/N}^{N}(h)|\] \[\leq|\phi_{m+1}^{N}(h)-\phi_{m}^{N}(h)-g_{(m+1)/N}(h)+g_{m/N}(h)|+|\Upsilon_{m/N}^{N}(h)|\] \[\leq\left|-\frac{\alpha}{N}\int_{\mathcal{X}}(\phi_{m}^{N}(\varsigma(\mathsf{H}))-\mathsf{y})\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\,\mu(d\mathsf{x},d\mathsf{z},d\mathsf{y},d\mathsf{h})\right.\] \[\quad\left.+\alpha\int_{m/N}^{(m+1)/N}\int_{\mathcal{X}}(g_{s}(\varsigma(\mathsf{H}))-\mathsf{y})\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\,\mu(d\mathsf{x},d\mathsf{z},d\mathsf{y},d\mathsf{h})\,ds\right|+|\Upsilon_{m/N}^{N}(h)|\] \[\leq\alpha\bigg{|}\int_{m/N}^{(m+1)/N}\int_{\mathcal{X}}\big(\phi_{m}^{N}(\varsigma(\mathsf{H}))-g_{m/N}(\varsigma(\mathsf{H}))+g_{m/N}(\varsigma(\mathsf{H}))-g_{s}(\varsigma(\mathsf{H}))\big)\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\,\mu(d\mathsf{x},d\mathsf{z},d\mathsf{y},d\mathsf{h})\,ds\bigg{|}\] \[\quad+|\Upsilon_{m/N}^{N}(h)|\] \[\leq\alpha\int_{m/N}^{(m+1)/N}\int_{\mathcal{X}}\left[\left|\Upsilon_{m/N}^{N}(\varsigma(\mathsf{H}))\right|+\left|g_{m/N}(\varsigma(\mathsf{H}))-g_{s}(\varsigma(\mathsf{H}))\right|\right]\left|\mathcal{K}_{\mathsf{x},\lambda}(h,\mathsf{h})\right|\,\mu(d\mathsf{x},d\mathsf{z},d\mathsf{y},d\mathsf{h})\,ds+|\Upsilon_{m/N}^{N}(h)|.\] The control of the operator norm of \(g_{t}\) by (6.18) implies that for all \(h\in\mathcal{H}\), \(|g_{t}(h)|\leq C_{T}\|h\|_{H^{1}(\lambda)}\leq C_{T}\). As a result, for all \(s\in[m/N,(m+1)/N)\) and \(\mathsf{H}\in\mathcal{X}\), \[\left|g_{s}(\varsigma(\mathsf{H}))-g_{m/N}(\varsigma(\mathsf{H}))\right|\leq\int_{m/N}^{s}\left|[\mathcal{A}(g_{\tau})](\varsigma(\mathsf{H}))+b(\varsigma(\mathsf{H}))\right|d\tau\leq C_{T}(s-m/N)\leq\frac{C_{T}}{N}.\] Therefore, \[\left|\Upsilon_{(m+1)/N}^{N}(h)\right|\leq\frac{\alpha}{N}\left[\tilde{\Upsilon}_{m/N}^{N}+\frac{C_{T}}{N}\right]\,+|\Upsilon_{m/N}^{N}(h)|,\] and hence \[\tilde{\Upsilon}_{(m+1)/N}^{N}\leq\left(1+\frac{C_{T}}{N}\right)\tilde{\Upsilon}_{m/N}^{N}+\frac{C_{T}}{N^{2}}.\] By Lemma A.1, for all \(m\leq\lfloor TN\rfloor\) we have \[\sup_{h\in\mathcal{H}}|\tilde{\phi}_{m/N}^{N}(h)-g_{m/N}(h)|=\tilde{\Upsilon}_{m/N}^{N}\leq\left(1+\frac{C_{T}}{N}\right)^{m}\frac{C_{T}}{N}\leq\frac{C_{T}\exp(TC_{T})}{N}=\frac{C_{T}}{N}.\] Finally, for all \(s\in[0,T]\), let \(m=\lfloor Ns\rfloor\); then \[\sup_{h\in\mathcal{H}}|\tilde{\phi}_{s}^{N}(h)-g_{s}(h)| \leq\sup_{h\in\mathcal{H}}\underbrace{|\tilde{\phi}_{s}^{N}(h)-\tilde{\phi}_{m/N}^{N}(h)|}_{=0}+\tilde{\Upsilon}_{m/N}^{N}+\sup_{h\in\mathcal{H}}|g_{m/N}(h)-g_{s}(h)|\] \[\leq\frac{C_{T}}{N}+\frac{C_{T}}{N}=\frac{C_{T}}{N}.\] This completes the proof as the above bound is uniform in \(h\in\mathcal{H}\). ### The final steps Now we are in a position to prove the main convergence result of this paper. Proof.: (Proof of Theorem 4.2) As a summary, we collect * Lemma 7.8: \[\sup_{m\leq NT}\sup_{h\in\mathcal{H}}\mathbb{E}\left|g_{m}^{N}(h)-\varphi_{m}^{N}(h)\right|\leq\frac{C_{T}}{N^{(1-\beta-2\gamma)\wedge\gamma\wedge(\beta-1/2)}}.\] * Lemma 7.16: \[\sup_{m\leq NT}\sup_{h\in\mathcal{H}}\mathbb{E}[\varphi_{m}^{N}(h)-\phi_{m}^{N}(h)]^{2}\leq\frac{C_{T}}{N}\implies\sup_{h\in\mathcal{H}}\mathbb{E}|\varphi_{m}^{N}(h)-\phi_{m}^{N}(h)|\leq\frac{C_{T}}{N^{1/2}}.\] * Lemma 7.17: \[\sup_{t\in[0,T]}\sup_{h\in\mathcal{H}}|\tilde{\phi}_{t}^{N}(h)-g_{t}(h)|\leq\frac{C_{T}}{N}.\] Adding all of the error terms yields \[\sup_{t\in[0,T]}\sup_{h\in\mathcal{H}}\mathbb{E}|g_{t}^{N}(h)-g_{t}(h)| \leq\sup_{t\in[0,T]}\sup_{h\in\mathcal{H}}\left[\mathbb{E}|g_{t}^{N}(h)-\varphi_{t}^{N}(h)|+\mathbb{E}|\varphi_{t}^{N}(h)-\phi_{t}^{N}(h)|\right]+\sup_{t\in[0,T]}\sup_{h\in\mathcal{H}}\left[\mathbb{E}|\phi_{t}^{N}(h)-g_{t}(h)|\right]\] \[\leq\sup_{m\leq NT}\sup_{h\in\mathcal{H}}\left[\mathbb{E}|g_{m}^{N}(h)-\varphi_{m}^{N}(h)|+\mathbb{E}|\varphi_{m}^{N}(h)-\phi_{m}^{N}(h)|\right]+\sup_{t\in[0,T]}\sup_{h\in\mathcal{H}}\left[\mathbb{E}|\phi_{t}^{N}(h)-g_{t}(h)|\right]\] \[\leq\frac{C_{T}}{N^{(1-\beta-2\gamma)\wedge\gamma\wedge(\beta-1/2)}}+\frac{C_{T}}{N^{1/2}}+\frac{C_{T}}{N}\] \[\leq\frac{C_{T}}{N^{\epsilon}},\] where \(\epsilon=(1-\beta-2\gamma)\wedge\gamma\wedge(\beta-1/2)\wedge 1/2>0\).
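As a concrete sanity check on this rate — a remark of ours, not part of the original argument — one can ask how large \(\epsilon\) can be made by tuning \(\beta\) and \(\gamma\). Balancing the first two terms via \(1-\beta-2\gamma=\gamma\) gives \(\gamma=(1-\beta)/3\), so that \[\epsilon(\beta)=\min\left(\frac{1-\beta}{3},\;\beta-\frac{1}{2},\;\frac{1}{2}\right),\] and equating \((1-\beta)/3=\beta-1/2\) yields \(\beta=5/8\) and \(\gamma=1/8\). The best rate guaranteed by the bound above is therefore \(C_{T}N^{-1/8}\), attained at \(\beta=5/8\), \(\gamma=1/8\).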
## Appendix A Recursive Inequality Many proofs of the technical lemmas involve the study of a sequence \((a_{k})_{k\geq 0}\) that satisfies the following recursive inequality: \[a_{k}\leq M_{1}a_{k-1}+M_{2},\] where \(M_{1},M_{2}\geq 0\). By recursion, we can prove that **Lemma A.1**.: _For \(M_{1}<1\), we have_ \[a_{k}\leq M_{1}^{k}a_{0}+\frac{1-M_{1}^{k}}{1-M_{1}}M_{2}\leq M_{1}^{k}a_{0}+\frac{1}{1-M_{1}}M_{2},\] _and for \(M_{1}>1\), we have_ \[a_{k}\leq M_{1}^{k}a_{0}+\frac{M_{1}^{k}-1}{M_{1}-1}M_{2}\leq M_{1}^{k}\left(a_{0}+\frac{M_{2}}{M_{1}-1}\right).\] ## Appendix B Construction of a clipping function We consider the function \[f(x)=\begin{cases}\exp(-1/x)&x>0\\ 0&x\leq 0,\end{cases}\] which is known to be infinitely smooth (i.e. in \(C^{\infty}(\mathbb{R})\)). Therefore, the function \[g(x)=\frac{f(x)}{f(x)+f(1-x)}\] is also infinitely smooth. In particular we have \[g(x)=\begin{cases}=0&x<0\\ \in[0,1]&x\in[0,1]\\ =1&x>1\end{cases}.\] Therefore, for any \(a<b\), we may define the infinitely smooth function \[g_{a,b}(x)=g\left(\frac{x-a}{b-a}\right)=\begin{cases}=0&x<a\\ \in[0,1]&x\in[a,b]\\ =1&x>b,\end{cases}\] and, in turn, the infinitely smooth function \[\rho_{N}(x)=g_{-2N^{\gamma},-N^{\gamma}}(-x)g_{-2N^{\gamma},-N^{\gamma}}(x)=\begin{cases}=1&|x|\leq N^{\gamma}\\ \in[0,1]&N^{\gamma}<|x|\leq 2N^{\gamma}\\ =0&|x|>2N^{\gamma}\end{cases}.\] Finally, the function \[\psi_{N}(x)=\int_{0}^{x}\rho_{N}(y)\,dy\] satisfies all the requirements for being a smooth clipping function in Definition 2.7: (2) follows by direct computation, and (3) is true by definition (since \(\frac{d}{dx}\psi_{N}(x)=\rho_{N}(x)\)). Finally, (1) follows by the fundamental theorem of calculus. By symmetry it suffices to consider the case \(x>0\), for which \[|\psi_{N}(x)|\leq\int_{0}^{2N^{\gamma}}|\rho_{N}(y)|\,dy\leq 2N^{\gamma}.\] ## Acknowledgement The authors would like to acknowledge the use of the University of Oxford Advanced Research Computing (ARC) facility for completing the numerical simulations. Samuel Lam's fellowship is supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1).
2308.02451
Pruning a neural network using Bayesian inference
Neural network pruning is a highly effective technique aimed at reducing the computational and memory demands of large neural networks. In this research paper, we present a novel approach to pruning neural networks utilizing Bayesian inference, which can seamlessly integrate into the training procedure. Our proposed method leverages the posterior probabilities of the neural network prior to and following pruning, enabling the calculation of Bayes factors. The calculated Bayes factors guide the iterative pruning. Through comprehensive evaluations conducted on multiple benchmarks, we demonstrate that our method achieves desired levels of sparsity while maintaining competitive accuracy.
Sunil Mathew, Daniel B. Rowe
2023-08-04T16:34:06Z
http://arxiv.org/abs/2308.02451v1
# Pruning a neural network using Bayesian inference ###### Abstract Neural network pruning is a highly effective technique aimed at reducing the computational and memory demands of large neural networks. In this research paper, we present a novel approach to pruning neural networks utilizing Bayesian inference, which can seamlessly integrate into the training procedure. Our proposed method leverages the posterior probabilities of the neural network prior to and following pruning, enabling the calculation of Bayes factors. The calculated Bayes factors guide the iterative pruning. Through comprehensive evaluations conducted on multiple benchmarks, we demonstrate that our method achieves desired levels of sparsity while maintaining competitive accuracy. _Keywords:_ Bayesian model selection, Bayes Factors, Bayesian pruning schedule ## 1 Introduction In artificial neural networks (ANN) and machine learning (ML), parameters represent what the network has learned from the data. The number of parameters in a neural network can determine its capacity to learn. With advancements in hardware capabilities, we can now define larger models with millions of parameters. The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and its winners over the years demonstrate how the error rate has decreased with an increase in the number of parameters and connections in neural networks. For instance, in 2012, AlexNet (Krizhevsky et al., 2012), a convolutional neural network (CNN), had over 60M parameters. The large language model Generative Pre-trained Transformer 3 (GPT-3) (Brown et al., 2020) comprises 175 billion parameters. Even though deep neural networks with a large number of parameters capture intricate underlying patterns, the large number of connections can introduce computational challenges, overfitting, and a lack of generalizability. To address these issues, various methods have been developed. Neural network pruning is a widely used method for reducing the size of deep learning models, thereby decreasing computational complexity and memory footprint (LeCun et al., 1989; Han et al., 2015; Liu et al., 2018). Pruning is crucial for deploying large models on resource-constrained devices such as personal computers, mobile phones and tablets. Pruning can also be used to reduce the carbon footprint of deep learning models by reducing their computational requirements (Strubell et al., 2019), and to improve the interpretability of deep learning models by removing redundant neurons or connections (Han et al., 2015). Pruning methods can be classified into three main categories: weight pruning, neuron pruning, and filter pruning (Han et al., 2015; Srivastava et al., 2014; Li et al., 2017; He et al., 2018). Weight pruning removes individual weights from the network based on their magnitude or other criteria, while neuron pruning and filter pruning remove entire neurons or filters that are not important. Even though pruning methods can effectively reduce network size and improve performance, they often lack a principled approach for selecting the most important weights or neurons (Blalock et al., 2020). In Bayesian neural networks, the weights of the network are treated as random variables with a prior distribution, which can be updated to a posterior distribution using Bayes' rule. This allows us to quantify the uncertainty associated with each weight and select the most important weights based on their relevance to the task the network is being trained for.
The posterior distribution reflects our updated belief about the weights based on the observed data and can be used to calculate the probability of each weight being important for the task at hand. Variational inference, which involves minimizing the Kullback-Leibler (KL) divergence between the true posterior and an approximate posterior, is a common approach for approximating the posterior distribution for neural network pruning (Dusenberry et al., 2019; Blundell et al., 2015). Other approaches include Monte Carlo methods and Markov chain Monte Carlo (MCMC) sampling (Molchanov et al., 2019). However, these methods are computationally expensive and can be difficult to scale to large networks. In this work, we propose a Bayesian pruning algorithm based on Bayesian hypothesis testing. It provides a principled approach for pruning a neural network to a desired size without sacrificing accuracy. We compare two neural network models at every training iteration: the original unpruned network and the pruned network. This comparison helps us to determine which model fits the data better. The ratio of the posterior probability of the pruned network to the posterior probability of the unpruned network (the Bayes factor) can then be used to determine whether to prune the network further or to skip pruning at the next iteration. This approach enables us to implement the method in regular neural networks without the need for the additional parameterization required by Bayesian neural networks. ### Pruning Neural Networks using Bayesian Inference The pruning system, seen in Figure 1, incorporates pruning into the training process. The training data is divided into batches and processed by the neural network through a forward pass, consisting of matrix multiplications and non-linear activations. The network's output is then compared with the ground truth labels to compute the loss. The weights of the network are adjusted through a backward pass using an optimizer such as Stochastic Gradient Descent (SGD) or Adam (Kingma and Ba, 2015). After each epoch, the weights are pruned using the pruning algorithm, and the pruned weights are used in the subsequent epochs. The pruning algorithm is based on Bayesian hypothesis testing, a statistical framework that can be used to compare two models, two network configurations in this case, to determine which one fits the data better. To test the hypothesis that the pruned network fits the data better than the unpruned network, we define the null hypothesis as the unpruned network fitting the data better (\(\theta=\psi\)) and the alternative hypothesis as the pruned network fitting the data better (\(\theta=\phi\)). The Bayes factor, which is the ratio of the posterior probability of the alternative hypothesis to the posterior probability of the null hypothesis, is computed as follows: \[\text{Bayes factor}=\frac{P(\theta=\phi|D)}{P(\theta=\psi|D)}\] Here, \(D\) represents the training data. The posterior probability of the null hypothesis (\(P(\theta=\psi|D)\)) is computed as: \[P(\theta=\psi|D)=\frac{P(D|\theta=\psi)P(\theta=\psi)}{P(D)}\] Figure 1: Pruning system block diagram.
Similarly, the posterior probability of the alternative hypothesis (\(P(\theta=\phi|D)\)) is computed as: \[P(\theta=\phi|D)=\frac{P(D|\theta=\phi)P(\theta=\phi)}{P(D)}\] The Bayes factor is then calculated as the ratio of the posterior probabilities: \[\text{Bayes factor}=\frac{P(D|\theta=\phi)P(\theta=\phi)}{P(D|\theta=\psi)P(\theta=\psi)}\] A Bayes factor greater than 1 indicates that the pruned network fits the data better, while a value less than 1 indicates that the unpruned network fits the data better. For a classification problem, the log likelihood of the data is the negative categorical cross-entropy loss: \[\log p(y_{\text{true}}|y_{\text{pred}})=\sum_{c}y_{\text{true},c}\log\big(\text{softmax}(y_{\text{pred}})_{c}\big)\] Here, \(y_{\text{pred}}\) represents the neural network's predictions for the classes, and \(y_{\text{true}}\) is the one-hot encoded ground truth. A Gaussian prior with mean \(\mu\) and variance \(\sigma^{2}\) is used for the weights: \[p(w)=\mathcal{N}(\mu,\sigma^{2})\] The log prior and log likelihood for the weight parameters are used to compute the log posterior distribution of the weights: \[\log p(w|D)=\log p(D|w)+\log p(w)-\log p(D)\] Note that \(\log p(D)\) is common to both models and therefore cancels in the Bayes factor. The log posterior is calculated before and after weight pruning to compute the Bayes factor. If the Bayes factor exceeds a predefined threshold, a certain percentage (\(r\)) of the weights is pruned as \[w_{\text{new}}=w_{\text{old}}\odot m \tag{1}\] where \(\odot\) represents element-wise multiplication, \(w_{\text{old}}\) is the old weight matrix, and \(m\) is the binary mask indicating which weights should be pruned (i.e., have a value of 0) and which weights should be kept (i.e., have a value of 1). The resulting matrix \(w_{\text{new}}\) has the same dimensions as \(w_{\text{old}}\), but with some of its weights pruned. Algorithm 1 outlines the Bayesian pruning process. ```
Input: Trained neural network \(f(\cdot,w)\), pruning rate \(r\), dataset \(\mathcal{D}=\left(\mathbf{x}_{i},y_{i}\right)_{i=1}^{n}\), Bayes factor threshold \(\beta\)
Output: Pruned neural network \(f_{r}(\cdot,w)\)
1: Compute the posterior probability of the weights before pruning
2: if \(BF_{01}>\beta\) then
3:   Prune \(r\) percentage of the weights of \(f(\cdot,w)\)
4: endif
5: Compute the posterior probability of the weights after pruning
6: Compute the Bayes factor using the posterior probabilities before and after pruning
7: Return \(f_{r}(\cdot,w)\)
``` **Algorithm 1** Bayesian Pruning Algorithm In the following sections, we introduce two pruning algorithms that utilize this framework: random pruning, which randomly selects weights for pruning, and magnitude pruning, which prunes weights based on their magnitude. #### Bayesian Random pruning Random pruning is a simple pruning algorithm that randomly selects weights to prune. Here we set the pruning rate to be the desired level of sparsity that we are looking to achieve. After an epoch, we count the number of non-zero parameters in the network and randomly zero out just enough parameters to achieve the desired level of sparsity.
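To make this concrete, the following is a minimal NumPy sketch of the epoch-level gate-then-prune step, instantiated for random pruning. It is an illustration under stated assumptions rather than the authors' implementation: the helper names (`log_posterior`, `random_prune`), the scalar prior hyperparameters, and the placeholder log likelihoods `log_lik_before`/`log_lik_after` (which would come from the surrounding training loop) are ours.

```python
import numpy as np

def log_posterior(log_likelihood, w, mu=0.0, sigma=1.0):
    # Unnormalised log posterior: log p(D|w) + log p(w); log p(D) is
    # omitted because it cancels when the Bayes factor is formed.
    log_prior = -0.5 * np.sum((w - mu) ** 2) / sigma ** 2
    return log_likelihood + log_prior

def random_prune(w, sparsity, rng):
    # Randomly zero out just enough non-zero entries so that the overall
    # fraction of zeros in w reaches the desired sparsity level (Eq. (1)).
    w = w.copy()
    k = int(round(w.size * sparsity))      # total zeros desired
    nonzero_idx = np.flatnonzero(w)
    already_zero = w.size - nonzero_idx.size
    extra = max(k - already_zero, 0)
    chosen = rng.choice(nonzero_idx, size=extra, replace=False)
    w.flat[chosen] = 0.0
    return w

# One epoch of the gate-then-prune loop (illustrative driver code):
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
bayes_factor = np.inf                      # prune unconditionally at the first epoch
log_lik_before = -1234.5                   # placeholder value
log_lik_after = -1240.2                    # placeholder value
log_post_before = log_posterior(log_lik_before, w)
if bayes_factor > 1.0:                     # threshold beta = 1 here
    w = random_prune(w, sparsity=0.75, rng=rng)
log_post_after = log_posterior(log_lik_after, w)
bayes_factor = np.exp(log_post_after - log_post_before)  # gates the next epoch
```

For the magnitude variant, `random_prune` would instead zero the `extra` entries of smallest absolute value (e.g. selected via `np.argsort(np.abs(w), axis=None)`), leaving the Bayes-factor gate unchanged.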
The algorithm is summarized in Algorithm 2. ```
1: \(f(\cdot,w)\): Neural network model with parameters \(w\)
2: \(r\): Desired sparsity level, \(\beta\): Bayes factor threshold
3: Calculate log posterior probability \(p(w|\mathcal{D})\)
4: if \(BF_{01}>\beta\) then
5:   for all weights \(w_{i}\in w\) do
6:     \(n\leftarrow\text{size}(w_{i})\)
7:     number of weights to prune, \(k\leftarrow(n\times r)\)
8:     \(I\leftarrow\) indices of non-zero weights
9:     \(n_{z}\leftarrow\) number of zero weights
10:    \(k^{\prime}\gets k-n_{z}\)
11:    \(J\leftarrow\text{random\_sample}(I,k^{\prime})\)
12:    set elements in \(w_{i}\) at indices \(J\) to zero
13:   endfor
14: endif
15: Calculate log posterior probability \(p(w|\mathcal{D})\) after pruning
16: Calculate Bayes factor \(BF_{01}\)
``` **Algorithm 2** Bayesian Random Pruning #### Bayesian Magnitude pruning Magnitude pruning is a pruning algorithm that selects weights to prune based on their magnitude. This can be seen as pruning the weights that are least important. Here we set the pruning rate to be the desired level of sparsity that we are looking to achieve. The smallest-magnitude weights, up to the desired level of sparsity, are pruned to obtain the pruned network. The algorithm is summarized in Algorithm 3. ```
1: \(f(\cdot,w)\): Neural network model with parameters \(w\)
2: \(r\): Desired sparsity level, \(\beta\): Bayes factor threshold
3: Calculate log posterior probability \(p(w|\mathcal{D})\)
4: if \(BF_{01}>\beta\) then
5:   for all weights \(w_{i}\in w\) do
6:     \(n\leftarrow\text{size}(w_{i})\)
7:     number of weights to prune, \(k\leftarrow(n\times r)\)
8:     sort \(w_{i}\) by magnitude
9:     set the \(k\) smallest-magnitude elements in \(w_{i}\) to zero
10:   endfor
11: endif
12: Calculate log posterior probability \(p(w|\mathcal{D})\) after pruning
13: Calculate Bayes factor \(BF_{01}\)
``` **Algorithm 3** Bayesian Magnitude Pruning ### Experimental Setup To evaluate the performance of Bayesian Random Pruning and Bayesian Magnitude Pruning, we conduct experiments on three datasets and two neural network architectures at five different levels of desired sparsity. The datasets used are MNIST (Lecun et al., 1998), MNIST Fashion (Xiao et al., 2017) and CIFAR-10 (Krizhevsky, 2009). The neural network architectures are a Fully Connected Network (FCN) and a Convolutional Neural Network (CNN). The five levels of sparsity are 25%, 50%, 75%, 90% and 99%. We use a learning rate of 0.001 and a batch size of 64 for all experiments. Data preprocessing consists only of normalizing the dataset and does not include any data augmentation such as random cropping or flipping of images, so as to have fewer confounding variables in the studies we conduct to observe the effects of our pruning algorithm. We train the network for 25 epochs on the training set and evaluate its performance on the test set. We evaluate the performance of each method in terms of the accuracy of the predictions it makes for the target classes on the test set. Each experiment is repeated 5 times, and the mean and standard deviation of the accuracy are reported. The following sections describe the neural network architectures used in our experiments. ### Neural Network Architectures The two neural network architectures used in our experiments are the Fully Connected Network (FCN) and the Convolutional Neural Network (CNN). The same architectures are used for all three datasets. The FCN consists of two hidden layers. The output of the last fully connected layer is fed into a softmax layer to get the class probabilities.
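As an illustration, a minimal PyTorch sketch of such an FCN is given below. The hidden-layer widths and the ReLU activations are assumptions on our part — the paper specifies only that there are two hidden layers — and the input size shown is for the 28x28 MNIST images.

```python
import torch.nn as nn

class FCN(nn.Module):
    # Two hidden layers, as described above; the width 256 is illustrative.
    def __init__(self, in_features=28 * 28, hidden=256, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        # Softmax is left to the loss (e.g. nn.CrossEntropyLoss) during
        # training; apply torch.softmax to the output to get probabilities.
        return self.net(x)
```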
The CNN consists of two convolutional layers with 32 and 64 filters, respectively, followed by two fully connected layers. Each convolutional layer is followed by a max pooling layer with a kernel size of 2 and a stride of 2. The output of the second max pooling layer is flattened and fed to the fully connected layers. The output of the fully connected layer is fed into a softmax layer to get the class probabilities. The network architecture of the fully connected network (FCN) is seen in Figure 2. Figure 2: Fully connected neural network architecture The network architecture of the convolutional neural network (CNN) is seen in Figure 3. Figure 3: Convolutional neural network architecture ### Results The following sections present the results of the experiments, in the following order: (1) MNIST dataset, (2) MNIST-Fashion dataset, and (3) CIFAR-10 dataset. The results are presented in the form of learning curves and a table with accuracy at different levels of sparsity for the FCN and CNN models. The accuracy is the percentage of correctly classified images in the test set. The sparsity is the percentage of weights that are pruned in the network. The results are compared to the baseline, which is the model trained without pruning, and to the non-Bayesian version of each pruning method. ### MNIST Figure 4 shows the learning curves for random pruning and magnitude pruning under a Bayesian framework, compared to the baseline, in a fully connected network (FCN) trained on the MNIST dataset. Here the desired level of sparsity is 75%. The figure has two subplots: one shows the training and validation loss as a function of the number of epochs; the other (right) shows the Bayes factor and sparsity as a function of the number of epochs. The training loss is the average loss over the training set, and the validation loss is the average loss over the validation set. The figure shows that the training loss decreases as the number of epochs increases, and the validation loss starts to decrease in about 5 epochs. The training loss decreases faster than the validation loss, which indicates that the model is overfitting the training data. As pruning begins, it affects the training and validation loss of both random and magnitude pruning, as seen in the curves. There are large oscillations in the loss values for random pruning, as seen in the figure. The Bayes factor begins to decrease as the number of epochs increases and the sparsity of the network stabilizes for magnitude pruning, but it keeps fluctuating for random pruning and shows an increasing trend, suggesting that Bayesian random pruning fits the data better than the other methods. Figure 4: MNIST (FCN 75%) learning curves for the Bayesian pruning method. Figure 5 shows the validation loss of random pruning for different sparsity levels. Performance is best at 25% sparsity and begins to degrade as the sparsity level increases. Up to 90% sparsity the validation loss maintains a downward trend and combats overfitting compared to the baseline; the network only starts to become worse at 99% sparsity. Figure 5: Validation loss of random pruning for different sparsity levels. Figure 6 shows the validation loss of magnitude pruning for different sparsity levels. For 25% sparsity, performance remains similar to the baseline. Then, as the sparsity level increases, performance starts to improve, but the network still overfits the data until 99% of the parameters are pruned. Figure 6: Validation loss of magnitude pruning for different sparsity levels. Figure 7 shows the learning curves for random pruning and magnitude pruning under a Bayesian framework, compared to the baseline, in a convolutional neural network (CNN) trained on the MNIST dataset. The number of parameters in the CNN is comparatively larger than that of the FCN. This causes the effects of overfitting to appear a little later in the training period, and there is less overfitting compared to the FCN at 75% sparsity. The Bayes factor for random pruning is higher than that of magnitude pruning, which suggests that Bayesian random pruning fits the data better. Figure 7: MNIST (CNN 75%) learning curves for the Bayesian pruning method. Figure 8 shows the validation loss of random pruning for different sparsity levels. As the number of parameters of the CNN is larger than that of the FCN, performance remains similar to the baseline until 90% sparsity; beyond that, performance begins to degrade. Figure 8: Validation loss of random pruning for different sparsity levels. Figure 9 shows the validation loss of magnitude pruning for different sparsity levels. Even pruning 99% of the parameters does not affect the performance of the CNN. This is because the CNN has an enormous number of parameters, and the network overfits the data even after pruning 99% of the parameters. Figure 9: Validation loss of magnitude pruning for different sparsity levels. ### MNIST Fashion Figure 10 shows the learning curves for random pruning and magnitude pruning under a Bayesian framework, compared to the baseline, in a fully connected network (FCN) trained on the MNIST Fashion dataset. Here the desired level of sparsity is 90%. The figure has two subplots: one shows the training and validation loss as a function of the number of epochs; the other (right) shows the Bayes factor and sparsity as a function of the number of epochs. The training loss is the average loss over the training set, and the validation loss is the average loss over the validation set. The figure shows that the training loss decreases as the number of epochs increases, and the validation loss starts to decrease in about 5 epochs. The training loss decreases faster than the validation loss, which indicates that the model is overfitting the training data. As pruning begins, it affects the training and validation loss of both random and magnitude pruning, as seen in the curves. There are large oscillations in the loss values for random pruning. The Bayes factor begins to decrease as the number of epochs increases and the sparsity of the network stabilizes for magnitude pruning, but it keeps fluctuating for random pruning. The Bayesian random pruning model fits the data better than the magnitude pruning model. Figure 10: MNIST-Fashion (FCN 90%) learning curves for the Bayesian pruning method. Figure 11 shows the validation loss of random pruning for different sparsity levels. Similar to the MNIST dataset, the validation loss is the lowest for 25% sparsity; as the sparsity level increases, performance begins to degrade. Figure 11: Validation loss of random pruning for different sparsity levels. Figure 12 shows the validation loss of magnitude pruning for different sparsity levels. Higher levels of sparsity improve the performance of the FCN: the effects of overfitting are reduced as the number of parameters is reduced. Figure 12: Validation loss of magnitude pruning for different sparsity levels. Figure 13 shows the learning curves for random pruning and magnitude pruning under a Bayesian framework, compared to the baseline, in a convolutional neural network (CNN) trained on the MNIST Fashion dataset. Here the desired level of sparsity is 90%. The figure has two subplots: one shows the training and validation loss as a function of the number of epochs; the other (right) shows the Bayes factor and sparsity as a function of the number of epochs. The number of parameters in the CNN is comparatively larger than that of the FCN. This causes the effects of overfitting to appear a little later in the training period. The trends in the learning curves are similar to those of the FCN. The validation performance for random pruning dips at the beginning of training and starts to improve as training progresses. The Bayes factor begins to decrease as the number of epochs increases and the sparsity of the network stabilizes for magnitude pruning, but it keeps fluctuating for random pruning and shows an increasing trend. The Bayesian random pruning model fits the data better than the magnitude pruning model. Figure 13: MNIST-Fashion (CNN 90%) learning curves for the Bayesian pruning method. Figure 14 shows the validation loss of random pruning for different sparsity levels. The trends are similar to the MNIST dataset: performance is best for 25% sparsity and degrades as the sparsity level increases. Sparsity levels up to 90% help in reducing the effects of overfitting. Figure 14: Validation loss of random pruning for different sparsity levels. Figure 15 shows the validation loss of magnitude pruning for different sparsity levels. Similar to the MNIST dataset, magnitude pruning helps in reducing the effects of overfitting, and the validation loss continues to improve as 99% sparsity is achieved. Figure 15: Validation loss of magnitude pruning for different sparsity levels. ### CIFAR-10 Figure 16 shows the learning curves for random pruning and magnitude pruning under a Bayesian framework, compared to the baseline, in a fully connected network (FCN) trained on the CIFAR-10 dataset. Here the desired level of sparsity is set to 90%. The figure has two subplots: one shows the training and validation loss as a function of the number of epochs; the other (right) shows the Bayes factor and sparsity as a function of the number of epochs. Unlike the MNIST and Fashion datasets, the input images of the CIFAR-10 dataset are of size 32x32x3. This causes the number of parameters in the FCN to be much larger than for the MNIST and Fashion datasets, so the effects of overfitting appear a little later in the training period. The trends in the learning curves are similar to those of the MNIST and Fashion datasets. The validation performance for random pruning dips at the beginning of training and starts to improve as training progresses. The Bayes factor begins to decrease as the number of epochs increases, and the sparsity of the network stabilizes for both magnitude pruning and random pruning. Figure 16: CIFAR-10 (FCN 90%) learning curves for the Bayesian pruning method. Figure 17 shows the validation loss of random pruning for different sparsity levels. Due to the larger network size, the effects of overfitting are higher. The trends for random pruning remain similar to those of the MNIST and Fashion datasets: performance is best for 25% sparsity and degrades as the sparsity level increases. Figure 17: Validation loss of random pruning for different sparsity levels. Figure 18 shows the validation loss of magnitude pruning for different sparsity levels. The trends remain the same as for the MNIST and Fashion datasets. Both Bayesian random and Bayesian magnitude pruning help in reducing the effects of overfitting, and the validation loss continues to improve as 99% sparsity is achieved. The Bayesian random pruning model fits the data better than the magnitude pruning model. Figure 18: Validation loss of magnitude pruning for different sparsity levels. Figure 19 shows the learning curves for random pruning and magnitude pruning under a Bayesian framework, compared to the baseline, in a convolutional neural network (CNN) trained on the CIFAR-10 dataset. Here the desired level of sparsity is set to 90%. The figure has two subplots: one shows the training and validation loss as a function of the number of epochs; the other (right) shows the Bayes factor and sparsity as a function of the number of epochs. The learning trends are similar to those of the FCN. The validation performance for random pruning dips at the beginning of training and starts to improve as training progresses. The Bayes factor begins to increase for magnitude pruning, and the sparsity fluctuates as training progresses. For random pruning, the Bayes factor begins to decrease as the number of epochs increases and the sparsity of the network stabilizes. Figure 19: CIFAR-10 (CNN 90%) learning curves for the Bayesian pruning method. Figure 20 shows the validation loss of random pruning for different sparsity levels. The trends of random pruning are similar to those of the MNIST and Fashion datasets. The effects of overfitting are reduced by pruning, but performance degrades as the sparsity level increases to 99%. Figure 20: Validation loss of random pruning for different sparsity levels. Figure 21 shows the validation loss of magnitude pruning for different sparsity levels. The trends are similar to those of the MNIST and Fashion datasets. Magnitude pruning helps in reducing the effects of overfitting, and the validation loss continues to improve as 99% sparsity is achieved. Figure 21: Validation loss of magnitude pruning for different sparsity levels.
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Dataset & Model & Unpruned & Sparsity & Random & Bayes Random & Magnitude & Bayes Magnitude \\ \hline & & & 25.0\% & 0.9684 & **0.9747** & **0.9801** & 0.9759 \\ & FCN & 0.9782 & 50.0\% & 0.9684 & **0.9710** & 0.9791 & 0.9791 \\ & & & 75.0\% & 0.9578 & **0.9706** & 0.9779 & **0.9812** \\ & & & 90.0\% & 0.9624 & **0.9657** & 0.9768 & **0.9772** \\ MNIST & & & 99.0\% & 0.9433 & **0.9439** & 0.9743 & **0.9767** \\ \cline{2-8} & & & 25.0\% & **0.9908** & 0.9835 & 0.9910 & **0.992** \\ & CNN & 0.9918 & 50.0\% & 0.9858 & **0.9906** & 0.9900 & **0.9901** \\ & & & 75.0\% & 0.9872 & **0.9905** & **0.9905** & 0.9892 \\ & & & 90.0\% & **0.9806** & 0.9791 & 0.9880 & **0.9888** \\ & & & 99.0\% & 0.1135 & 0.1135 & 0.9826 & 0.9804 \\ \hline & & & 25.0\% & 0.8699 & **0.8739** & 0.8744 & **0.8778** \\ & FCN & 0.8733 & 50.0\% & **0.8659** & 0.8566 & 0.8725 & **0.8753** \\ & & & 75.0\% & 0.8535 & **0.8558** & **0.8800** & 0.8799 \\ & & & 90.0\% & 0.8416 & **0.8443** & 0.8750 & **0.8675** \\ Fashion & & & 99.0\% & 0.8076 & **0.8212** & 0.8573 & 0.8573 \\ \cline{2-8} & & & 25.0\% & 0.8905 & **0.9030** & 0.8959 & **0.9002** \\ & CNN & 0.9028 & 50.0\% & 0.8957 & **0.9021** & 0.8906 & **0.8982** \\ & & & 75.0\% & 0.8838 & **0.8773** & 0.8894 & **0.8974** \\ & & & 90.0\% & 0.8520 & **0.8589** & 0.8986 & **0.9022** \\ & & & 99.0\% & **0.7851** & 0.7083 & 0.8595 & **0.8768** \\ \hline & & & 25.0\% & **0.5233** & 0.5227 & 0.4857 & **0.4908** \\ & FCN & 0.4869 & 50.0\% & **0.5136** & 0.5111 & 0.4981 & **0.5010** \\ & & & 75.0\% & 0.4950 & **0.4972** & **0.5109** & 0.5086 \\ CIFAR-10 & & & 90.0\% & **0.4643** & 0.4589 & **0.5314** & 0.5198 \\ & & & 99.0\% & 0.4158 & **0.4381** & **0.4973** & 0.4932 \\ \cline{2-8} & & & 25.0\% & 0.6558 & **0.6574** & 0.6522 & **0.6557** \\ & CNN & 0.6606 & 50.0\% & 0.6732 & **0.6764** & 0.6391 & **0.6570** \\ & & & 75.0\% & 0.6205 & **0.6526** & 0.6409 & **0.6528** \\ & & & 90.0\% & **0.5169** & 0.5092 & **0.6467** & 0.6437 \\ & & & 99.0\% & 0.1000 & 0.1000 & 0.5172 & **0.5537** \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy values at different sparsity levels The accuracy values at different sparsity levels for the pruned networks are presented in Table 1. The networks were trained for 25 epochs, and each experiment was repeated 5 times with different random seeds for averaging the results. The table demonstrates that the Bayesian pruning method achieves high sparsity levels without sacrificing accuracy: it outperforms the unpruned networks in many settings and shows comparable or better accuracy than the traditional neural network pruning techniques. ### Discussion Neural networks with a large number of parameters can learn complex functions but are prone to overfitting and are unsuitable for compute-constrained devices. Neural network pruning addresses both of these challenges by reducing the network size. The iterative pruning method that we have introduced allows for pruning to a desired level of sparsity without losing any accuracy compared to the baseline. It allows the network to learn a function with fewer connections in a principled manner, as it checks whether the pruned network configuration is a good fit for the data. The extensive experiments conducted on three different datasets and two network types show that it is an effective method for training neural networks without additional parameterization.
2309.02022
Dynamic Early Exiting Predictive Coding Neural Networks
Internet of Things (IoT) sensors are nowadays heavily utilized in various real-world applications, ranging from wearables to smart buildings, agrotechnology, and health monitoring. With the huge amounts of data generated by these tiny devices, Deep Learning (DL) models have been extensively used to enhance them with intelligent processing. However, with the urge for smaller and more accurate devices, DL models have become too heavy to deploy. It is thus necessary to incorporate the hardware's limited resources in the design process. Therefore, inspired by the human brain, known for its efficiency and low power consumption, we propose a shallow bidirectional network based on predictive coding theory and dynamic early exiting for halting further computations when a performance threshold is surpassed. We achieve comparable accuracy to VGG-16 in image classification on CIFAR-10 with fewer parameters and less computational complexity.
Alaa Zniber, Ouassim Karrakchou, Mounir Ghogho
2023-09-05T08:00:01Z
http://arxiv.org/abs/2309.02022v1
# Dynamic Early Exiting Predictive Coding Neural Networks

###### Abstract

Internet of Things (IoT) sensors are nowadays heavily utilized in various real-world applications, ranging from wearables to smart buildings, agrotechnology, and health monitoring. With the huge amounts of data generated by these tiny devices, Deep Learning (DL) models have been extensively used to enhance them with intelligent processing. However, with the urge for smaller and more accurate devices, DL models have become too heavy to deploy. It is thus necessary to incorporate the hardware's limited resources in the design process. Therefore, inspired by the human brain, known for its efficiency and low power consumption, we propose a shallow bidirectional network based on predictive coding theory and dynamic early exiting for halting further computations when a performance threshold is surpassed. We achieve comparable accuracy to VGG-16 in image classification on CIFAR-10 with fewer parameters and less computational complexity.

Dynamic Neural Networks, Predictive Coding, Early Exiting, Edge Devices

## I Introduction

The Internet of Things (IoT) is nowadays a paradigm of reference for several applications. IoT is adopted in various domains such as smart farming, drone imaging, industry 4.0, and safety, and is implemented in many challenging environments like satellites, submarines, and the Large Hadron Collider (LHC). Hence, huge amounts of data are amassed by sensors and can be overwhelming to process. For example, the LHC generates approximately one petabyte of collision data per second [1]. Therefore, IoT has sealed an alliance with Artificial Intelligence (AI) to improve data management pipelines and offer better analytics to clients. In the context of the Artificial Intelligence of Things (AIoT), Deep Learning (DL) models, mainly fostered in cloud servers, have proven their efficacy in injecting intelligence into the network of connected IoT devices [2]. Raw data captured by sensors can thus be used to produce pertinent insights for the end user's benefit.

With the rapid development of edge computing, AI services have migrated from cloud servers to edge devices (e.g., IoT gateways and fog nodes). For sensitive applications, such as e-health and smart surveillance, deploying AI services at the edge has numerous benefits [3]. Data no longer needs to be transferred through the Internet, increasing data privacy and reducing security breaches. Latency is also reduced to account only for the deployed models' computational complexity. However, edge devices span a wide range of computational resource constraints, from microcontrollers through single-board computers to cloudlets [4]. This heterogeneity in edge device requirements can be troublesome for deploying state-of-the-art DL models, as they usually rely on very deep networks with large numbers of parameters, resulting in an inevitable increase in memory footprint and inference time.

To address this issue, an extensive body of work proposes various model compression techniques for lighter memory footprints. For instance, quantization aims at coding weights in low-precision arithmetic, and knowledge distillation allows the design of a small model (i.e., student) trained on quality features from a bigger network (i.e., teacher) [5]. Another avenue of research takes inspiration from the brain, known for its efficiency and plasticity.
The human brain is able to conduct a wide range of tasks dexterously and process huge amounts of data with around 20 Watts of power [6]. The brain's neuronal morphology could explain this frugality in energy consumption. The brain relies on inhibitory mechanisms that are necessary for quick decision-making in a survivalist situation [7], as well as bidirectional connections between higher and lower visual areas that enhance the brain's capacity to abstract the surrounding environment [8]. To explain the interactions between both visual areas, predictive coding (PC) theory [9] postulates that the main function of the brain is to minimize a prediction error, defined as the difference between the real and predicted stimuli. Under PC, feed-forward connections drive prediction errors to higher layers, whereas feedback connections attempt to predict lower layers' neural activity. This bidirectional movement continuously refines the brain's internal representations of the input throughout the visual hierarchy. When implemented in convolutional neural networks (CNNs), PC often yields higher accuracy than its conventional counterpart [10].

In light of these ideas, our goal is to benefit from PC in designing shallow CNNs. We aim to show empirically that the PC refinement process allows the same expressivity and feature diversity that width and depth provide in CNNs [11]. Furthermore, the refinement of feature representations demands a certain amount of cyclic processing between lower and higher visual areas until equilibrium is reached [12]. To alleviate this issue, early exiting techniques [13] are used to abort further cycling over the feature extractor network once a performance threshold is reached. Our contributions can be formulated as follows:

* We apply PC techniques to CNNs to design shallow networks with a considerably reduced memory footprint that are deployable on edge devices
* We improve PC cyclic processing with an early-exiting mechanism, which further reduces the computational cost and inference time
* We evaluate our proposed model against VGG-16 and achieve comparable results for image classification on CIFAR-10, with only a 3% difference and less than 1% of the baseline's number of parameters

## II Related Work

This section reviews the main research fields that intersect with our work. It covers some solutions for deploying DL models on low-resource edge devices, previous work on PC with DL models, and recent applications of early exiting techniques.

### _DL for Resource-constrained Devices_

When computations are hosted on a single edge device, compression is a potential direction for obtaining smaller models. Compression encompasses various techniques that reduce the model size without a severe drop in performance. The two most prominent techniques are quantization and pruning. On the one hand, quantization is the process of representing the model's weights, activations, or gradients with a lower number of bits, either during or after training [14]. Rather than optimizing for a single target bit-width, mixed quantization allows the model to be coded on longer bits for early feature extraction layers and shorter bits for intermediate or concatenation layers [15]. A recent line of work investigates binary neural networks, whose parameters can be coded on a single bit [16]. On the other hand, pruning entirely removes weights from a network according to a predefined metric of importance. Numerous such metrics exist, such as weight magnitude or impact on the loss [17].
These pruning criteria can be applied to a pre-trained network, during training with the possibility of growing back connections, or even before training, at initialization [18]. Instead of working on the model itself on one edge processor, another avenue for deployment on resource-constrained hardware considers the scenario where a heterogeneous network is formed via interconnected devices of different capabilities. Distributed inference [19] is a solution that leverages the power of internet-connected networks. Rather than deploying a heavy model on one device, the model is cut into small segments executable on the variety of available devices within the network. However, this solution encounters privacy and latency challenges, which Federated Learning addresses by avoiding data transfer between the network's nodes and encouraging collaborative learning between the devices [20]. Our work complements the aforementioned techniques, since our proposed models can further reduce their memory footprint and number of operations through quantization and pruning, or be deployed in a distributed network.

### _Deep Predictive Coding Networks_

PredNet [21] was the first attempt to implement predictive coding in deep neural networks, with the unsupervised task of next-frame video prediction. It uses multiple layers of recurrent convolutional Long Short-Term Memory cells that generate a prediction (the next frame). This prediction is used to compute an error representation that is passed to the subsequent layers. However, the proposed architecture did not benefit from the full power of predictive coding, as it did not force the minimization of the bottom-up error [22]. In a supervised learning scheme, Wen et al. enhanced a conventional CNN with top-down deconvolutional layers using PC updates [10]. It was shown that a deep PC network yields higher performance than its homologous plain feed-forward network. Although the network was trained for one particular number of cycles (\(T=6\)), they reported a positive correlation between accuracy and cycling numbers. Finally, PC-based networks were also shown to improve the robustness of deep models against several types of noise and adversarial attacks [23]. In this study, pre-trained models were enhanced with feedback connections, and only the latter were trained.

Our paper is deeply inspired by the advancements in PC-based neural networks. So far, PC has been utilized to improve model accuracy or robustness by adding feedback connections to deep feed-forward networks, which causes the number of parameters to double, and thus the latency to increase. Since our concern is low-resource edge devices, we envision PC as a technique that might help build shallow (i.e., few layers and parameters) and expressive (i.e., unrolling the model through the PC refinement process) networks.

### _Early Exiting Networks_

The most straightforward implementation of dynamic neural networks is based on Early Exiting [13]. It involves mounting tiny decision blocks onto a backbone model to make quick decisions for easy inputs without resorting to the entire network. In image classification, the decision block is a tiny multi-layer perceptron classifier. Therefore, in an early-exit network, a response is returned if the classifier is sufficiently confident based on a performance target. Otherwise, the example is passed on to the subsequent layers for finer processing [24].
In natural language processing, early exiting is used during the decoding of transformer-based large language models through dynamic per-token exit decision-making [25]. The early-exiting halting mechanism introduces the concepts of "easy" and "hard" samples: easy samples should exit at earlier classifiers, and harder samples may traverse the full network. While this is intuitive in image classification, other tasks like object detection need more precise definitions of sample hardness. Therefore, in [26], a sample for object detection is labeled "easy" when the difference in loss (or mean average precision) between a shallow detector and a deep detector is small. Early exiting networks are conventionally trained either via _scalarization_ (i.e., the weighted sum of the internal classifiers' losses) or _separate training_ (i.e., each classifier trained separately). However, both methods have limitations: scalarization might cause instabilities due to the accumulation of gradients when the backbone is deep [27], while separate training hinders the collaboration between classifiers, leading to computational waste [28]. In the proposed models, early exiting is implemented to halt PC cyclic processing once the user-defined performance target is reached.

## III Proposed Architecture

As shown in Figure 1, our architecture is formed of a shared backbone alongside downstream task classifiers. The backbone serves as a feature extractor. It is built as a bidirectional hierarchy of convolutional and deconvolutional layers and is executed for a maximum of \(T\) cycles before outputting the final feature vector for classification. Blue arrows represent the convolutional forward pass, while red arrows represent the feedback deconvolutions. Each intermediate representation consists of a concatenation of a convolutional and a deconvolutional feature map, referred to as \(C_{l}\) and \(D_{l}\), respectively. We perform a variable number of cycles \(t<T\) over the backbone. Once cycling is finished, we feed the last obtained feature vector \(C_{l}(t)\) to the classifier corresponding to the achieved number of cycles \(t\) (green arrow in Figure 1). The first cycle consists of three consecutive passes: forward, feedback, and forward. Every other cycle starts with a feedback pass followed by a forward pass.

### _PC Feature Update Rule_

During each cycle, PC adopts a specific feature update rule that merges forward and feedback feature maps to enrich the internal representation. We adopt the same formulation proposed in [10]. Suppose the PC architecture consists of \(L\) layers, with layer \(l=0\) being the input image. We denote the forward and feedback convolutions by \(\mathbf{FF}\) and \(\mathbf{FB}\). The convolutional representations \(C_{l}\) are updated on both forward and feedback passes, while the deconvolutional feature maps \(D_{l}\) only change during the feedback pass. Hence, let \(C_{l,f}(t)\) and \(C_{l,b}(t)\) be the state of layer \(l\)'s convolutional feature map at the end of the forward and feedback passes of PC cycle \(t\in[|1,T|]\). At the beginning of cycle 1, we initialize all feature maps with a conventional forward pass: \(\forall l\in[|1,L|],C_{l,f}(0)=\mathbf{FF}\left[C_{l-1,f}(0)\right]\).
We can thus define the PC update rules for every cycle \(t\in[|1,T|]\) and for every layer \(l\in[|1,L|]\) as follows:

**Feedback pass update:**

\[\begin{split}& D_{l-1}(t)=\mathbf{FB}(C_{l,f}(t-1))\\ & C_{l-1,b}(t)=g((1-b_{l-1})\cdot C_{l-1,f}(t-1)+b_{l-1}\cdot D_{l-1}(t))\end{split} \tag{1}\]

**Forward pass update:**

\[C_{l,f}(t)=g(C_{l,b}(t)+a_{l}\cdot\mathbf{FF}(C_{l-1,b}(t)-D_{l-1}(t))) \tag{2}\]

where \(a_{l}\) and \(b_{l}\) are trainable non-negative layer-dependent updating rates, initialized for all layers to 1.0 and 0.5, respectively. However, \(b_{0}=0\) is non-trainable, as the input image \(C_{0}\) is never updated. Finally, the function \(g\) is a Rectified Linear Unit (ReLU) non-linearity.

The PC update rules attempt to minimize the prediction error between forward and feedback representations. More precisely, after an optimal number of cycles \(T_{opt}\), we will have \(C_{l}(t)\simeq D_{l}(t)\). The PC update rules thus tend toward a consistent feature representation: \(\forall l\in[|1,L|],C_{l}(t+1)\simeq C_{l}(t)\). However, this condition is not always met in practice, as it depends on the hardness of the sample being processed. For instance, hard images might require high cycling numbers to stabilize representations. Therefore, we do not wait until feature consistency is achieved to halt the computations, as proposed explicitly in [23]. Instead, we propose to use early exiting to adapt the number of cycles to the sample hardness.

### _PC Early Exit_

We implement early exit on the number of cycles as follows. After the first cycle, classifier \(1\) is applied to the convolutional feature map \(C_{l,f}(1)\). The classification confidence is then compared with a predefined user threshold. If the confidence is above the threshold, computations are aborted and a response is returned. Otherwise, a new cycle is initiated, followed by another classification and threshold comparison. In the proposed architecture, the number of classifiers equals the maximum number of cycles \(T\) allowed. The choice of \(T\) different classifiers, instead of one classifier shared by all cycles, is motivated by the fact that feature vectors are updated from one cycle to another. Hence, a classifier trained on a 5-cycle-model feature vector will not discern the patterns that a 1-cycle model extracts from the same input. Only when feature consistency is achieved can a shared classifier be sufficient. Nevertheless, since our target hardware is highly resource-constrained, we usually stop cycling before consistent feature representations are reached, hence the need for a classifier for each cycle.

### _PC Training_

Our model parameters are the forward, feedback, and \(T\) classifiers' weights, alongside the \(a_{l}\) and \(b_{l}\) update rates. These parameters are learned with classic back-propagation using a single cross-entropy loss function \(\mathcal{L}_{c_{i}}\) per classifier \(c_{i}\). Hence, we can train the network separately for a specific number of cycles corresponding to one of the classifiers. Moreover, after each backpropagation pass, the weights are frozen until the chosen number \(i\) of PC cycles is achieved, before recalculating the loss function. To improve the training performance, we can allow classifiers to collaborate with each other. Therefore, we propose that the backbone and \(T\) classifiers are jointly trained using scalarization, adopted from multi-objective optimization.

Fig. 1: Proposed model - the backbone cycles for \(t<T\) cycles.
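Before turning to the training loss, a toy sketch may help make the cycling-and-exit procedure concrete. This is a minimal NumPy illustration with a single hidden layer, random untrained weights, and hypothetical shapes; with only one hidden layer there is no feedback from above, so the top representation is updated purely from the reconstruction error of the input, a simplification of the update rules in Equations (1)-(2).

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda v: np.maximum(v, 0.0)
softmax = lambda v: np.exp(v - v.max()) / np.exp(v - v.max()).sum()

d_in, d_hid, n_cls, T = 32, 64, 10, 5
FF = rng.normal(scale=0.1, size=(d_hid, d_in))  # forward ("conv") weights
FB = rng.normal(scale=0.1, size=(d_in, d_hid))  # feedback ("deconv") weights
heads = [rng.normal(scale=0.1, size=(n_cls, d_hid)) for _ in range(T)]
a1 = 1.0                                        # forward update rate

x = rng.normal(size=d_in)   # input C_0; b_0 = 0, so it is never updated
C1 = relu(FF @ x)           # cycle 1 starts with a plain forward pass
for t in range(T):
    if t > 0:
        D0 = FB @ C1                          # feedback pass: predict the input
        C1 = relu(C1 + a1 * (FF @ (x - D0)))  # forward pass on the prediction error
    probs = softmax(heads[t] @ C1)            # one classifier per cycle
    if probs.max() > 0.9:                     # user-defined confidence threshold
        break                                 # early exit: halt further cycling
print(f"exited after cycle {t + 1} with confidence {probs.max():.2f}")
```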
The total loss is the weighted sum of each classifier's loss \(\mathcal{L}_{c_{i}}\), as defined in Equation (3):

\[\mathcal{L}_{tot}=\sum_{i=1}^{T}\lambda_{i}\mathcal{L}_{c_{i}} \tag{3}\]

where \(\lambda_{i}\) is a positive weight for the loss function \(\mathcal{L}_{c_{i}}\). Our proposed training procedure has the main disadvantage of being time-consuming. However, the underlying competition between classifiers over the shared weights of the backbone not only helps achieve a Pareto optimal solution, but also encourages PC cycles to achieve consistency early, as the loss guides the classifiers to collaborate in generating semantically similar feature vectors. Moreover, the \(\lambda_{i}\) coefficients control the exiting strategy: higher coefficients for the first classifiers (corresponding to small numbers of cycles) stimulate early exiting, and vice versa. In our experiments, exits were uniformly weighted: \(\forall i\in[|1,T|],\lambda_{i}=1/T\).

## IV Experiments

### _Dataset_

In order to evaluate our method, we choose the CIFAR-10 dataset [29]. It includes 60000 32x32 RGB images evenly distributed over 10 classes, with 6000 images each. CIFAR-10 is adopted by many tiny machine learning benchmarks [30], and its images are representative of numerous IoT applications that use low-resolution cameras (e.g., surveillance for eyewear protection detection and smart farming for fruit disease classification). For model learning, the training set is formed of 50000 images, with the remaining 10000 reserved for testing. As a data augmentation procedure, we applied random translation and horizontal flipping. The training set was split into batches of 64.

### _Training & Evaluation_

We choose to cycle over the PC backbone for a maximum of \(T=5\) cycles. Mini-batch gradient descent was employed for training via the AdamW optimizer, initialized with a learning rate of 0.001 and a weight decay of 0.01. Dropout of 10% was used for regularization. We run the model for 500 epochs. Moreover, PC models with one cycling number and one classifier were trained for 200 epochs. We adopt conventional accuracy as the performance metric for evaluation, since the dataset is balanced. The number of parameters, memory size, and number of floating-point operations (FLOPs) can be gathered from PyTorch libraries. Latency was computed for one sample on a Google Colaboratory CPU with 500 repetitions in the worst-case scenario (i.e., a sample exiting after all \(T=5\) cycles), as it represents the upper bound of our setting.

### _Model configuration_

The model design process was driven by the motivation of exploiting PC dynamics to build networks that are shallow, in terms of both depth and width, that could perform as well as the established VGG-16 architecture, and that could be deployed on edge devices with kilobytes (KB) to megabytes (MB) of memory. We design three models with convolutions sharing the same kernel size of 3x3 and stride of 1 but different numbers of channels, as shown in bold in Table I. For the sake of readability, we omit the PC deconvolutional layers, since they have inverted numbers of channels with respect to the feed-forward convolutions. Batch normalization is embedded in each convolutional block. Max-pooling of kernel size 2x2 is activated when the number of channels changes from one layer to another. The models presented in Table I are designed to challenge PC feature updates.
Hence, given a shallow Model A, we construct Model B to be wider and Model C to be deeper, in order to show empirically that recurrence can account for the improvements that width and depth bring to a neural network. Finally, our baselines are VGG-16 as well as PC models exclusively trained for \(T=5\) cycles with the aforementioned architectures in Table I, noted PCN-5-{A,B,C}. The custom baseline models PCN-5 are meant to attest to the validity of the chosen training scheme.

## V Discussion

Tables II and III aggregate the main results of our experiments from the efficiency and performance standpoints, respectively. We observe that the proposed models can fit the memory constraints of many edge processors, consuming only a few megabytes of storage. As expected, latency and the number of operations increase with model width and depth, without exceeding the reasonable amounts encountered in edge processing [4]. It is worth mentioning for Model B that a latency of 80 milliseconds (ms) is rather acceptable, given that the network has by then already performed 5 cycles through the backbone and computed 5 class probabilities, thus offering better performance and more accurate classification.

Based on Table II and Figure 2, Model A presents the most desirable properties for an edge application. It is noticeable that, starting from cycle \(T=2\), a linear relationship is found between cycles and FLOPs. This is explained by the fact that each additional cycle adds the same number of floating-point operations. Note, however, that the first cycle computes 2 feed-forward passes, as the input must first traverse the network, feed back, and then return to the higher layers near the classifier. More cycles thus grow the number of FLOPs linearly. However, given that our architecture is shallow, even reasonably large numbers of cycles barely approach the computational cost of a conventional deep network like VGG-16, as seen in the FLOPs column of Table II.

From a performance perspective, we observe from Table III an increase in accuracy across cycles, as more cycles allow more expressivity for the shallow network. This is especially the case when classes are hard to separate and the learned patterns must be highly distinctive. To appreciate the impact of PC updates, we note a significant increase of about 20% from \(T=1\) to \(T=5\) in Model B. Moreover, in order to challenge our training scheme, we report models trained for the predefined maximum number of cycles \(T=5\) and one classifier. We observe that models using joint training achieve better accuracy than their PCN-5 counterparts. This shows that, in a PC setting where consistency is sought, joint training encourages collaboration between classifiers. Furthermore, our proposed models are not very far from VGG-16 performance on CIFAR-10, with only a 3% difference for the 4-cycle and 5-cycle recurrent processing and a considerable reduction in the number of parameters, from the order of \(10^{8}\) (i.e., VGG-16) to the order of \(10^{5}\). Table III also shows comparable results between the three proposed models. Wide Model B and deep Model C reach almost the same performance as Model A, which is smaller and faster. Nevertheless, it is worth mentioning that width helped the model gain high accuracy at later cycles, and depth improved the internal classifiers' average performance.
Overall, we can conclude that PC dynamics might serve as a complementary key aspect for increasing model expressivity, along with depth and width, in feed-forward neural networks.

Figure 3 endorses the claims advanced in Section III-C. We can spot two main behaviors, besides the collective increase in accuracy, which is the goal of multi-objective optimization. The first is the fluctuations across epochs, which reveal the competition between the classifiers. These fluctuations are most pronounced in the first epochs, since PC dynamics have not yet found common ground between the feature vectors output by each cycle. The second behavior is also related to these fluctuations: we observe that, near the end of the optimization, the fluctuation amplitude diminishes and the five curves approach one another. This behavior is compatible with the underlying idea of PC dynamics in neural networks, which revolves around stability and consistency in the feature representations. Therefore, with joint training, a type of implicit knowledge distillation happens between the cycles. Hence, in numerous cases, a 1-cycle network will be able to yield a similar feature vector to a 6-cycle network, a property that is highly desirable for reducing computational cost.

Fig. 2: FLOPs and Latency on CPU across exits for Model A.

Fig. 3: Test accuracy per exit for Model A.

Finally, Figure 4 emphasizes the motivation behind using early exit. Given high thresholds, the network can still release a large number of well-classified images with high confidence at the first exit. Nevertheless, as the user threshold increases, we notice the importance of more cycles for classifying harder samples correctly.

## VI Conclusion

In this paper, we proposed a shallow network for image classification based on predictive coding dynamics and early exiting for resource-constrained edge devices. We found that PC dynamics can play a major role in yielding high accuracy without resorting to deep models. Since PC is based on minimizing an objective function that might demand high numbers of cycles, we employed early exiting to abort further computation once a user-predefined performance target is reached. Thus, the paper highlights how PC processing can lead to a significant reduction in memory footprint while achieving good accuracy. We intend to continue the present work by putting more emphasis on early exits through preference-vector-based multi-objective optimization. Furthermore, we will attempt to define a hardness measure that estimates beforehand the number of cycles needed, in order to avoid calling low-cycle classifiers.
2302.09205
Approximate Thompson Sampling via Epistemic Neural Networks
Thompson sampling (TS) is a popular heuristic for action selection, but it requires sampling from a posterior distribution. Unfortunately, this can become computationally intractable in complex environments, such as those modeled using neural networks. Approximate posterior samples can produce effective actions, but only if they reasonably approximate joint predictive distributions of outputs across inputs. Notably, accuracy of marginal predictive distributions does not suffice. Epistemic neural networks (ENNs) are designed to produce accurate joint predictive distributions. We compare a range of ENNs through computational experiments that assess their performance in approximating TS across bandit and reinforcement learning environments. The results indicate that ENNs serve this purpose well and illustrate how the quality of joint predictive distributions drives performance. Further, we demonstrate that the \textit{epinet} -- a small additive network that estimates uncertainty -- matches the performance of large ensembles at orders of magnitude lower computational cost. This enables effective application of TS with computation that scales gracefully to complex environments.
Ian Osband, Zheng Wen, Seyed Mohammad Asghari, Vikranth Dwaracherla, Morteza Ibrahimi, Xiuyuan Lu, Benjamin Van Roy
2023-02-18T01:58:15Z
http://arxiv.org/abs/2302.09205v1
# Approximate Thompson Sampling via Epistemic Neural Networks

###### Abstract

Thompson sampling (TS) is a popular heuristic for action selection, but it requires sampling from a posterior distribution. Unfortunately, this can become computationally intractable in complex environments, such as those modeled using neural networks. Approximate posterior samples can produce effective actions, but only if they reasonably approximate joint predictive distributions of outputs across inputs. Notably, accuracy of marginal predictive distributions does not suffice. Epistemic neural networks (ENNs) are designed to produce accurate joint predictive distributions. We compare a range of ENNs through computational experiments that assess their performance in approximating TS across bandit and reinforcement learning environments. The results indicate that ENNs serve this purpose well and illustrate how the quality of joint predictive distributions drives performance. Further, we demonstrate that the _epinet_ -- a small additive network that estimates uncertainty -- matches the performance of large ensembles at orders of magnitude lower computational cost. This enables effective application of TS with computation that scales gracefully to complex environments.

## 1 Introduction

Thompson sampling (TS) is one of the oldest heuristics for action selection in reinforcement learning (Thompson, 1933; Russo et al., 2018). It has also proved to be effective across a range of environments (Chapelle and Li, 2011). At a high level, it says to 'randomly select an action, according to the probability it is optimal.' This approach naturally balances exploration with exploitation, as the agent favours more promising actions but does not disregard any action that has a chance of being optimal. However, in its exact form, TS requires sampling from a posterior distribution, which becomes computationally intractable for complex environments (Welling and Teh, 2011).

Approximate posterior samples can also produce performant decisions (Osband et al., 2019). Recent analysis has shown that, if a sampled model is able to make reasonably accurate _predictions_, it can drive good decisions (Wen et al., 2022). But these results stress the importance of _joint_ predictive distributions -- or joint predictions, for short. In particular, accurate marginal predictive distributions do not suffice. Epistemic neural networks (ENNs) are designed to make good joint predictions (Osband et al., 2021). ENNs were introduced with a focus on classification problems, but we will show in this paper that the techniques remain useful in producing regression models for decision making.

This paper empirically evaluates the performance of approximate TS schemes that use ENNs to approximate posterior samples. We build upon _deep Q-networks_ (Mnih et al., 2015), but use ENNs to represent uncertainty in the state-action value function.

Figure 1: Performance of an approximate TS agent in a neural bandit using different ENNs. Epinet beats large ensembles at a fraction of the computational cost (Section 5).

Figure 1 offers a preview of our results. Among the ENNs we consider are ensembles of base models [22, 14] and a single base model enhanced with the recently proposed epinet, which is a small additive network that estimates uncertainty. **We find that, using an epinet, we can outperform large ensembles at orders of magnitude lower computational cost**.
More generally, we find that ENNs that produce better joint predictions in synthetic classification problems also perform better in decision problems.

### Key Contributions

We introduce ENN-DQN, which unifies algorithms that combine DQN and approximate TS. **We release an open-source library for all our experiments at enn_acme** (Section 4). This provides a valuable resource for clear and reproducible research in the field, and the first extensive investigation into the effectiveness of posterior samples in deep RL. Our work builds on the existing acme library for RL [11].

**We demonstrate a clear empirical relationship between the quality of joint predictions produced by an ENN and the performance of the resulting decisions.** ENNs that offer better joint predictions tend to produce better decisions in our benchmark tasks. Interestingly, this is true not only for the bandit environments of the neural testbed [22], but also for the bsuite benchmark reinforcement learning tasks designed to highlight key aspects of decision making [22].

Importantly, **we show that epinets outperform large ensembles, but at orders of magnitude lower computational cost.** This holds even for regression models, as in temporal difference (TD) learning, not just classification. These results are significant since prior work on ENNs had focused only on the quality of joint predictions [22]. We show that these results also extend to empirical decision making with deep learning systems.

### Related Work

This paper builds on a long literature around TS for efficient exploration [10, 19, 18]. Much of this work has focused on extending and refining performance guarantees for particular problem classes where exact Bayesian inference allows for efficient generalization between states and actions: from bandits with structure [18], to MDPs [22], to MDPs with generalization [22, 23, 24, 25]. However, in complex environments, even planning with full information may be intractable [20]. For this reason, so-called deep reinforcement learning (RL) algorithms use neural networks to directly assess the value and/or policy functions [19]. Most of these schemes employ simple dithering for exploration, such as epsilon-greedy or Boltzmann exploration. Relatively few approximate TS schemes have modified these algorithms to attempt to combine the best of deep RL with so-called 'deep exploration' [22]. Bootstrapped DQN [22] maintains an ensemble of networks as a proxy for neural network uncertainty, but this is just one particular approach popular in the Bayesian deep learning community. Other popular approaches include dropout [13], variational inference [15], or even stochastic Langevin MCMC [23]. However, research in this area has focused mainly on supervised learning tasks [16], with relatively little attention paid to the use of these Bayesian networks in driving effective decision making.

## 2 Problem Formulation

This section outlines the notation and problem setting. We begin with a review of the family of sequential decision problems we will consider. Next, we provide a quick overview of epistemic neural networks, which can make joint predictions without being Bayesian. Finally, we introduce the ENN-DQN variant that implements an approximate version of Thompson sampling.
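For reference, exact TS is tractable when the posterior has a conjugate form. The sketch below (arm means hypothetical) shows the sample-then-act loop for a Bernoulli bandit with Beta priors; the ENN machinery developed in this section replaces the exact posterior sample with an approximate one.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.3, 0.5, 0.7])       # hypothetical Bernoulli arm means
alpha = np.ones(3)                       # Beta(1, 1) prior per arm
beta = np.ones(3)

for t in range(2000):
    theta = rng.beta(alpha, beta)        # one posterior sample per arm
    a = int(np.argmax(theta))            # act greedily w.r.t. the sample
    r = float(rng.random() < true_p[a])  # Bernoulli reward
    alpha[a] += r                        # exact conjugate posterior update
    beta[a] += 1.0 - r

print("posterior means:", alpha / (alpha + beta))
```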
### Reinforcement Learning

We consider the problem of learning to optimize a random finite-horizon Markov decision problem (MDP) \(M^{*}\)=(\(\mathcal{S}\),\(\mathcal{A}\),\(R^{*}\),\(P^{*}\),\(\bar{s}\),\(\rho\)) over repeated episodes of interaction, where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(\bar{s}\in\mathcal{S}\) is the terminal state, and \(\rho\) is the initial state distribution. At the start of each episode the initial state \(s_{1}\) is drawn from the distribution \(\rho\). In each time period \(h=1,2,...\) within an episode, the agent observes a state \(s_{h}\in\mathcal{S}\). If \(s_{h}\neq\bar{s}\), the agent also selects an action \(a_{h}\in\mathcal{A}\), receives a reward \(r_{h+1}\sim R^{*}(\cdot|s_{h},a_{h})\), and transitions to a new state \(s_{h+1}\sim P^{*}(\cdot|s_{h},a_{h})\). An episode terminates once the agent arrives at the terminal state \(\bar{s}\). We use \(H\) to denote the horizon of an episode; note that \(H\) is in general a random variable, and the agent arrives at \(\bar{s}\) in period \(H+1\). The agent is given knowledge about \(\mathcal{S}\), \(\mathcal{A}\), \(\bar{s}\), and \(\rho\), but is uncertain about \(R^{*}\) and \(P^{*}\). The unknown MDP \(M^{*}\), together with its reward function \(R^{*}\) and transition function \(P^{*}\), is modeled as a random variable [10].

A policy \(\mu:\mathcal{S}\rightarrow\mathcal{A}\) maps a state \(s\in\mathcal{S}\) to an action \(a\in\mathcal{A}\). For each MDP \(M\) with state space \(\mathcal{S}\) and action space \(\mathcal{A}\), and each policy \(\mu\), we define the associated state-action value function as:

\[Q_{\mu}^{M}(s,a):=\mathds{E}_{\mu}\left[\sum_{h=1}^{H}r_{h+1}\,\Big{|}\,s_{1}=s,a_{1}=a,M^{*}=M\right], \tag{1}\]

where the subscript \(\mu\) under the expectation is shorthand for indicating that actions over periods \(h=2,...,H\) are selected according to the policy \(\mu\). Let \(V_{\mu}^{M}(s):=Q_{\mu}^{M}(s,\mu(s))\). We say a policy \(\mu^{M}\) is optimal for the MDP \(M\) if \(\mu^{M}(s)\in\arg\!\max_{\mu}V_{\mu}^{M}(s)\) for all \(s\in\mathcal{S}\). To simplify the exposition, we assume that under any MDP \(M\) and any policy \(\mu\), \(H<\infty\) with probability 1.

We use \(k\) to index episodes, and \(\mathcal{H}_{k}\) to denote the history of observations made _prior_ to episode \(k\). An RL algorithm is a deterministic sequence of functions, \(\{\pi_{k}|k=1,2,\ldots\}\), each mapping \(\mathcal{H}_{k}\) to a probability distribution \(\pi_{k}(\cdot|\mathcal{H}_{k})\) over policies, from which the agent samples a policy \(\mu_{k}\) for the \(k^{\text{th}}\) episode. Denote the regret of a policy \(\mu_{k}\) over episode \(k\) by

\[\Delta_{k}:=\sum_{s\in\mathcal{S}}\rho(s)(V_{\mu^{*}}^{M^{*}}(s)-V_{\mu_{k}}^{M^{*}}(s)), \tag{2}\]

where \(\mu^{*}\) is an optimal policy for \(M^{*}\). We define the expected regret incurred by an RL algorithm \(\pi\) up to episode \(K\) as

\[\text{Regret}(K,\pi):=\mathds{E}_{\pi}\left[\sum_{k=1}^{K}\Delta_{k}\right], \tag{3}\]

where the subscript \(\pi\) under the expectation indicates that policies are generated through algorithm \(\pi\). Note that the expectation in (3) is over the random transitions and rewards, the possible randomization in the learning algorithm \(\pi\), and the unknown MDP \(M^{*}\) under the agent designer's prior distribution.

### Epistemic Neural Networks

We construct RL agents based on epistemic neural networks (ENNs) [11].
A conventional neural network is specified by a parameterized function class \(f\), which produces an output \(f_{\theta}(x)\) given parameters \(\theta\) and an input \(x\). An ENN is specified by a parameterized function class \(f\) _and_ a reference distribution \(P_{Z}\). The output \(f_{\theta}(x,z)\) of an ENN depends additionally on an _epistemic index_ \(z\), sampled from the reference distribution \(P_{Z}\). Variation of the network output with \(z\) indicates uncertainty that might be resolved by future data.

All conventional neural networks can be written as ENNs, but this more general framing allows an ENN to represent the kinds of uncertainty necessary for effective sequential decision-making [20]. In particular, it allows an ENN to represent useful joint predictions. Consider a classification problem. Given inputs \(x_{1},\ldots,x_{\tau}\), a joint prediction assigns a probability \(\hat{P}_{1:\tau}(y_{1:\tau})\) to each class combination \(y_{1},\ldots,y_{\tau}\). Using an ENN to output class logits for each input, we can make expressive joint predictions by integrating over the epistemic index.

\[\hat{P}_{1:\tau}^{\text{ENN}}(y_{1:\tau})=\int_{z}P_{Z}(dz)\prod_{t=1}^{\tau}\text{softmax}\left(f_{\theta}(x_{t},z)\right)_{y_{t}}. \tag{4}\]

This sort of nuanced joint prediction shares many similarities with Bayesian neural networks (BNNs), which maintain a posterior distribution over plausible neural nets. However, unlike BNNs, ENNs do not necessarily ascribe Bayesian semantics to the unknown parameters of interest, and they do not generally update with Bayes' rule. All BNNs can be expressed as ENNs; for example, an ensemble of \(K\) networks \(f_{\theta_{1}},..,f_{\theta_{K}}\) can be written as an ENN \(\tilde{f}\) with reference distribution \(P_{Z}=\text{Unif}(\{1,..,K\})\) and \(\tilde{f}_{\theta}(x,z):=f_{\theta_{z}}(x)\) [11]. However, there are some ENNs that cannot be expressed naturally as BNNs.

### The Epinet

One example of a novel ENN is the _epinet_: a small additional network designed to estimate uncertainty [11]. An epinet is added to a _base network_: a conventional NN with base parameters \(\zeta\) that takes input \(x\) and outputs \(\mu_{\zeta}(x)\). The epinet acts on a subset of _features_ \(\phi_{\zeta}(x)\) derived from the base network, as well as an epistemic index \(z\) sampled from the standard normal in \(D_{Z}\) dimensions. For concreteness, you might think of \(\mu\) as a large neural network and \(\phi\) as the last-layer features. For epinet parameters \(\eta\), this produces a combined output:

\[\underbrace{f_{\theta}(x,z)}_{\text{ENN}}=\underbrace{\mu_{\zeta}(x)}_{\text{base net}}+\underbrace{\sigma_{\eta}(\text{sg}[\phi_{\zeta}(x)],z)}_{\text{epinet}}. \tag{5}\]

The ENN parameters \(\theta=(\zeta,\eta)\) include those of the base network and epinet\({}^{2}\). The epinet \(\sigma_{\eta}\) has a simple MLP-like architecture, with an internal _prior function_ designed to create an initial variation in index \(z\) [11]. That means, for \(\tilde{x}:=\text{sg}[\phi_{\zeta}(x)]\),

\[\underbrace{\sigma_{\eta}(\tilde{x},z)}_{\text{epinet}}=\underbrace{\sigma_{\eta}^{L}(\tilde{x},z)}_{\text{learnable}}+\underbrace{\sigma^{P}(\tilde{x},z)}_{\text{prior net}}. \tag{6}\]

Footnote 2: The “stop gradient” notation \(\text{sg}[\cdot]\) indicates the argument is treated as fixed when computing a gradient. For example, \(\nabla_{\theta}f_{\theta}(x,z)=[\nabla_{\zeta}\mu_{\zeta}(x),\nabla_{\eta}\sigma_{\eta}(\phi_{\zeta}(x),z)]\).
The prior network \(\sigma^{P}\) represents prior uncertainty and has no trainable parameters. The learnable network \(\sigma_{\eta}^{L}\) can adapt to the observed data with training. This paper focuses on simple neural networks based around MLPs with ReLU activation. Let \(C\) denote the number of classes and \(D_{Z}\) denote the index dimension. The learnable network is \(\sigma_{\eta}^{L}(\phi_{\zeta}(x),z)=g_{\eta}([\phi_{\zeta}(x),z])^{T}z\), where \(g_{\eta}(\cdot)\) is an MLP with outputs in \(\mathds{R}^{D_{Z}\times C}\), and \([\phi_{\zeta}(x),z]\) is the concatenation of \(\phi_{\zeta}(x)\) and \(z\). The prior network \(\sigma^{P}\) is a mixture of an ensemble of \(D_{Z}\) particles sampled from the distribution of the data generating model that acts directly on the input \(x\) (Section 4).

### ENN-DQN

We now motivate and develop ENN-DQN, a novel DQN-type agent for large-scale RL problems with value function approximation. Specifically, it uses an ENN to maintain a probability distribution over the state-action value function \(Q^{*}\), which may be thought of as an approximate posterior over the optimal state-action value function. We consider ENNs \(f_{\theta}(s,z)\in\Re^{|\mathcal{A}|}\) that take a state and an epistemic index, and output a real value for each action in \(\mathcal{A}\), similar to a DQN. ENN-DQN selects actions using Thompson sampling (TS). It can be viewed as a value-based approximate TS algorithm via an ENN.

Similar to existing work on ENNs [10], the agent needs to define a loss function to update the ENN parameters. In general, for a given ENN \(f_{\theta}\), a _target ENN_ \(f_{\theta^{\text{target}}}\), and an observed dataset \(\mathcal{D}\), the agent updates its ENN of the state-action value function by minimizing

\[\mathcal{L}(\theta,\theta^{\text{target}},\mathcal{D})=\mathds{E}_{z\sim P_{Z}}\left[\sum_{d\in\mathcal{D}}\ell(d,z;\theta,\theta^{\text{target}})\right]+\psi(\theta), \tag{7}\]

where \(\ell(d,z;\theta,\theta^{\text{target}})\) is the loss associated with the observed transition \(d=(s,a,r,s^{\prime})\) as well as the epistemic index \(z\), and \(\psi(\theta)\) is a regularization term. In this paper we use \(\psi(\theta)=\lambda\|\theta\|_{2}^{2}\) for some \(\lambda>0\), which corresponds to a Gaussian prior over \(\theta\). We will discuss the specific choices of \(\ell\) at the end of this section. Note that the target ENN is necessary for the stability of learning in many problems, as discussed in [14]. We optimize \(\mathcal{L}\) through stochastic gradient descent. At each gradient step, we sample a mini-batch of data \(\tilde{\mathcal{D}}\) and a batch of indices \(\tilde{\mathcal{Z}}\) from \(P_{Z}\), and we take a gradient step with respect to the loss

\[\tilde{\mathcal{L}}(\theta,\theta^{\text{target}},\tilde{\mathcal{D}},\tilde{\mathcal{Z}})=\frac{|\mathcal{D}|}{|\tilde{\mathcal{D}}|}\frac{1}{|\tilde{\mathcal{Z}}|}\sum_{z\in\tilde{\mathcal{Z}}}\sum_{d\in\tilde{\mathcal{D}}}\ell(d,z;\theta,\theta^{\text{target}})+\psi(\theta). \tag{8}\]

Algorithm 1 describes the ENN-DQN agent. Specifically, at each episode \(k\), the agent samples an epistemic index \(z_{k}\) and takes actions greedily with respect to the associated state-action value function \(f_{\theta}(\cdot,z_{k})\). The agent updates the ENN parameters \(\theta\) in each episode according to (8), and it updates the target parameters \(\theta^{\text{target}}\) periodically.
```
0:  initial parameters \(\theta_{0}\), ENN for action-value function \(f_{\theta}(s=\cdot,z=\cdot)\) with reference distribution \(P_{Z}\)
1:  \(\theta^{\text{target}}\leftarrow\theta_{0}\)
2:  initialize buffer
3:  for episode \(k=1,2,...\) do
4:      sample index \(z_{k}\sim P_{Z}\)
5:      \(h\leftarrow 1\)
6:      observe \(s_{k,1}\)
7:      while \(s_{k,h}\neq\bar{s}\) do
8:          apply \(a_{k,h}\in\arg\max_{a}f_{\theta}(s_{k,h},z_{k})_{a}\)
9:          observe \(r_{k,h+1},s_{k,h+1}\)
10:         buffer.add\(((s_{k,h},a_{k,h},r_{k,h+1},s_{k,h+1}))\)
11:         \(\theta,\ \theta^{\text{target}}\leftarrow\text{update}(\text{buffer},\theta,\theta^{\text{target}})\)
12:         \(h\leftarrow h+1\)
```
**Algorithm 1** ENN-DQN agent

Finally, we discuss the choice of the data loss function \(\ell\), which is usually problem-dependent. For bandit problems with discrete rewards, such as the finite Bernoulli bandits we consider in Section 3 or the neural bandit we consider in Section 5, we use the classic cross-entropy loss. For general RL problems, such as the ones we consider in Section 6, we use the quadratic temporal difference (TD) loss

\[\ell(d,z;\theta,\theta^{\text{target}})=\left(f_{\theta}(s,z)_{a}-r-\gamma\max_{a^{\prime}}f_{\theta^{\text{target}}}(s^{\prime},z)_{a^{\prime}}\right)^{2},\]

where \(\gamma\in[0,1]\) is a discount factor chosen by the agent which reflects its planning horizon. Our next section examines the performance of this style of agent in a simplistic decision problem.

## 3 Analysis in Bandits

The quality of decision-making in RL relies crucially on the quality of _joint_ predictions. As established in [20], accurate _joint_ predictions are both necessary and sufficient for effective decision-making in bandit problems. To help build intuition, we present a simple, didactic bandit example in this section.

**Example 1** (Bandit with one unknown action).: _Consider a bandit problem with \(A\) actions. The rewards for actions \(1,..,A-1\) are known to be independently drawn from Bernoulli(0.5). The final action \(A\) is deterministic, rewarding either 0 or 1, and both environments are equally likely._

The optimal strategy to maximize the cumulative reward in Example 1 is to first select the uncertain action \(A\) and, if it is rewarding, pick it for all future timesteps; otherwise, default to any of the actions \(1,..,A-1\). The exact Thompson sampling algorithm incurs \(\mathcal{O}(1)\) regret in this example. However, depending on the quality of the ENN approximation, approximate TS based on an ENN can sometimes do much worse. To see this, note that action \(A\) is indistinguishable from the other actions based on marginal predictions. Consequently, any agent making decisions only based on marginal predictions cannot perform better than a random guess and will incur \(\mathcal{O}(A)\) regret in Example 1. On the other hand, the results of Wen et al. (2022) show that suitably accurate _joint_ predictions, that is, predictions over the possible rewards \(r_{1},..,r_{\tau}\) for \(\tau\) time steps into the future, _do_ suffice to ensure good decision performance for a variant of the approximate TS algorithm (see Theorem 5.1 of that paper). Indeed, for Example 1 even \(\tau=2\) will suffice, as the agent can distinguish the informative action \(A\), which, if selected, places all probability on both rewards being 1 or both being 0.
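To make the update step in Algorithm 1 concrete, the following is a minimal sketch of the index-averaged quadratic TD loss of Equation (8) on a toy tabular problem. The ENN here is a deliberately trivial linear stand-in with hypothetical shapes, not one of the paper's architectures, and the regularizer \(\psi(\theta)\) and scaling constants are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_action, d_z, gamma = 4, 3, 8, 0.99

def f(theta, s, z):
    """Toy ENN Q-head: linear in (one-hot state, epistemic index z)."""
    return theta @ np.concatenate([np.eye(n_state)[s], z])  # shape (n_action,)

theta = rng.normal(scale=0.1, size=(n_action, n_state + d_z))
theta_target = theta.copy()

def td_loss(theta, theta_target, batch, zs):
    loss = 0.0
    for z in zs:                            # average over indices z ~ P_Z
        for (s, a, r, s_next) in batch:
            target = r + gamma * f(theta_target, s_next, z).max()
            loss += (f(theta, s, z)[a] - target) ** 2
    return loss / len(zs)

batch = [(0, 1, 1.0, 2), (2, 0, 0.0, 3)]    # (s, a, r, s') transitions
zs = rng.normal(size=(20, d_z))             # 20 i.i.d. index samples
print(f"TD loss on toy batch: {td_loss(theta, theta_target, batch, zs):.3f}")
```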
## 4 Benchmark ENNs

Our results build on open-source implementations of Bayesian deep learning, tuned for performance in the Neural Testbed (Osband et al., 2022). Table 1 shows the agents we consider. This section reviews the key results and evaluation of these agents on the Neural Testbed benchmark, then outlines the open-source libraries that we release together with our paper submission.

### Neural Testbed

The Neural Testbed poses a prediction problem generated by a random neural network. The generative model is a simple 2-layer MLP with ReLU activations and 50 hidden units in each layer. We outline the agent implementations in Table 1, together with the hyperparameters that were tuned for their performance. Since we take open-source implementations, we do not re-tune the settings for either the testbed or the decision problems, except where explicitly mentioned. For our epinet agent, we initialize the base network \(\mu_{\zeta}(x)\) as per the baseline mlp agent. The agent architecture follows Section 2.3, and we tune the index dimension and hidden widths for performance and compute. After tuning, we chose epinet hidden layer widths \((15,15)\), with an index dimension of 8 and a standard Gaussian reference distribution.

Figure 2 shows the results of evaluating these benchmark agents on the Neural Testbed in both marginal (\(\tau=1\)) and joint (\(\tau=10\)) predictions over 10 random seeds, each seed working over many internal generative model instances. After tuning, most of the agents perform similarly in terms of marginal predictions, and are statistically indistinguishable from the well-tuned baseline MLP at 2 standard errors. However, once we look at _joint_ predictions, we can see significant differences in agent performance. Importantly, the epinet matches the performance of large ensembles, but at orders of magnitude lower computational cost. In the rest of this paper we will see that this difference in joint prediction is highly correlated with the resultant agent performance in decision problems.

### Open-Source Code

As part of our research effort we release all code necessary to reproduce our experimental results. These experiments do not require access to specialized hardware, and can be run on typical cloud computing for less than 10 USD. Our code builds principally on two existing open-source libraries: enn (Osband et al., 2021) and acme (Hoffman et al., 2020). These provide frameworks for ENN and RL agent design, respectively. To run our experiments on the neural bandit, we make minor edits to the neural_testbed library (Osband et al., 2022), which we anonymize as part of our submission. Our main contribution comes in the enn_acme library, which contains the ENN-DQN algorithm, together with the experiments and implementation details. This library allows for simple comparison between different Bayesian (and non-Bayesian) ENNs for use in deep RL experiments. We believe that it will provide a useful base for future research in the area.

Figure 2: Evaluating quality of marginal and joint predictions on the Neural Testbed.

## 5 Neural Bandit

In this section we present an empirical evaluation of the ENNs from Table 1 on a 'neural bandit' problem. We begin by describing the environment, which is derived from the open-source Neural Testbed for evaluating joint predictions (Osband et al., 2022). Then, we review the agent structure, with the details of the ENN-DQN variant we employ. Finally, we review the results, which show that ENNs that perform better in joint prediction tend to drive better decisions.
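Since the distinction between marginal and joint predictions drives the comparisons that follow, a small Monte Carlo sketch of the joint prediction in Equation (4) may be useful. The two-class ENN here is a toy stand-in with a hypothetical form, not one of the benchmark agents.

```python
import numpy as np

rng = np.random.default_rng(0)
d_z = 4

def enn_logits(x, z):
    """Toy two-class ENN: the second logit varies with the index z."""
    return np.array([0.0, x @ z])

def joint_prediction(xs, ys, n_index=1000):
    """Monte Carlo estimate of Eq. (4): P(y_1..y_tau | x_1..x_tau)."""
    total = 0.0
    for _ in range(n_index):
        z = rng.normal(size=d_z)            # z ~ P_Z (standard Gaussian)
        p = 1.0
        for x, y in zip(xs, ys):
            logits = enn_logits(x, z)
            probs = np.exp(logits) / np.exp(logits).sum()
            p *= probs[y]                   # product over the tau inputs
        total += p
    return total / n_index

xs = [rng.normal(size=d_z) for _ in range(2)]   # tau = 2 inputs
print("P(y=1, y=1):", joint_prediction(xs, ys=[1, 1]))
```

With \(\tau=1\) this reduces to a marginal prediction; for \(\tau>1\) the product inside the expectation couples outputs across inputs, which is exactly the structure that marginals cannot capture.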
### Environment

The neural bandit (Osband et al., 2022) is an environment where rewards are generated by neural-network-based generating processes. We take the 2-layer MLP generative model from the Neural Testbed (Section 4). We consider \(N=1000\) actions, drawn i.i.d. from a 100-dimensional standard normal distribution. At each timestep, the reward of selecting an action \(a\) is generated by first forwarding the vector \(a\) through the MLP, which gives 2 logit outputs. A reward in \(\{0,1\}\) is then sampled according to the class probabilities obtained by applying softmax to the logits. Our agents re-use the ENN architectures from Section 4 to estimate value functions that predict immediate rewards (i.e., they apply a discount factor of 0). We run the agents for 50,000 timesteps and average results over 30 random seeds.

We consider this a simple, sanitised problem where we have complete control over the generative model, but also know that a deep learning architecture is appropriate for inference. We hope that this clean and simple proof of concept can help facilitate understanding. This problem represents a neural network variant of the finite-armed bandit problem of Section 3.

### Agents

We run the ENN-DQN agents for all of the ENNs of Table 1. Since each episode lasts only one timestep, we train with the cross-entropy loss on observed rewards. We apply an \(L_{2}\) weight decay scheme that anneals with \(1/N\) for \(N\) observed datapoints. As outlined in Table 1, we tune the \(L_{2}\) decay for each of these agents to maximize performance. We use a replay buffer of size 10,000 and update the ENN parameters after each observation with one stochastic gradient step, computed using a batch of 128 observations from the replay buffer and a batch of i.i.d. index samples from \(P_{Z}\). To compute the gradient, the epinet agent used a batch of 5 index samples, and the other agents used the respective default values specified in [https://github.com/deepmind/neural_testbed](https://github.com/deepmind/neural_testbed). We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.001 for updating the ENN parameters based on the gradient.

### Results

The results of Figure 1 clearly show that the epinet leads to lower total regret than the other ENNs. These results are particularly impressive once you compare the computational costs of the epinet against the other methods. Figure 3 looks at the average regret through time over the 50,000 steps of interaction. We can clearly see that the epinet attains lower regret at all stages of learning. These results are significant in that they are among the first to show the benefits of the epinet in an actual decision problem.

The scatter plots of Figure 4 report the correlation between prediction quality on the Neural Testbed and bandit performance.
\begin{table}
\begin{tabular}{|l|l|l|}
\hline
**agent** & **description** & **hyperparameters** \\
\hline
mlp & Vanilla MLP & \(L_{2}\) decay \\
ensemble & ‘Deep Ensemble’ (Lakshminarayanan et al., 2017) & \(L_{2}\) decay, ensemble size \\
dropout & Dropout (Gal and Ghahramani, 2016) & \(L_{2}\) decay, network, dropout rate \\
hypermodel & Hypermodel (Dwaracherla et al., 2020) & \(L_{2}\) decay, prior, index dimension \\
ensemble+ & Ensemble + prior functions (Osband et al., 2018) & \(L_{2}\) decay, ensemble size, prior scale \\
epinet & Last-layer epinet (Osband et al., 2021) & \(L_{2}\) decay, network, prior, index dimension \\
\hline
\end{tabular}
\end{table}
Table 1: Summary of benchmark agents, taken from the Neural Testbed (Osband et al., 2022).

Figure 3: Regret through time for different ENNs.

The multiple points for any given agent represent results generated with different random seeds. The plot titles provide the estimated correlation, together with bootstrapped confidence intervals at the 5th and 95th percentiles. Concretely, 'correlation=-0.01 (-0.23, 0.21)' in Figure 4(a) means that the correlation is estimated at -0.01, but the bootstrapped distribution of correlation estimates has a 5th percentile at -0.23 and a 95th percentile at 0.21. However, examining the corresponding correlation of 0.73 in Figure 4(b), with confidence intervals at (0.65, 0.81), we can see that agents with accurate joint predictions tend to perform better in the neural bandit. These results mirror the previous results of Osband et al. (2022), but now include the epinet agent, which continues to follow this trend.

## 6 Behaviour Suite for RL

This section repeats the evaluation of Section 5, but in reinforcement learning problems with long-term consequences. We review the set of environments and benchmarks included in bsuite (Osband et al., 2020). Next, we provide implementation details of our ENN-DQN algorithms. Finally, we present the results which, at a high level, mirror those of the bandit setting.

### Environment

The behaviour suite for reinforcement learning, or bsuite for short, is a collection of environments carefully designed to investigate core capabilities of RL agents (Osband et al., 2020). We repeat our analysis of ENNs applied to these environments. We use the ENNs from Section 5 to estimate value functions with discount \(\gamma=0.99\). For all agents using prior functions (ensemble+, hypermodel, and epinet), we scale the value prior to have mean 0 and variance 1, based on the observations over the first 100 timesteps under a random action policy. We choose to work with bsuite since these are challenging environments designed by RL researchers and _not_ given by neural network generative models. In addition, these problems are created with particularly challenging issues in exploration, credit assignment, and memory that do not arise in the neural testbed. Evaluating on these extreme, but simple, tasks allows us to stress test our methodology.

### Agents

We run the ENN-DQN agents for all of the ENNs of Table 1. All agents use a replay buffer of size 10,000 and update the ENN parameters after each interaction with the environment. Each update consists of taking a step in the direction of the gradient of the loss function, Equation (8), using a batch of 128 observations from the replay buffer and a batch of 20 i.i.d. index samples from the reference distribution. We use a discount factor \(\gamma=0.99\) for all ENN agents in our experiments.
For epinet we use a similar architecture to Section 4, but with only a single-hidden-layer epinet with 50 hidden units, along with a 2-hidden-layer MLP base model and a 2-dimensional standard Gaussian reference distribution. We use a single set of hyperparameters for all the bsuite environments. However, different bsuite environments have different maximum possible rewards, and a single value of prior scale might not suffice for all the environments. To overcome this, we first run a uniform random action policy, which samples actions with equal probability from the set of possible actions, for 100 time steps. We use this data to scale the output of the prior value functions to have mean 0 and variance 1 for all the agents which use prior functions (hypermodel, ensemble+, and epinet). Appendix A presents a detailed breakdown of the performance of different agents across environments.

Figure 4: Relating bandit performance to prediction quality in the Neural Testbed.

### Results

In bsuite, an agent is assigned a score for each experiment. Figure 5 plots the "bsuite loss", which we define to be one minus the average score, against computational cost. Once again, epinet performs similarly to large ensembles, but at orders of magnitude less computational cost. Empirically, we observe the biggest variation with ENN design in the 'DeepSea' environments designed to test efficient exploration. Here, only the epinet and ensemble+ agents are able to consistently solve large problem sizes. We include a more detailed breakdown of agent performance by competency in Appendix A.

The scatter plots of Figure 6 report the correlation between prediction quality on the Neural Testbed and bsuite performance. The multiple points for any given agent represent results generated with different random seeds. The plot titles provide the estimated correlation, together with bootstrapped confidence intervals at the 5th and 95th percentiles, just as in Figure 4. Once again, our results mirror those of the Neural Testbed. Agents that produced accurate joint predictions performed well in the bsuite. However, the quality of marginal predictions showed no strong relation with performance on bsuite. These results are significant for several reasons. First, we show that the high-level observation that joint prediction quality relates to decision performance extends beyond synthetic neural network generative models. Further, these results occur even when we move beyond the simple classification setting of one-step rewards, towards a multi-step TD learning algorithm. Taken together, these provide a broader form of robustness around the efficacy of learning with epinet, and the importance of predictions beyond marginals.

## 7 Conclusion

This paper investigates the use of different epistemic neural networks to drive approximate Thompson sampling in decision problems. We find that, on average, ENNs that perform better in joint prediction on the Neural Testbed also tend to perform better in decision problems. These results are particularly significant in that they appear to be somewhat robust to the structure of the environment's generative model, with predictive power even when the tasks are very different from a 2-layer ReLU MLP. Importantly, our experiments show that novel ENN architectures such as the epinet are able to match or even outperform existing approaches at orders of magnitude lower computational cost.
This is the first paper to extend those results from the somewhat synthetic task of joint prediction to actual decision making. We believe that this work, together with the open-source code, can help lay the groundwork for future research into effective ENN architectures for better decision making in large deep learning systems.

Figure 5: Evaluating performance and computational costs on the bsuite reinforcement learning benchmark.

Figure 6: Relating bsuite performance to prediction quality in the Neural Testbed.

## Acknowledgements

We thank John Maggs for organization and management of this research effort and Rich Sutton, Yee Whye Teh, Geoffrey Irving, Koray Kavukcuoglu, Vlad Firoiu, Botao Hao, Grace Lam, Mehdi Jafarnia and Satinder Singh for helpful discussions and feedback.
2308.01602
Deep Learning-based surrogate models for parametrized PDEs: handling geometric variability through graph neural networks
Mesh-based simulations play a key role when modeling complex physical systems that, in many disciplines across science and engineering, require the solution of parametrized time-dependent nonlinear partial differential equations (PDEs). In this context, full order models (FOMs), such as those relying on the finite element method, can reach high levels of accuracy, although they often yield computationally intensive simulations. For this reason, surrogate models are developed to replace computationally expensive solvers with more efficient ones, which can strike favorable trade-offs between accuracy and efficiency. This work explores the potential usage of graph neural networks (GNNs) for the simulation of time-dependent PDEs in the presence of geometrical variability. In particular, we propose a systematic strategy to build surrogate models based on a data-driven time-stepping scheme where a GNN architecture is used to efficiently evolve the system. With respect to the majority of surrogate models, the proposed approach stands out for its ability to tackle problems with parameter-dependent spatial domains, while simultaneously generalizing to different geometries and mesh resolutions. We assess the effectiveness of the proposed approach through a series of numerical experiments, involving both two- and three-dimensional problems, showing that GNNs can provide a valid alternative to traditional surrogate models in terms of computational efficiency and generalization to new scenarios. We also assess, from a numerical standpoint, the importance of using GNNs, rather than classical dense deep neural networks, for the proposed framework.
Nicola Rares Franco, Stefania Fresca, Filippo Tombari, Andrea Manzoni
2023-08-03T08:14:28Z
http://arxiv.org/abs/2308.01602v1
# Deep Learning-based surrogate models for parametrized PDEs: handling geometric variability through graph neural networks

###### Abstract

Mesh-based simulations play a key role when modeling complex physical systems that, in many disciplines across science and engineering, require the solution of parametrized time-dependent nonlinear partial differential equations (PDEs). In this context, full order models (FOMs), such as those relying on the finite element method, can reach high levels of accuracy, although they often yield computationally intensive simulations. For this reason, surrogate models are developed to replace computationally expensive solvers with more efficient ones, which can strike favorable trade-offs between accuracy and efficiency. This work explores the potential usage of graph neural networks (GNNs) for the simulation of time-dependent PDEs in the presence of geometrical variability. In particular, we propose a systematic strategy to build surrogate models based on a data-driven time-stepping scheme where a GNN architecture is used to efficiently evolve the system. With respect to the majority of surrogate models, the proposed approach stands out for its ability to tackle problems with parameter-dependent spatial domains, while simultaneously generalizing to different geometries and mesh resolutions. We assess the effectiveness of the proposed approach through a series of numerical experiments, involving both two- and three-dimensional problems, showing that GNNs can provide a valid alternative to traditional surrogate models in terms of computational efficiency and generalization to new scenarios. We also assess, from a numerical standpoint, the importance of using GNNs, rather than classical dense deep neural networks, for the proposed framework.

**Short summary** Geometric variability is a major obstacle in surrogate modeling, as classical approaches, such as the reduced basis method, can account for such a degree of complexity only under severe simplifications, in an intrusive way, and at remarkable computational costs. In this paper, we propose the use of graph neural networks to efficiently evolve dynamical systems defined on different domains and geometries. The networks are trained on a collection of trusted samples, obtained through accurate numerical simulations, and are shown to be capable of generalizing to unseen geometries without loss of accuracy. Despite being assessed on a series of simplified test cases, numerical results suggest that the proposed approach can pave a new way for handling geometric variability in surrogate modeling, potentially leading to novel methodologies capable of combining GNNs and classical techniques.

## 1 Introduction

Thanks to accurate and reliable numerical simulations, we are now able to simulate, monitor and forecast very complex physical phenomena such as those arising in computational physics, biology and engineering. However, when it comes to many-query applications, such as, e.g., optimal control and uncertainty quantification tasks, the elevated computational cost constitutes a major limitation that hinders the effective potential of numerical simulations. As already explored by several researchers, one way to overcome this complexity is to rely on _surrogate models_: suitable emulators that are capable of replicating the outputs of classical PDE solvers - hereafter referred to as full order models (FOMs) - at a reduced computational cost. This practice is also known as Reduced Order Modeling (ROM).
As of today, domain practitioners can count on a very large number of ROM techniques, each with its own advantages and limitations. Just to mention some of them, these include: intrusive and non-intrusive projection-based ROMs [30, 37, 21, 22, 15, 16], which can effectively tackle diffusive problems, especially in the case of affinely parametrized operators; adaptive methods based on, e.g., ROM augmentation [7, 20], clustering [19], interpolation [1] or space-time splittings [32], which are particularly suited for modeling shock waves, Hamiltonian systems, etc.; nonlinear reduction techniques based on, e.g., spectral submanifolds [6, 27] and library representations [4], which can provide users with solid theoretical guarantees; Deep Learning-based ROMs (DL-ROMs) relying on deep autoencoders, which, if provided with enough data, can address both stationary and time-dependent problems, even in the presence of severe nonlinearities and singular behaviors [12, 10, 13, 5, 8, 9, 38].

All these approaches are grounded on a common assumption, that is: the underlying FOM must be identified once and for all, with a fixed spatial discretization and a precise number of degrees of freedom (dofs) \(N_{h}\). This fact, however, poses a major limitation when having to deal with PDEs defined over parametrized domains. Assume for instance that the governing equations depend on a vector of geometrical parameters \(\mathbf{\mu}\), which can affect the shape and the configuration of the underlying spatial domain \(\Omega=\Omega_{\mathbf{\mu}}\). Then, at the discrete level, each \(\mathbf{\mu}\) instance will correspond to a suitable high-fidelity mesh \(\mathcal{M}_{\mathbf{\mu}}^{h}\) entailing \(N_{\mathbf{\mu}}^{h}\) dofs. The issue, here, is that as soon as we change the value of the geometrical parameters, say from \(\mathbf{\mu}\) to \(\mathbf{\mu}^{\prime}\neq\mathbf{\mu}\), the total number of dofs might change, \(N_{\mathbf{\mu}^{\prime}}^{h}\neq N_{\mathbf{\mu}}^{h}\); furthermore, in general, even if \(N_{\mathbf{\mu}^{\prime}}^{h}=N_{\mathbf{\mu}}^{h}\), we will not be able to match the dofs in the two meshes. For projection-based ROMs, this makes the construction of a unique projection matrix \(\mathbf{V}\in\mathbb{R}^{N_{h}\times n}\) impossible; similarly, we cannot rely on naive DL-ROMs as these would require the construction of an autoencoder network \(\Psi\circ\Psi^{\prime}\) with \(\Psi^{\prime}:\mathbb{R}^{N_{h}}\to\mathbb{R}^{n}\) and \(\Psi:\mathbb{R}^{n}\to\mathbb{R}^{N_{h}}\). Local techniques based on clustering algorithms can partially resolve this issue by providing a different projector \(\mathbf{V}_{i}\) for each parametric instance \(\mathbf{\mu}_{i}\) observed in the so-called _offline stage_, with \(i=1,\ldots,q\). However, this approach would incur severe limitations during its _online_ usage, as the resulting ROM would not be applicable whenever a new parametric instance \(\mathbf{\mu}\notin\{\mathbf{\mu}_{i}\}_{i=1}^{q}\) is given. Similar issues are encountered when dealing with FOMs that use mesh-adaptive strategies, even if the geometry is kept fixed.

The purpose of this work is to overcome these limitations by relying on Graph Neural Networks (GNNs), thus providing a flexible approach to surrogate modeling that is capable of handling geometric variability and generalizing to unseen geometries.
The idea is inspired by the recent successes of GNNs in scientific applications [44, 45, 23] and shares some similarities, which we discuss below, with other recent works.

### GNNs in surrogate and reduced order modeling

GNNs are a particular class of neural network architectures that were originally proposed as a way to handle statistical data defined over graphs [42, 3]. To simplify, given a (directed) graph \(G=(V,E)\) with vertices \(V\) and edges \(E\subseteq V\times V\), a GNN is a computational unit that can receive a set of node features at input, \(\mathbf{v}:V\to\mathbb{R}^{l}\), and return a corresponding set of node features at output \(\mathbf{v}^{\prime}:V\to\mathbb{R}^{l^{\prime}}\). The same GNN unit can process data coming from different graphs. The only restrictions are: i) the size of the input and output features, \(l\) and \(l^{\prime}\), respectively; ii) the fact that each input-output pair must be defined over the same graph. This is possible because, differently from other architectures such as dense deep feed-forward networks (DNNs), GNNs adopt a local perspective: information is processed at the nodal level through a combination of message-passing steps (communication of nearby nodes) and aggregation routines. This added flexibility makes GNNs capable of handling data defined over different graphs and, eventually, provides them with the ability to generalize over unseen geometries. In the Deep Learning literature, this fact is known as _relational inductive bias_. In general, the term _inductive bias_ refers to the ability of a learning algorithm to prioritize one solution (or interpretation) over another, independently of the observed training data, and it can express (explicitly or implicitly) assumptions about either the data-generating process or the space of solutions [3]. In the case of GNNs, the implicit assumption is that the output of a given neuron is primarily affected by its neighbouring neurons (thus the term _relational_), so that local effects are stronger than global ones.

Our idea for the present work is to exploit the capabilities of GNNs in order to learn a nonintrusive data-driven time-stepping scheme for evolving high-dimensional parameter-dependent dynamical systems. To this end, we interpret discrete FOM solutions \[\mathbf{u}_{\boldsymbol{\mu}}=\left[\mathbf{u}_{\boldsymbol{\mu},1},\ldots,\mathbf{u}_{\boldsymbol{\mu},N_{\boldsymbol{\mu}}^{h}}\right]^{T}\in\mathbb{R}^{N_{\boldsymbol{\mu}}^{h}}\] as collections of nodal features \(\boldsymbol{u}_{\boldsymbol{\mu}}:V_{\boldsymbol{\mu}}\rightarrow\mathbb{R}\), where \[V_{\boldsymbol{\mu}}=\{\mathbf{x}_{\boldsymbol{\mu},i}\}_{i=1}^{N_{\boldsymbol{\mu}}^{h}}\] are the vertices of the underlying mesh (sorted coherently with the FOM dofs), so that \[\boldsymbol{u}_{\boldsymbol{\mu}}\left(\mathbf{x}_{\boldsymbol{\mu},i}\right):=\mathbf{u}_{\boldsymbol{\mu}}^{(i)}.\] Then, this graph-mesh equivalence allows us to construct a GNN module that can evolve discrete solutions defined over different meshes (and different domains). This work finds its main inspiration in a recent contribution by Pfaff et al. [34], where the authors propose a GNN architecture for learning mesh-based simulations in a time-dependent framework. Our purpose is to transpose their ideas to the realm of ROM for parametrized PDEs, and to propose a systematic approach for handling geometric variability.
To this end, we shall adopt a purely mathematical perspective, so as to convey the overall idea in the language that ROM practitioners are mostly familiar with. Nonetheless, aside from the surrounding framework and the mathematical formalism, our proposal also features a few practical differences with respect to the work by Pfaff et al., namely: i) the introduction of _global features_, which we use to extend the overall approach to nonautonomous systems and, possibly, to PDEs that depend both on physical and geometrical parameters; ii) the definition of the loss function, which we complement with an additional term concerning the approximation of the time-derivative; iii) an explicit superimposition of a Runge-Kutta-like time-stepping scheme. In this sense, our work is much closer to the one by Pegolotti et al. [33], where the authors explore the use of GNNs for reduced order modeling of cardiovascular systems. Still, their framework remains quite different from ours as they only consider a fixed number of possible geometries, thus not allowing for a continuous parametrization, and they focus on a specific physical system. A more flexible use of GNNs is found in the recent contribution by Gladstone et al. [14], where a similar paradigm is exploited to emulate classical PDE solvers. Their analysis, however, is limited to time-independent PDEs and does not transfer to dynamical systems. Finally, for what concerns surrogate and reduced order modeling, we mention that some authors are also exploring the integration of GNNs together with ROM techniques: see, e.g., the GCA-ROM, a GNN variation of the DL-ROM approach recently proposed by Pichi et al. [35] Nonetheless, these techniques are quite different from our proposal, as, in order to tackle both stationary and time-dependent PDEs, they neglect the dynamical nature of the system, that is: they treat time as an additional parameter, thus ignoring the Markovian structure that characterizes the majority of evolution equations.

### Outline of the paper

The paper is organized as follows. First, in Section 2, we formally introduce the problem of surrogate modeling for parametrized dynamical systems. Then, in Section 3, we provide the reader with the fundamental building blocks required for our construction and present the corresponding GNN architectures. We then put things into action in Section 4, where we dive into the details of the proposed approach. Finally, we devote Section 5 to the numerical experiments.

## 2 Modeling time-dependent PDEs

We consider a PDE system depending on a set of input parameters \(\boldsymbol{\mu}\in\Theta\), where the parameter space \(\Theta\subset\mathbb{R}^{p}\) is a bounded and closed set; in our analysis, input parameters may represent both physical and geometrical properties of the system, like, e.g., material properties, boundary conditions, or the shape of the domain itself. For the time being, however, we focus on the treatment of geometrical parameters, since the extension to the case where both physical and geometrical parameters are present is straightforward. Throughout the paper, we adopt a fully algebraic perspective and assume that the governing equations have already been discretized in space by means of a suitable high-fidelity approximation - which, here, is allowed to depend on \(\boldsymbol{\mu}\) - such as, e.g., the finite element method. Regardless of the spatial discretization adopted, the FOM can be expressed as a nonlinear, high-dimensional parametrized dynamical system.
Hence, given \(\boldsymbol{\mu}\in\Theta\), we aim at solving the initial value problem:
\[\begin{cases}\dot{\mathbf{u}}_{\boldsymbol{\mu}}(t)=\mathbf{f}(t,\mathbf{u}_{\boldsymbol{\mu}}(t),\boldsymbol{\mu})\,,\qquad t\in(0,T),\\ \mathbf{u}_{\boldsymbol{\mu}}(0)=\mathbf{g}_{\boldsymbol{\mu}},\end{cases} \tag{1}\]
where \(\mathbf{u}_{\boldsymbol{\mu}}:[0,T)\rightarrow\mathbb{R}^{N^{h}_{\boldsymbol{\mu}}}\) is the parametric solution to (1), while
\[\mathbf{g}_{\boldsymbol{\mu}}\in\mathbb{R}^{N^{h}_{\boldsymbol{\mu}}}\quad\text{and}\quad\mathbf{f}(\cdot,\cdot,\boldsymbol{\mu}):(0,T)\times\mathbb{R}^{N^{h}_{\boldsymbol{\mu}}}\rightarrow\mathbb{R}^{N^{h}_{\boldsymbol{\mu}}}\]
are the initial condition and a - possibly nonlinear - function encoding the dynamics of the system, respectively. The FOM dimension, \(N^{h}_{\boldsymbol{\mu}}\), is related to the finite dimensional subspaces introduced for the sake of space discretization - here \(h>0\) denotes a discretization parameter, such as the maximum diameter of the elements in the computational mesh \(\mathcal{M}^{h}_{\boldsymbol{\mu}}\); consequently, \(N^{h}_{\boldsymbol{\mu}}\) can be extremely large if the PDE problem describes complex physical behaviors and/or high degrees of accuracy are required for its solution. Furthermore, the number of degrees of freedom (dofs) of the problem may depend on the geometrical parameters contained in \(\boldsymbol{\mu}\) since, by modifying their values, the number of vertices in the computational mesh \(\mathcal{M}^{h}_{\boldsymbol{\mu}}\) can vary. We thus aim at approximating the set
\[\mathcal{S}=\{\mathbf{u}_{\boldsymbol{\mu}}(t)\,|\;t\in[0,T),\;\boldsymbol{\mu}\in\Theta\subset\mathbb{R}^{p}\}\subset\bigcup_{\boldsymbol{\mu}\in\Theta}\mathbb{R}^{N^{h}_{\boldsymbol{\mu}}} \tag{2}\]
of the solutions to (1) when \((t;\boldsymbol{\mu})\) varies in \([0,T)\times\Theta\), also referred to as the solution manifold.

In order to numerically approximate problem (1), even at the FOM level, one must rely on suitable time-integration schemes, such as the backward differentiation formulae [36]. Thus, having fixed a uniform partition of \((0,T)\) in \(N_{t}\) equally spaced subintervals, and denoting by \(\mathbf{u}^{n}\) the solution \(\mathbf{u}\) at time \(t^{n}=n\Delta t\), where \(\Delta t:=T/N_{t}\), our ultimate aim is to solve:
\[\begin{cases}\dfrac{\mathbf{u}_{\boldsymbol{\mu}}^{n+1}-\mathbf{u}_{\boldsymbol{\mu}}^{n}}{\Delta t}=\mathbf{f}\left(t^{n+1},\mathbf{u}_{\boldsymbol{\mu}}^{n+1},\boldsymbol{\mu}\right),\qquad n\geq 0,\\ \mathbf{u}_{\boldsymbol{\mu}}^{0}=\mathbf{g}_{\boldsymbol{\mu}}.\end{cases} \tag{3}\]
Equation (3) requires the solution, at each time instance, of a nonlinear system depending on the input parameter vector \(\boldsymbol{\mu}\), which may entail high computational times, especially when dealing with a multi-query or real-time context. To achieve computational efficiency, multi-query analyses and real-time problems must rely on suitable surrogate models, which can be built according to different strategies. Motivated by this, our goal is the efficient approximation of the solution manifold in (2), decreasing the complexity related to the solution of the FOM while preserving a high level of accuracy. In this work, we introduce a Deep Learning-based surrogate model that exploits graph neural networks (GNNs) [17] to efficiently evolve the time-discrete dynamical system in (3).
Here, the use of GNNs is motivated by their unique ability to handle data defined on different graphs/meshes, which can result in extremely flexible models capable of generalizing to new, unseen, geometries and spatial resolutions (in the Deep Learning literature, this fact is usually referred to as relational inductive bias [3]). In mathematical terms, we aim at constructing a GNN architecture \(\Phi\) such that: \[\mathbf{u}_{\boldsymbol{\mu}}^{n+1}\approx\Phi(\mathbf{u}_{\boldsymbol{\mu}}^{n},t^{n},\boldsymbol{\mu}).\] From an abstract point of view, the above can be seen as an extension of the MeshGraphNet model as originally proposed by Pfaff et al. [34]: here, in fact, the time variable is included explicitly, which allows us to address the more general case of nonautonomous systems.

## 3 Graph Neural Networks

GNNs were initially conceived as an extension of convolutional neural networks (CNNs) to operate on graph-structured data and overcome their limitations in this domain. In graph theory, graphs are used to describe systems made of nodes and their connections (edges). An image can be regarded as a graph with regular and well-organized connections in the Euclidean space, where each pixel corresponds to a node in the graph. In this particular case, the aforementioned CNN architectures can exploit the peculiar structure of the graph to extract meaningful spatial features. However, these models become inapplicable as soon as the structure of the underlying graph becomes slightly more sophisticated: in practice, this fact has remarkable consequences as in many real-world applications (e.g., traffic networks, social networks) data are naturally defined over general, possibly non-Euclidean, graphs. To promote the use of Deep Learning in those applications, GNNs were developed to extract spatial features over general graphs, by inspecting neighboring nodes, with arbitrary connections in a non-Euclidean space [31, 47]. Thus, GNNs can be considered as a generalized version of CNNs over general graphs. Clearly, this generalization comes with some differences between the two architectures. For instance, while the outputs of CNNs are affected by the ordering of the pixels, the same is not true for GNNs, as the action of the latter is uniquely determined by the connectivity of the graph (in this regard, note, for instance, that the connectivity of an image remains the same even if we flip it either vertically or horizontally). Moreover, GNNs adopt a _graph-in, graph-out_ architecture, meaning that these model types accept a graph as input, with information loaded into its nodes and edges, and progressively transform these embeddings, without changing the connectivity of the input graph: in contrast, CNN layers usually modify the resolution of the input upon their action.

The fundamental ingredient of a basic GNN layer is the so-called _message passing_ operation, which enables the aggregation of node information while leveraging the depth of the graph. More precisely, a message-passing step consists of two components:

* message computation: each node creates a message to be sent to other nodes later;
* aggregation: each node aggregates the messages from the neighborhood.

This message-passing propagation can be seen as an information retrieval task from different levels of depth of the graph. A simple visualization of the message propagation is shown in Figure 1. For each node, the information comes from the neighbors.
In this way, adding message-passing steps can be seen as connecting nodes that can also be far from each other. Graph-based algorithms, such as, e.g., graph convolutional networks [26], GraphSage [18], graph attention networks [46], the graph transformer operator [43] and interaction networks [2], differ in the way the message is computed and the aggregation is performed. In particular, depending on the chosen framework, the message-passing step may also involve the edges of the graph, where a corresponding set of _edge features_ can be loaded: in the next few pages, we shall describe this situation in full mathematical detail, as it will be of key importance for our construction.

Figure 1: Message propagation and aggregation. The information is broadcasted from different levels of depth of the graph. For each node, at each message-passing step, the information is collected from the neighbors and aggregated. In this way, adding message-passing steps can be seen as connecting nodes which can also be far from each other. Figure courtesy of Phillip Lippe (University of Amsterdam, QUVA lab).

In order to perform a message-passing operation, GNNs leverage suitable data structures for representing the topology and connectivity of the graph. In this concern, a classical choice is to exploit the edge connectivity matrix, that is, an \(n_{edges}\times 2\) matrix where each row \(k\) contains the indices of the source and destination nodes of the \(k\)th edge; this allows GNNs to store the overall topology of the graph with a memory complexity of \(\mathcal{O}(n_{edges})\). Roughly speaking, this is equivalent to storing a sparse version of the adjacency matrix of the graph, which, in principle, consists of \(\mathcal{O}(n_{nodes}^{2})\) entries.

Before coming to our own use of GNNs for surrogate modeling, within this Section we take the chance to present some of the fundamental ingredients required for our construction. In particular, we shall describe in mathematical terms the concept of message-passing, and we shall introduce a particular GNN architecture known as the Encoder-Processor-Decoder model.

### The message-passing block: formal definition

Given \(l\in\mathbb{N}\), a graph-forward-pass with \(l\) hidden features is a computational unit \(F=F(\mathbf{v},\mathbf{e},G)\) that takes as input:

1. a directed graph structure, \(G=(V,E)\);
2. a collection of vertex features, \(\mathbf{v}:V\to\mathbb{R}^{l}\);
3. a collection of edge features, \(\mathbf{e}:E\to\mathbb{R}^{l}\);

and outputs a new collection of vertex features with \(l\) features per node, namely \[F(\mathbf{v},\mathbf{e},G):V\to\mathbb{R}^{l}.\] We think of \(F\) as an object that transforms the vertex features associated with the nodes in the graph. In GNN architectures, a message-passing block is a particular type of graph-forward-pass routine that exploits the local structure of the input graph \(G\), only allowing communication between nearby nodes. Specifically, a message-passing block \(F\) is comprised of two Multi-Layer Perceptron [29] (MLP) units, \[\psi_{v}:\mathbb{R}^{2l}\to\mathbb{R}^{l}\quad\text{and}\quad\psi_{e}:\mathbb{R}^{3l}\to\mathbb{R}^{l},\] that completely characterize the action of \(F\). However, in order to properly explain how the forward pass is carried out, we first need to introduce some notation.
Given a graph \(G=(V,E)\) and a collection of vertex features \(\mathbf{v}:V\to\mathbb{R}^{l}\), we write \(\mathbf{v}_{\text{in}}\) and \(\mathbf{v}_{\text{out}}\) for the maps \[\mathbf{v}_{\text{in}}:E\to\mathbb{R}^{l}\qquad\mathbf{v}_{\text{out}}:E\to\mathbb{R}^{l}\] given by \[\mathbf{v}_{\text{in}}(v_{1},v_{2}):=\mathbf{v}(v_{1}),\quad\mathbf{v}_{\text{out}}(v_{1},v_{2}):=\mathbf{v}(v_{2}),\] respectively, where \((v_{1},v_{2})\in E\) represents an oriented edge going from \(v_{1}\) to \(v_{2}\). In other words, passing from \(\mathbf{v}\) to \(\mathbf{v}_{\text{in}}\) is equivalent to transferring the information from the nodes to the edges, with the convention that a given edge inherits the features from its own _source node_. Similarly, going from \(\mathbf{v}\) to \(\mathbf{v}_{\text{out}}\) is a way of storing the information about the _destination nodes_. In the same spirit, it is also useful to define the dual operation, which transfers information from the edges to the nodes. More precisely, given \(\mathbf{e}:E\to\mathbb{R}^{l}\), we shall write \(\overline{\mathbf{e}}\) for the map \(\overline{\mathbf{e}}:V\to\mathbb{R}^{l}\) defined as \[\overline{\mathbf{e}}(v):=\sum_{(v_{1},v)\in E}\mathbf{e}(v_{1},v),\] that is, to go from \(\mathbf{e}\) to \(\overline{\mathbf{e}}\), we collapse all the features corresponding to edges with the same destination node. Lastly, we shall denote by \(\oplus\) the concatenation operator. Specifically, given any two functions with a common domain, e.g., \(\mathbf{f}:X\to\mathbb{R}^{a}\) and \(\mathbf{g}:X\to\mathbb{R}^{b}\), we write \(\mathbf{f}\oplus\mathbf{g}\) to denote the map from \(X\) to \(\mathbb{R}^{a+b}\) given by \[\mathbf{f}\oplus\mathbf{g}(x):=[f_{1}(x),\ldots,f_{a}(x),g_{1}(x),\ldots,g_{b}(x)],\] where \(x\in X\) is a generic input, while \(f_{i}\) and \(g_{j}\) are the \(i\)th and \(j\)th components at the output of \(\mathbf{f}\) and \(\mathbf{g}\), respectively.

We now have all the ingredients to rigorously define the forward-pass of a message-passing block. The action of a message-passing block \(F\) with \(l\) hidden features and computational units \(\psi_{v},\psi_{e}\) is defined as \[F(\mathbf{v},\mathbf{e},G)=\psi_{v}\circ\left(\mathbf{v}\oplus\overline{\psi_{e}\circ(\mathbf{e}\oplus\mathbf{v}_{\text{in}}\oplus\mathbf{v}_{\text{out}})}\right), \tag{4}\] where, as usual, \(\circ\) denotes functional composition. In plain words, Eq. (4) states that the vertex features at output, \(F(\mathbf{v},\mathbf{e},G)\), are obtained as follows: first, the information available in the graph vertices is transferred to the edges and concatenated with the existing features, \(\mathbf{e}\oplus\mathbf{v}_{\text{in}}\oplus\mathbf{v}_{\text{out}}\); then, an MLP, \(\psi_{e}\), is applied to the extended features to extract meaningful information; the latter is then transferred back to the graph vertices, yielding \(\overline{\psi_{e}\circ(\mathbf{e}\oplus\mathbf{v}_{\text{in}}\oplus\mathbf{v}_{\text{out}})}\). These hidden features - which now live on the graph vertices - are then appended to the original ones and later fed to a terminal MLP block, here given by \(\psi_{v}\). In general, we remark that the action of a message-passing step \((\mathbf{v},\mathbf{e},G)\mapsto F(\mathbf{v},\mathbf{e},G)\) is nonlinear because of the two MLPs, \(\psi_{v}\) and \(\psi_{e}\), entering the pipeline.
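As a concrete illustration, a minimal PyTorch sketch of the message-passing block of Eq. (4) is given below; the two-layer MLPs and the SiLU activation mirror the choices later reported in Table 1, but the exact depths and widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MessagePassingBlock(nn.Module):
    """Minimal sketch of the message-passing block of Eq. (4)."""

    def __init__(self, l: int):
        super().__init__()
        # psi_e: R^{3l} -> R^l, acting on  e ⊕ v_in ⊕ v_out
        self.psi_e = nn.Sequential(nn.Linear(3 * l, l), nn.SiLU(), nn.Linear(l, l))
        # psi_v: R^{2l} -> R^l, acting on  v ⊕ (aggregated messages)
        self.psi_v = nn.Sequential(nn.Linear(2 * l, l), nn.SiLU(), nn.Linear(l, l))

    def forward(self, v, e, edge_index):
        # v: (n_nodes, l) node features; e: (n_edges, l) edge features;
        # edge_index: (n_edges, 2) rows of (source, destination) indices.
        src, dst = edge_index[:, 0], edge_index[:, 1]
        msg = self.psi_e(torch.cat([e, v[src], v[dst]], dim=-1))  # per-edge messages
        agg = torch.zeros_like(v)
        agg.index_add_(0, dst, msg)                   # sum over incoming edges (the "bar" map)
        return self.psi_v(torch.cat([v, agg], dim=-1))  # Eq. (4)
```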
**Remark 1**: _Note that the operations \(\mathbf{v}\mapsto\mathbf{v}_{\text{in}}\), \(\mathbf{v}\mapsto\mathbf{v}_{\text{out}}\) and \(\mathbf{e}\mapsto\overline{\mathbf{e}}\) require an exact knowledge of the graph structure \(G\). Here, this fact is left implicit to keep the notation lighter._

**Remark 2**: _In the literature, GNNs have been defined in several ways. One major difference lies in that some authors only talk about "node features", without contemplating the existence of "edge features". Here, we are adopting one of the most recent formulations of GNNs, as proposed by Battaglia et al. [3] Nonetheless, we believe that finding connections between different definitions can enhance practical understanding. To this end, we mention that in the classical formulation by Scarselli et al. (that is, without edge features), a major role is played by the "aggregation step", in which information coming from neighbouring nodes is collapsed onto a single value, e.g. via summation (see Equation 3 in the original work by Scarselli et al. [42]). Here, the same effect can be obtained via \(\mathbf{v}\mapsto\overline{\mathbf{v}_{\text{in}}\oplus\mathbf{v}_{\text{out}}}-(\mathbf{n}-1)\cdot\mathbf{v}\), where \(\mathbf{n}:V\to\mathbb{N}\) is a feature map that returns the connectivity of each node, whereas "\(\cdot\)" stands for pairwise multiplication._

### The Encoder-Processor-Decoder model

The Encoder-Processor-Decoder model is a powerful GNN-based architecture that can process mesh-based data [3, 34, 41]. More precisely, the latter accepts as input:

* a directed graph \(G=(V,E)\) associated to some mesh \(\mathcal{M}\) embedded in a suitable ambient space \(\mathbb{R}^{d}\), so that \(V\subset\mathbb{R}^{d}\);
* an input signal defined over the mesh vertices, namely \(\mathbf{u}:V\to\mathbb{R}^{q}\);
* a global feature vector, \(\xi\in\mathbb{R}^{s}\), describing a given nonspatial property of the system (e.g., time).

Then, the output of such a model is a new signal \(\mathbf{u}^{\prime}:V\to\mathbb{R}^{q^{\prime}}\) defined over the given mesh. As the name suggests, the Encoder-Processor-Decoder model is comprised of three modules, which we explain in detail below. These are all characterized by a common hidden dimension, \(l\in\mathbb{N}\), which we assume to be fixed from here on.

#### 3.2.1 Encoder module

The encoder module is used to preprocess the input data and return a collection of hidden features defined, respectively, over the graph vertices, \(\mathcal{E}_{v}=\mathcal{E}_{v}(\mathbf{u},\xi,G)\), and the graph edges, \(\mathcal{E}_{e}=\mathcal{E}_{e}(G)\). The two are obtained as follows. The node features \(\mathcal{E}_{v}(\mathbf{u},\xi,G)\) are computed by combining a fixed nonlearnable transformation together with an MLP unit \(\Psi^{v}_{\mathcal{E}}:\mathbb{R}^{q+s+1}\rightarrow\mathbb{R}^{l}\) that maps onto the hidden-state space. The former has the purpose of expanding the node features with information coming from the global variables, \(\xi\), and the graph \(G\). More precisely, let \(\mathbf{b}_{G}:V\rightarrow\{0,1\}\) be a flag for those nodes that lie on the boundary of the mesh, i.e., \(\mathbf{b}_{G}(v)=1\) if and only if \(v\) is a boundary vertex. Then, the action of \(\mathcal{E}_{v}\) reads \[\mathcal{E}_{v}(\mathbf{u},\xi,G):=\Psi^{v}_{\mathcal{E}}\circ(\mathbf{u}\oplus\xi\oplus\mathbf{b}_{G}), \tag{5}\] so that \(\mathcal{E}_{v}(\mathbf{u},\xi,G):V\rightarrow\mathbb{R}^{l}\).
Here, with little abuse of notation, we have identified the vector \(\xi\) with a constant map defined over \(V\). As we mentioned, the preliminary transformation \(\mathbf{u}\mapsto\mathbf{u}\oplus\xi\oplus\mathbf{b}_{G}\) is nonlearnable and has the sole purpose of augmenting the nodal features; conversely, the MLP unit introduces a learnable block that is optimized during training.

The edge features \(\mathcal{E}_{e}(G)\) are computed following similar ideas. First, a set of nonlearnable features \(e_{G}:E\rightarrow\mathbb{R}^{d+1}\) is extracted starting from the mesh coordinates. This is achieved by letting \[e_{G}(\mathbf{x}_{1},\mathbf{x}_{2}):=\left[\frac{x_{1}^{(1)}+x_{2}^{(1)}}{2},\ldots,\frac{x_{1}^{(d)}+x_{2}^{(d)}}{2},\,|\mathbf{x}_{1}-\mathbf{x}_{2}|\right],\] where \((\mathbf{x}_{1},\mathbf{x}_{2})\in E\). In other words, \(e_{G}\) maps each edge to a vector containing the coordinates of its midpoint together with the edge length. These preliminary features are then fed to an MLP \(\Psi^{e}_{\mathcal{E}}:\mathbb{R}^{d+1}\rightarrow\mathbb{R}^{l}\), i.e. \[\mathcal{E}_{e}(G):=\Psi^{e}_{\mathcal{E}}\circ e_{G}, \tag{6}\] which returns the encoded edge features.

#### 3.2.2 Processor module

The encoded features, \(\mathbf{v}:=\mathcal{E}_{v}(\mathbf{u},\xi,G)\) and \(\mathbf{e}:=\mathcal{E}_{e}(G)\), are then elaborated by a GNN-based unit, called the processor \(\mathcal{P}\). The latter consists of \(m\) message-passing-blocks, \(F_{1},\ldots,F_{m}\), each one acting as in (4). More precisely, the output of the processor module is given by \[\mathcal{P}(\mathbf{v},\mathbf{e},G):=F_{m}(\mathbf{h}_{m},\mathbf{e},G), \tag{7}\] where \[\begin{cases}\mathbf{h}_{1}=\mathbf{v}\\ \mathbf{h}_{j+1}=F_{j}(\mathbf{h}_{j},\mathbf{e},G)\end{cases}\quad j=1,\ldots,m-1,\] so that the final output is obtained by applying the blocks \(F_{1},\ldots,F_{m}\) iteratively. We highlight how each message-passing step transforms the node features but not the edge features. We also point out that, since a single message-passing block allows neighbouring nodes to exchange information, a processor module with \(m\) units allows communication between nodes that are up to \(m\) edges apart. Thus, by changing the number of message-passing steps, one can move from local to nonlocal transforms (with the latter possibly being more expressive). However, we must also mention that large values of \(m\) may give rise to oversmoothing phenomena [40], which is why, in practice, a suitable compromise is required.

#### 3.2.3 Decoder module

In the end, the processor outputs some collection of node features \(\mathbf{v}^{\prime}:=\mathcal{P}(\mathbf{v},\mathbf{e},G)\), with \(\mathbf{v}^{\prime}:V\rightarrow\mathbb{R}^{l}\). At this point, a terminal module, called the decoder, is exploited to recover the desired output. Here, we assume the latter to be consistent with the input signal \(\mathbf{u}\), and thus consist of \(q\) nodal features. In practice, this is achieved by relying on a suitable MLP unit \(\Psi_{\mathcal{D}}:\mathbb{R}^{l}\rightarrow\mathbb{R}^{q}\), which transforms the original \(l\) features onto the \(q\) desired ones. In other words, the decoder module operates nodewise, and its action can be written as \[\mathcal{D}(\mathbf{v}^{\prime}):=\Psi_{\mathcal{D}}\circ\mathbf{v}^{\prime}. \tag{8}\]
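For concreteness, the nonlearnable part of the encoder, i.e. the inputs \(\mathbf{u}\oplus\xi\oplus\mathbf{b}_{G}\) and \(e_{G}\) of Eqs. (5)-(6), can be assembled as in the following PyTorch sketch; the boundary flag is assumed to be precomputed from the mesh, and all variable names are illustrative.

```python
import torch

def encoder_inputs(u, xi, coords, edge_index, boundary_mask):
    """Assemble the nonlearnable encoder inputs of Eqs. (5)-(6).

    u: (n_nodes, q) nodal signal; xi: (s,) global features;
    coords: (n_nodes, d) vertex coordinates; edge_index: (n_edges, 2);
    boundary_mask: (n_nodes,) 0/1 flag b_G, assumed precomputed from the mesh.
    """
    n = u.shape[0]
    # Node input: u ⊕ xi ⊕ b_G, with xi broadcast as a constant map over V.
    node_in = torch.cat([u, xi.expand(n, -1), boundary_mask.view(n, 1)], dim=-1)
    # Edge input e_G: midpoint coordinates and edge length.
    x1, x2 = coords[edge_index[:, 0]], coords[edge_index[:, 1]]
    edge_in = torch.cat([(x1 + x2) / 2,
                         (x1 - x2).norm(dim=-1, keepdim=True)], dim=-1)
    return node_in, edge_in  # to be fed to the MLPs of Eqs. (5)-(6)
```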
#### 3.2.4 Overall architecture

To summarize, the computational workflow of an Encoder-Processor-Decoder model reads \[\Phi(\mathbf{u},\xi,G):=\mathcal{D}(\mathcal{P}(\mathcal{E}_{v}(\mathbf{u},\xi,G),\mathcal{E}_{e}(G),G)). \tag{9}\] The reader can also find a visual depiction of Eq. (9) in Figure 2. Since the notation might be troublesome, we remark that the output of the Encoder-Processor-Decoder, \(\mathbf{\phi}:=\Phi(\mathbf{u},\xi,G)\), is nothing but a collection of \(q\)-dimensional node features \(\mathbf{\phi}:V\to\mathbb{R}^{q}\).

Figure 2: Visual representation of the Encoder-Processor-Decoder model, Section 3.2. Rigid arrows represent algorithmic computations (learnable and nonlearnable), while dashed arrows act as pointers (no computation implied). In gray, the encoder module, Eqs. (5)-(6); in blue, the message-passing-blocks defining the processor unit, Eq. (7); in green, the decoder module, Eq. (8).

**Remark 3**: _In the literature of surrogate and reduced order modeling, the words "Encoder" and "Decoder" are often associated with the concept of dimensionality reduction, where the two objects operate, respectively, to achieve data compression and reconstruction. Here, however, the meaning is completely different. The encoder module acts as a feature extractor and, in general, may increase the dimension of the input; conversely, the decoder module is used to map the nodal feature space onto the nodal output space (usually decreasing the dimension at each node), so as to recover the quantities of interest. While the notation might be confusing to some of the readers, we have decided to stick to the one adopted by the GNN community [34, 41]._

## 4 Application to surrogate modeling of parametrized PDEs

Our goal is to predict an approximate solution \(\bar{\mathbf{u}}\) at time \(t^{n+1}\), given the state of the system at time \(t^{n}\), for each node \(i=1,\ldots,N_{\mathbf{\mu}}^{h}\) of the computational mesh \(\mathcal{M}_{\mathbf{\mu}}^{h}\), that is \[\Phi(\mathbf{u}_{\mathbf{\mu}}^{n},t^{n},\mathbf{\mu})\approx\mathbf{u}_{\mathbf{\mu}}^{n+1}.\] Inspired by the general form of explicit Runge-Kutta methods, we model the time-stepping scheme \(\Phi\) by letting \[\Phi(\mathbf{v},t^{n},\mathbf{\mu}):=\mathbf{v}+\Delta t\,\tilde{\Phi}(\mathbf{v},t^{n},\mathcal{M}_{\mathbf{\mu}}^{h}), \tag{10}\] where \(\tilde{\Phi}\) is a GNN architecture based on the Encoder-Processor-Decoder paradigm. Here, with little abuse of notation, we are identifying the computational mesh \(\mathcal{M}_{\mathbf{\mu}}^{h}\) with its underlying graph \(G_{\mathbf{\mu}}=(V_{\mathbf{\mu}},E_{\mathbf{\mu}})\), and the dof vector \(\mathbf{u}_{\mathbf{\mu}}^{n}\in\mathbb{R}^{N_{\mathbf{\mu}}^{h}}\) with its corresponding vertex feature map \(\mathbf{u}_{\mathbf{\mu}}^{n}:V_{\mathbf{\mu}}\to\mathbb{R}\). We also remark that in Eq. (10), the GNN model is made aware of the current time instant: in fact, according to our notation in Section 3.2, the latter is being interpreted as a global feature \(\xi=t^{n}\). Therefore, the proposed approach aims at modeling the time-stepping scheme using an Encoder-Processor-Decoder architecture, \(\tilde{\Phi}\), that incorporates both the nodal features at a specific time instance \(t^{n}\) and the geometrical features of the mesh. The architecture aggregates information from neighboring nodes, processes it, and decodes the system solution at time \(t^{n+1}\); a minimal sketch of this update is given below.
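The following PyTorch sketch assembles the full pipeline of Eq. (9) and the resulting time-stepping scheme of Eq. (10); it reuses `MessagePassingBlock` and `encoder_inputs` from the sketches above, and all dimensions, names, and the mesh dictionary are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class EncodeProcessDecode(nn.Module):
    """Sketch of the GNN Φ̃ of Eq. (10): encoder (5)-(6), processor (7), decoder (8)."""

    def __init__(self, q=1, s=1, d=2, l=32, m=12):
        super().__init__()
        self.enc_v = nn.Sequential(nn.Linear(q + s + 1, l), nn.SiLU(), nn.Linear(l, l))
        self.enc_e = nn.Sequential(nn.Linear(d + 1, l), nn.SiLU(), nn.Linear(l, l))
        self.blocks = nn.ModuleList([MessagePassingBlock(l) for _ in range(m)])
        self.dec = nn.Sequential(nn.Linear(l, l), nn.SiLU(), nn.Linear(l, q))

    def forward(self, u, xi, coords, edge_index, boundary_mask):
        node_in, edge_in = encoder_inputs(u, xi, coords, edge_index, boundary_mask)
        h, e = self.enc_v(node_in), self.enc_e(edge_in)
        for block in self.blocks:        # Eq. (7): edge features stay fixed
            h = block(h, e, edge_index)
        return self.dec(h)               # Eq. (8), applied nodewise

def step(model, u_n, t_n, mesh, dt):
    """One update of Eq. (10): u^{n+1} ≈ u^n + Δt · Φ̃(u^n, t^n, M_h)."""
    xi = torch.tensor([float(t_n)])
    return u_n + dt * model(u_n, xi, mesh["coords"],
                            mesh["edge_index"], mesh["boundary"])
```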
This allows us to evaluate Equation (10) independently of the number of dofs, while simultaneously accounting for the graph structure of the mesh. In particular, this makes it possible to train the model using a variety of different geometries and subsequently predict solutions for new meshes that were not included in the training data. This flexibility is guaranteed by the relational inductive bias of GNNs, which ultimately comes from the message-passing paradigm: the model first computes the messages between neighboring nodes, and then performs a suitable aggregation of the information. In contrast, models based on FFNNs and CNNs are constrained by the number of nodes in the computational mesh, which prevents them from generalizing to different domains. The same issue is also encountered by other architectures, such as Mesh-Informed Neural Networks [11] (MINNs): in fact, even though MINNs can handle very complicated geometries at reduced training times, their implementation requires fixing the shape of the spatial domain and the resolution of the space discretization. In this sense, the additional flexibility provided by GNNs is extremely valuable.

### Training and testing algorithms

From an operational point of view, the GNN model \(\tilde{\Phi}\) in (10) is trained on a suitable dataset of FOM solutions that serves as a ground truth reference. More precisely, after having constructed and initialized the GNN model, we exploit the FOM solver to generate a collection of training snapshots, \[\{\boldsymbol{\mu}_{i},\mathbf{u}^{0}_{\boldsymbol{\mu}_{i}},\ldots,\mathbf{u}^{N_{t}}_{\boldsymbol{\mu}_{i}}\}_{i=1}^{N_{\text{train}}},\] containing a total of \(N_{\text{train}}\) different trajectories, each corresponding to a different geometrical configuration \(\boldsymbol{\mu}\). For the sake of simplicity, we assume that all the trajectories consist of \(N_{t}\) snapshots in time: however, this assumption is not fundamental to our construction and it can easily be dropped. Let \(\boldsymbol{\theta}\) be the vector collecting all the parameters of the GNN module. To emphasize the dependency of the latter on \(\boldsymbol{\theta}\), let us write \(\tilde{\Phi}_{\boldsymbol{\theta}}\) in place of \(\tilde{\Phi}\). We train the GNN architecture by minimizing the loss function below
\[\begin{split}\mathcal{L}(\boldsymbol{\theta})=& cw_{1}\sum_{i=1}^{N_{\text{train}}}\sum_{n=0}^{N_{t}-1}|\mathbf{u}^{n+1}_{\boldsymbol{\mu}_{i}}-\mathbf{u}^{n}_{\boldsymbol{\mu}_{i}}-\Delta t\tilde{\Phi}_{\boldsymbol{\theta}}(\mathbf{u}^{n}_{\boldsymbol{\mu}_{i}},t^{n},\mathcal{M}^{h}_{\boldsymbol{\mu}_{i}})|^{2}+\\ & cw_{2}\sum_{i=1}^{N_{\text{train}}}\sum_{n=0}^{N_{t}-1}|\tilde{\mathbf{u}}^{n}_{\boldsymbol{\mu}_{i}}-\tilde{\Phi}_{\boldsymbol{\theta}}(\mathbf{u}^{n}_{\boldsymbol{\mu}_{i}},t^{n},\mathcal{M}^{h}_{\boldsymbol{\mu}_{i}})|^{2},\end{split} \tag{11}\]
where \(c=1/(N_{\text{train}}N_{t})\) is a normalizing factor, whereas \(w_{1}\) and \(w_{2}\) are suitable hyperparameters to be tuned manually. The term \(\tilde{\mathbf{u}}^{n}_{\boldsymbol{\mu}_{i}}\), instead, refers to a suitable finite-difference approximation of the ground truth time-derivative (e.g., computed by relying either on the forward or backward formulae). The loss function in (11) is made of two contributions: the first quantifies the error of the time-stepping scheme after a single iteration; the second links the FOM time-derivative with the output of the GNN model.
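In code, a single-trajectory version of (11) could be sketched as follows, up to normalization (mean squared residuals replace the squared norms); the `model` interface and mesh dictionary follow the previous sketches and are illustrative.

```python
import torch

def training_loss(model, traj, t_grid, mesh, dt, w1=0.5, w2=0.5):
    """Sketch of the two-term loss of Eq. (11) for a single trajectory.

    traj: (N_t + 1, n_nodes, q) FOM snapshots on one geometry `mesh`.
    """
    u_n, u_np1 = traj[:-1], traj[1:]
    u_dot = (u_np1 - u_n) / dt            # forward-difference time-derivative
    loss = 0.0
    for n in range(u_n.shape[0]):
        phi = model(u_n[n], torch.tensor([float(t_grid[n])]), mesh["coords"],
                    mesh["edge_index"], mesh["boundary"])
        loss = loss + w1 * ((u_np1[n] - u_n[n] - dt * phi) ** 2).mean()  # one-step error
        loss = loss + w2 * ((u_dot[n] - phi) ** 2).mean()                # derivative error
    return loss / u_n.shape[0]
```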
In particular, we do not rely on full rollouts or any other form of recursive training: this allows us to fully exploit the capabilities of GPU tensor calculus and mitigate memory usage. Clearly, the downside to this is that, even after a successful training, our GNN model might be subject to error propagation when advancing in time multiple times. To limit this issue and ensure robust rollouts, we exploit the following strategies. At each epoch, that is, at each iteration of the optimization routine:

* we do not directly optimize (11), but rather rely on randomly selected mini-batches;
* we jitter the input data with random Gaussian noise, so as to limit the sensitivity of \(\tilde{\Phi}\) and to enhance the stability of the rollouts at prediction.

In practice, the optimization of the GNN model is carried out by relying on back-propagation [39] and ADAM [25], with a variable learning rate that we decrease by a factor \(\gamma>0\) after a specific number of epochs (see Algorithm 1). In general, the training of the GNN model can be carried out iteratively until a stopping criterion is met. For instance, one may simply stop the training after a predefined number of epochs, see, e.g., Algorithm 1.

```
Input: network Φ̃; timestep Δt; a list of N_train training trajectories U (each of length N_t); a list of edge connectivity matrices E; a list of edge feature matrices W; a list of inner nodes I; learning rate ν; decay factor γ; maximum number of epochs max_epoch; batch size N_b; noise variance σ².
Output: optimal model parameters θ*.
 1: epoch = 0.
 2: Randomly initialize θ⁰.
 3: while epoch < max_epoch do
 4:   Create the list indices = [1, ..., N_train] and shuffle it randomly.
 5:   for sim in indices do
 6:     U_sim = U[sim], U_sim ∈ R^(N_t × N_h × q), where N_t is the total number of time instances, N_h = N_h(μ_sim) is the number of mesh dofs and q is the number of node features.
 7:     E_sim = E[sim], E_sim ∈ R^(N_edge × 2).
 8:     W_sim = W[sim], W_sim ∈ R^(N_edge × N_e), where N_e is the number of edge features.
 9:     I_sim = I[sim], I_sim ∈ R^(N_h) with I_sim[i] = 1 if node i is an inner node, 0 otherwise.
10:     b = 0.
11:     while b < N_t do
12:       U_b = U_sim[b : b + N_b].
13:       Create noise tensor Σ = σZ, where Z ∈ R^(N_b × N_I × q) is a random Gaussian tensor and N_I is the number of inner nodes.
14:       Initialize U_noise = U_b.
15:       U_noise[:, I_sim] += Σ.
16:       Calculate target derivative U_dot = (U_b[1:] − U_noise[:−1]) / Δt.
17:       Make a forward pass through the network Φ̃(U_noise, E_sim, W_sim).
18:       Calculate network solution U_net = U_noise[:−1] + Δt Φ̃.
19:       Calculate training loss L_b.
20:       Back-propagation through the net and parameter update: θ¹ = ADAM(ν, θ⁰).
21:       θ⁰ = θ¹.
22:       b ← b + N_b.
23:     end while
24:   end for
25:   if mod(epoch, 500) = 0 then
26:     Reduce learning rate by a factor γ.
27:   end if
28:   epoch ← epoch + 1.
29: end while
30: Set θ* equal to the last updated weights θ¹.
```
**Algorithm 1** Training Algorithm

Once the model has been trained and a suitable vector of parameters \(\boldsymbol{\theta}^{*}\) has been selected, the GNN is fully operational. That is, given any configuration of the geometric parameters \(\boldsymbol{\mu}\) and any initial condition \(\mathbf{u}_{\boldsymbol{\mu}}^{0}\), we can exploit the GNN model and (10) _online_ to evolve the system iteratively and produce a complete rollout \(\{\tilde{\mathbf{u}}_{\boldsymbol{\mu}}^{n}\}_{n=0}^{N_{t}}\), where \[\begin{cases}\tilde{\mathbf{u}}_{\boldsymbol{\mu}}^{n+1}=\Phi(\tilde{\mathbf{u}}_{\boldsymbol{\mu}}^{n},t^{n},\boldsymbol{\mu})\quad n\geq 0\\ \tilde{\mathbf{u}}_{\boldsymbol{\mu}}^{0}:=\mathbf{u}_{\boldsymbol{\mu}}^{0}.\end{cases} \tag{12}\] Here, to further improve stability, one may also enforce any external constraint, such as Dirichlet boundary conditions, at each time iteration. To test the quality of the GNN surrogate, we compare its predictions with those of the FOM for a set of new parameter instances. In particular, differently from the training stage, we now compare the overall trajectories and use the GNN to produce full rollouts of the solution. Quantitatively, we compute the prediction error as the relative MSE (RMSE) between the network prediction and the ground truth solution: \[RMSE(\tilde{\mathbf{u}}_{\boldsymbol{\mu}}^{1},\ldots,\tilde{\mathbf{u}}_{\boldsymbol{\mu}}^{N_{t}};\ \mathbf{u}_{\boldsymbol{\mu}}^{1},\ldots,\mathbf{u}_{\boldsymbol{\mu}}^{N_{t}})=\frac{1}{N_{t}}\sum_{n=1}^{N_{t}}\frac{|\tilde{\mathbf{u}}_{\boldsymbol{\mu}}^{n}-\mathbf{u}_{\boldsymbol{\mu}}^{n}|^{2}}{|\mathbf{u}_{\boldsymbol{\mu}}^{n}|^{2}}, \tag{13}\] where the GNN rollout is obtained as in (12).

## 5 Numerical experiments

In this Section, we assess the capabilities of the proposed approach over three advection-diffusion problems of increasing complexity:

* a scalar diffusion problem in a 2D square with a circular obstacle and a time-varying advection term;
* a 2D Stokes flow in proximity of a bump;
* a 3D Stokes flow around a cylinder.

All the examples are characterized by parameter-dependent spatial domains, where a given obstacle is allowed to move across the domain, with possible changes in terms of shape and dimension. In this way, we can effectively test the ability of GNNs to handle geometric variability.

### Advection-Diffusion problem in a square domain with a circular obstacle

To start, we consider the following advection-diffusion problem: \[\begin{cases}\dfrac{\partial u}{\partial t}-D\Delta u+\mathbf{b}\cdot\nabla u=0&\text{in }\Omega\times(0,T]\\ u(x,y)=(x-1)^{2}+(y-1)^{2}&\text{on }\partial\Omega\times(0,T]\\ u_{0}(x,y)=(x-1)^{2}+(y-1)^{2}&\text{in }\Omega,\end{cases} \tag{14}\] where \(\Omega=(0,1)^{2}\setminus C\), with \[C=\{(x,y):\ (x-c_{x})^{2}+(y-c_{y})^{2}\leq(0.15)^{2}\}.\] Here, we set \(T=2\), \(D=0.1\) and \(\mathbf{b}(t)=[1-t,1-t]\). In particular, due to the time-varying convection field, \(\mathbf{b}\), the resulting dynamical system can be regarded as nonautonomous. In our simulations, we parametrize the center of the circle as \[\boldsymbol{\mu}=(c_{x},c_{y})\in\Theta:=\{(x,y):\ 0<x<1,\ y\geq 0.5\},\] which we let vary as we generate the training data.
In agreement with Equations (1)-(3), the ground truth FOM simulations of Problem (14) are obtained by first discretizing in space via P1 Continuous Galerkin Finite Elements, and then in time using the Backward Euler method. The time step chosen is \(\Delta t=0.02\), resulting in 101 time snapshots for each simulation. We also mention that, following our notation in Section 3, we have \(q=1\), as the solutions to (14) are scalar fields.

#### 5.1.1 Problem data

We collected a dataset composed of 100 simulations, each obtained for a different position of the center of the obstacle, with a number of mesh nodes varying from 770 to 790. The training set is composed of 80 randomly selected simulations, while the remaining 20 are kept for testing. For what concerns the design of the GNN architecture and its training, we have reported a synthetic overview in Table 1. In particular, in this case, we adopt a simplified loss function that only features the approximation of the time-derivative; in other words, we set \(w_{1}=0\) and \(w_{2}=1\) in Equation (11).

#### 5.1.2 Numerical results

Results are in Table 2. As we can see, all the prediction RMSEs are of order \(10^{-3}\) to \(10^{-4}\). Moreover, our model significantly outperforms the ground truth solver in terms of simulation time at the testing stage. The dynamics of the problem are well predicted and no error propagation is observed. Hence, our model appears capable of solving problems concerning evolutionary PDEs, in that it can approximate multiple time steps in a stable way. Still, it is worth looking at some of the simulations obtained during the testing phase, so as to further appreciate the ability of the proposed approach to handle different geometric configurations. For instance, Figures 3 and 4 show two different GNN rollouts corresponding to two different positions of the obstacle. Despite these trajectories being different from the ones seen during training, the model manages to capture all the main features characterizing the solutions, such as the behavior near the obstacle and the direction of propagation. We highlight that a GNN-based approach follows a _local-to-global_ paradigm, first processing information at the node level (encoder), and then aggregating the output at the neighbour level (processor). Clearly, the lack of smoothness in PDE solutions can pose some challenges, as GNNs are known to struggle with capturing such properties. In this sense, it is not surprising to see that the prediction in Figure 4 is worse than the one in Figure 3. In fact, in the former case, the obstacle is closer to the corner of the spatial domain. Of note, we mention that the trajectory in Figure 4 is actually the worst across the whole test set. It is also interesting to see that the prediction error exhibits an oscillating trend. In fact, after a first increase, the accuracy appears to improve (\(t=0.5\) vs \(t=1.00\)), which is most likely caused by the presence of a diffusion phenomenon; then, however, the approximation deteriorates again due to the presence of the convection field, which pushes the errors either towards the obstacle (Figure 3) or the bottom boundary (Figure 4). We can further appreciate this phenomenon in Figure 5 (left), where we have synthesized the dynamics of the relative \(L^{2}\)-error. More precisely, the picture shows how the quality of the approximation changes over time: to account for the variability in the test set, both median and quartile curves are reported.
#### 5.1.3 The _message passing steps_ hyperparameter

Among all the hyperparameters, the one with the greatest influence on model quality is the number of message-passing steps. This number determines how far into the neighborhood we look when propagating the message. A small number of message-passing steps may result in underfitted areas of the mesh, while a large one slows down the training and inflates the number of parameters, possibly yielding overfitting. Here, we tuned this parameter via trial and error. A plot of the corresponding results can be seen in Figure 5 (bottom row). The test RMSE reaches a local minimum for \(m=8\) message-passing steps. This is the best choice if we want to keep control of the total number of network parameters, which in this case amounts to only 61,825. However, since the architecture obtained for \(m=12\) is still reasonably complex, we stick to the latter one. We do not proceed further as the improvement rate, in terms of \(m\), no longer justifies favoring a larger number of message-passing steps.

\begin{table} \begin{tabular}{|l|l l l l|l l l l l l|} \hline **Problem** & **MP** & **MLP** & \(l\) & **Activ** & **Epochs** & **Batch** & **lr** & \(\gamma\) & \(\sigma^{2}\) & \(w_{1}\) & \(w_{2}\) \\ & **steps** & **layers** & & & **(max)** & **size** & & & & \\ \hline **Adve-diff** & 12 & 2 & 32 & SiLU & 1500 & 25 & \(10^{-3}\) & 0.1 & \(10^{-6}\) & 0 & 1 \\ **Stokes 2D** & 18 & 2 & 32 & SiLU & 3000 & 25 & \(10^{-3}\) & 0.1 & \(10^{-6}\) & 0.5 & 0.5 \\ **Stokes 3D** & 15 & 2 & 32 & SiLU & 2000 & 25 & \(10^{-4}\) & 0.1 & \(10^{-5}\) & 0.5 & 0.5 \\ \hline \end{tabular} \end{table} Table 1: GNN architecture and training hyperparameters for the three case studies. MP = message passing, MLP layers = (common) depth of all the MLP units in the Encoder-Processor-Decoder pipeline, \(l\) = (local) feature space dimension, lr = learning rate, \(\gamma\) = learning rate decay factor (applied every 500 epochs), \(\sigma^{2}\) = noise variance, \(w_{i}\) = loss function weights. SiLU = Sigmoid weighted Linear Unit, \(x\to x/(1+\exp(-x))\).

Figure 4: Test case 1, Advection-Diffusion problem. Prediction obtained for \(\mathbf{\mu}=(0.25,0.75)\) with the obstacle on the top left corner. First row: rollout prediction. Second row: RMSE related to each time step between the prediction and the corresponding ground truth solution.

Figure 3: Test case 1, Advection-Diffusion problem. Prediction obtained for \(\mathbf{\mu}=(0.29,0.5)\) with the obstacle close to the source. First row: rollout prediction. Second row: RMSE related to each time step between the prediction and the corresponding ground truth solution.

#### 5.1.4 Generalization to obstacles with different dimensions

Problem (14) can also be extended to domains in which both the position and the dimension of the obstacle change. To this end, we modify our training dataset slightly by adding new simulations in which the obstacle has either a smaller or a larger radius. Mathematically speaking, this corresponds to considering an augmented parameter space where \(\boldsymbol{\mu}=(c_{x},c_{y},r)\in\Theta=\{(x,y):\,0<x<1,\,y\geq 0.5\}\times\{0.1,0.15,0.2\}\). Again, we test the model on new simulations with varying obstacle positions and dimensions.
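As a sketch of how such an augmented dataset could be assembled, the sampler below draws \(\boldsymbol{\mu}=(c_{x},c_{y},r)\) from the parameter space just described; `build_mesh` and `run_fom`, shown as comments, are hypothetical placeholders for the FEM pipeline, not actual APIs.

```python
import random

RADII = [0.10, 0.15, 0.20]  # admissible obstacle radii

def sample_parameters(rng):
    # Obstacle center in {0 < x < 1, y >= 0.5}, radius from the discrete set.
    c_x = rng.uniform(0.0, 1.0)
    c_y = rng.uniform(0.5, 1.0)
    return (c_x, c_y, rng.choice(RADII))

def generate_dataset(n_sims, seed=0):
    rng = random.Random(seed)
    dataset = []
    for _ in range(n_sims):
        mu = sample_parameters(rng)
        # mesh = build_mesh(mu)        # remesh for each geometry (placeholder)
        # snapshots = run_fom(mesh)    # 101 snapshots with dt = 0.02 (placeholder)
        dataset.append(mu)
    return dataset

print(generate_dataset(3))
```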
In Figure 6 the prediction for a new test simulation is reported. The model generalizes well on this problem, even though the geometries differ considerably in size. Moreover, there is no need to increase the number of message-passing steps, meaning that the GNN architecture has the same complexity as before. An important question that arises is whether our model can predict solutions where the obstacle has a different shape, and whether we can achieve this without having to retrain the whole network. To investigate this, we present an example in Figure 7 of a prediction obtained with a square obstacle located in the top right of the domain: strictly speaking, this configuration cannot be described in terms of our previous parametrization; nonetheless, we can still apply our GNN surrogate, as the latter only depends on the geometrical parameters through the underlying mesh (by itself, the parametrization never enters the equation). Surprisingly, the errors are of the same order of magnitude as those discussed earlier, and the prediction of the overall dynamics is remarkably accurate. This result is attributed to the ability of the model to understand different geometries by means of its inductive structure. GNNs, in particular, can automatically incorporate the geometrical structure of the domain by utilizing both the edge connectivity matrix and the edge features. However, some difficulty is observed in handling the nodes surrounding the obstacle, especially at the corners, but this does not appear to affect the overall accuracy of the prediction. These findings suggest that our model has the potential to generalize well to other geometries, without the need for extensive retraining, thus enhancing its practical applicability in real-world scenarios.

\begin{table} \begin{tabular}{|l l l l l l|} \hline & **RMSE** & **RMSE** & **RMSE** & \(t_{\text{FOM}}\) & \(t_{\text{GNN}}\) \\ & (mean) & (max) & (min) & & \\ \hline **Adve-diff** §5.1 & 1.20e\(-3\) & 6.10e\(-3\) & 4.0e\(-4\) & 159.80 s & 9.83 s \\ **Stokes 2D** §5.2 & 1.64e\(-2\) & 7.35e\(-2\) & 1.2e\(-3\) & 115.65 s & 7.51 s \\ **Stokes 3D** §5.3 & 4.37e\(-2\) & 6.24e\(-2\) & 1.9e\(-2\) & 729.42 s & 10.4 s \\ \hline \end{tabular} \end{table} Table 2: Comparison between FOM and GNN-surrogate in terms of model accuracy and computational time for the three case studies.

Figure 5: Test case 1, Advection-Diffusion problem. Left: \(L^{2}\) relative error vs Time plot. The dashed lines represent the first and third quartiles of the \(L^{2}\) errors among all the test predictions, while the orange line is the median. The shaded area can be considered a confidence region for the simulation error. Right: Test RMSE vs message-passing steps.
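The shape-transfer result above is easier to appreciate by looking at what the surrogate actually consumes: only the mesh graph. A minimal sketch of the graph construction (edge connectivity plus relative-position edge features) is given below; the exact feature set used in the paper may differ.

```python
import torch

def mesh_to_graph(coords, triangles):
    """Build a GNN input from a triangular mesh: coords has shape (n_nodes, 2),
    triangles has shape (n_tri, 3) with vertex indices. Any meshable obstacle
    shape yields a valid graph; no explicit parametrization is required."""
    # Collect directed edges from the triangle sides (both directions).
    e = torch.cat([triangles[:, [0, 1]], triangles[:, [1, 2]],
                   triangles[:, [2, 0]]], dim=0)
    e = torch.unique(torch.cat([e, e.flip(1)], dim=0), dim=0)
    senders, receivers = e[:, 0], e[:, 1]
    # Relative positions and distances as edge features (a common choice).
    rel = coords[receivers] - coords[senders]
    edge_feats = torch.cat([rel, rel.norm(dim=1, keepdim=True)], dim=1)
    return senders, receivers, edge_feats

# Toy mesh: two triangles sharing an edge.
coords = torch.tensor([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
tris = torch.tensor([[0, 1, 2], [1, 3, 2]])
s, r, ef = mesh_to_graph(coords, tris)
print(s.shape, r.shape, ef.shape)  # 10 directed edges, 3 features each
```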
### Advection-Diffusion problem in a 2D Stokes flow in proximity of a bump

We now consider another advection-diffusion problem as in (14), where the advection field \(\mathbf{b}\) is no longer fixed by hand, but is rather obtained by solving the following stationary Stokes problem: \[\begin{cases}-\nu\Delta\mathbf{b}+\nabla p&=0\qquad\text{in }\Omega\\ \nabla\cdot\mathbf{b}&=0\qquad\text{in }\Omega\end{cases} \tag{15}\] where \(p\) is the pressure field and the boundary conditions are given by: \[\mathbf{b}=0\text{ on }\Gamma_{D},\quad\mathbf{b}=\mathbf{b_{in}}\text{ on }\Gamma_{in},\quad\nu\frac{\partial\mathbf{b}}{\partial\mathbf{n}}-p\mathbf{n}=0\text{ on }\Gamma_{N},\] with \[\mathbf{b_{in}}=\left(\frac{40Uy(0.5-y)}{0.5^{2}},0\right),\quad U=0.3,\quad\nu=10^{-3}; \tag{16}\] \(\mathbf{b_{in}}\) represents the value of \(\mathbf{b}\) at the inflow \(\Gamma_{in}\), while \(\Gamma_{D}\) and \(\Gamma_{N}\) denote the Dirichlet wall (top and bottom) sides, and the Neumann right outflow boundary, respectively.

Figure 6: Test case 1, Advection-Diffusion problem. Prediction obtained for \(\boldsymbol{\mu}=(0.6,0.52,0.1)\). First row: rollout prediction. Second row: RMSE related to each timestep between the prediction and the corresponding ground truth solution.

This time, the domain \(\Omega\) is a rectangular channel \((0,1)\times(0,0.5)\) with a parametrized bump along the top wall edge. Here \(\Gamma_{in}=\{x=0\}\), \(\Gamma_{D}=\{y=0\}\cup\{y=0.5\}\) and \(\Gamma_{N}=\{x=1\}\). During our simulations, we shift the position of the bump so that its center \(c_{x}\) varies from \(0.35\) to \(0.65\). Hence, we consider \(\mu\in\Theta=[0.35,0.65]\). Regarding the advection-diffusion problem, at the inflow \(\Gamma_{in}\) we impose the Dirichlet boundary condition \[u_{in}(x,y)=\frac{4y(0.5-y)}{0.5^{2}},\] which is also the initial condition, while on \(\Gamma_{D}\) we impose no-slip boundary conditions and on \(\Gamma_{N}\) we set \(\partial u/\partial n=0\). The final simulation time is \(T=0.5\) and \(D=0.01\). Our results will only focus on the approximation of the solution \(u\) of the advection-diffusion problem, despite the latter also depending implicitly on (15).

#### 5.2.1 Problem data

Our dataset is composed of \(125\) simulations, each obtained for a different position of the bump. In each of these cases, the mesh is rebuilt, yielding a number of mesh nodes varying from \(937\) to \(1042\). The chosen time step is \(\Delta t=0.01\), resulting in \(51\) time snapshots for each simulation. The training set consists of \(100\) simulations, while the test set includes \(25\) simulations, both chosen randomly among the \(125\) FOM simulations. Unlike our previous test case, we consider a loss function where the two terms in (11) are weighted equally (cf. Table 1). As before, we refer to Table 1 for further details about the GNN and training hyperparameters.

#### 5.2.2 Numerical results

As before, quantitative results are reported in Table 2. In this more complex problem, the RMSEs are higher than the ones obtained in the previous example; however, the predictions are still fairly accurate and we still outperform the ground truth solver in terms of time efficiency. Indeed, as shown in Figures 8-9, the predicted dynamics is still very accurate and we do not spot any error propagation. The prediction seems to get worse at some nodes which are either close to the bump or to the upper edge, on which we have imposed the no-slip conditions.

Figure 7: Test case 1, Advection-Diffusion problem. Prediction obtained for \(\mathbf{\mu}=(0.7,0.7,0.3)\) with a square obstacle. First row: rollout prediction. Second row: RMSE related to each timestep between the prediction and the corresponding ground truth solution.
Conversely, the errors in proximity of the inflow are higher at initial times, but they tend to fade out as the simulation evolves (this is true also for our worst simulation, Figure 9). Our qualitative considerations are also supported by the plot in Figure 10, which reports the behavior in time of the \(L^{2}\) relative error between predictions and FOM solutions. Clearly, the first time instants are the most challenging ones, as that is when the inflow and the bump position determine the dynamics of the system. Overall, we highlight that the model is able to self-adjust, since errors tend to decrease as the simulation time evolves, also showing some degree of robustness to the possible presence of noise during the simulation. Of note, these considerations hold uniformly over the test set, as clearly indicated by the width of the quantile bands. This is a desirable property since real-world problems often have some degree of uncertainty or noise, and a model that can handle different scenarios is more likely to be useful in practice.

#### 5.2.3 Generalization to bumps with different positions and dimensions

This example can be generalized by letting the bump vary its dimension and possibly switch from the upper to the lower edge. Hence, we consider a new dataset consisting of 185 simulations in which the height of the bump is allowed to change, \(h\in\{0.08,0.12,0.175\}\), and its center can vary along both the upper and lower edges in the interval \([0.4,0.6]\). In other words, the new geometrical parameters are \(\boldsymbol{\mu}=(c_{x},c_{y},h)\in\Theta:=[0.4,0.6]\times\{0,0.5\}\times\{0.08,0.12,0.175\}\). The results show that the implemented GNN-based model is able to correctly learn the geometry of the problem even if the domain varies substantially within the dataset. For instance, in Figure 11 the height of the bump strongly influences the system dynamics; nonetheless, the network correctly infers the behavior of the flow around the obstacle. Here, the bump has height \(h=0.175\) and is located at the lower edge with center at \(x=0.453\), that is, \(\boldsymbol{\mu}=(0.453,0,0.175)\). The height of the bump has a significant impact on the accuracy of the model prediction, particularly near the upper edge of the domain. Errors that arise in this region can propagate throughout the domain, affecting the accuracy of predictions at other locations as well.

Figure 8: Test case 2, Advection-Diffusion problem in a 2D Stokes flow. Prediction obtained for \(\mu=0.58\) with the bump on the right part of the upper edge. First row: rollout prediction. Second row: RMSE related to each timestep between the prediction and the corresponding ground truth solution.

Figure 10: Test case 2, Advection-Diffusion problem in a 2D Stokes flow. \(L^{2}\) relative error vs Time plot: The dashed lines represent the first and third quartiles of the \(L^{2}\) errors among all the test predictions, while the orange line is the median. The shaded area can be seen as a confidence region for the simulation error.

Figure 9: Test case 2, Advection-Diffusion problem in a 2D Stokes flow. Worst case scenario: bump close to the inflow (\(\mu=0.355\)). First row: rollout prediction. Second row: RMSE related to each timestep between the prediction and the corresponding ground truth solution.
However, the self-adjustment mechanism of the model is effective in mitigating these errors as they propagate toward the outflow, resulting in improved accuracy in this region. Overall, the model's ability to account for the influence of the bump height on the flow dynamics contributes to its strong predictive performance. In Figure 12 the bump has height \(h=0.08\) and is located at the upper edge with center at \(x=0.467\), that is, \(\mathbf{\mu}=(0.467,0.5,0.08)\). The accuracy of the model predictions decreases when the size of the bump is smaller. This is primarily due to the fact that, as the size of the domain increases, so does the number of nodes, making the inference process more challenging. In this problem, the number of nodes varies from 936 to 1054, which is a wide range for unstructured meshes and geometries that differ significantly from each other. As a result, error propagation is more significant in this case compared to the other examples. This is highlighted by the persistence of relatively large errors at \(T=0.25\), despite the overall dynamics being well-predicted by the model. This suggests that the model can effectively capture the underlying physics of the system, even in cases where the inference is more challenging due to the higher number of nodes. In general, the \(L^{2}\)-errors exhibit the same behavior as before: see Figure 13 in comparison with Figure 10. We can further test the robustness of our model by evaluating its ability to predict solutions in channels with varying shapes of the bump, without requiring any retraining. Figure 14 displays the prediction results of a simulation with a triangular bump located on the upper edge. In addition to the fact that the errors are of the same order of magnitude as previously discussed, the overall dynamics is, once again, accurately predicted. However, the reduced regularity of the solution poses some difficulty for the model. Nonetheless, this does not appear to significantly impact the accuracy of the prediction. This example highlights the flexibility of graph neural networks in handling simulations with variable geometries and limited training data, while still producing reliable results.

Figure 11: Test case 2, Advection-Diffusion problem in a 2D Stokes flow. Prediction obtained for \(\mathbf{\mu}=(0.453,0,0.175)\) with the bump on the lower edge with center at \(x=0.453\) and height \(h=0.175\). First row: rollout prediction. Second row: RMSE related to each timestep between the prediction and the corresponding ground truth solution.

Figure 12: Test case 2, Advection-Diffusion problem in a 2D Stokes flow. Prediction with the bump on the upper edge with center at \(x=0.467\) and height \(h=0.08\) (\(\mathbf{\mu}=(0.467,0.5,0.08)\)). First row: rollout prediction. Second row: RMSE related to each timestep between the prediction and the corresponding ground truth solution.

Figure 13: Test case 2, Advection-Diffusion problem in a 2D Stokes flow. \(L^{2}\) relative error vs Time plot.

### Advection-Diffusion problem in a 3D Stokes flow around a cylinder

To further increase the problem difficulty, we finally consider the same problem discussed in Section 5.2, now set in a 3D domain obtained by extruding along the z-axis the rectangle \(R=(0,1)\times(0,0.5)\) with a cylindrical hole
\[C=\{(x,y):\;(x-c_{x})^{2}+(y-c_{y})^{2}\leq(0.05)^{2}\}.\] We let the position of the obstacle vary as \(\boldsymbol{\mu}=(c_{x},c_{y})\in\Theta:=[0.2,0.4]\times[0.2,0.3]\). This time, we exploit the FOM to generate 150 different simulations, 125 for training and 25 for testing.

#### 5.3.1 Problem data

The mesh nodes of the simulations vary from 1353 to 1542, thus increasing the complexity of the problem with respect to the other examples we have discussed so far. We implement the approach following the same ideas adopted for the previous test cases: we refer to Table 1 for further details about the network design and the training hyperparameters. In this regard, we have made some minor modifications to account for the increased complexity entailed by the presence of a three-dimensional geometry. These concern: an increased number of message-passing steps (to better cover the spatial domain), an increased noise variance (to further improve the stability of our simulations during rollout), and a reduced number of epochs (to avoid overfitting). By adopting this training strategy, we aim to strike a balance between model accuracy and computational efficiency while still being able to capture the complex dynamics of the system.

#### 5.3.2 Numerical Results

The results of the rollout predictions of the test simulations are summarized in Table 2. Clearly, the higher error obtained in this example is due to the increased complexity of the problem. However, it is noteworthy that despite the higher error, there is a significant improvement in time complexity. Once trained, our model can be up to two orders of magnitude faster than the FOM solver. This reduction in time complexity can lead to faster and more efficient simulations, which is particularly important for time-critical applications or when a large number of simulations is required. Therefore, despite the slightly higher error, our model can still provide a significant advantage in terms of time and computational resources.

Figure 14: Test case 2, Advection-Diffusion problem in a 2D Stokes flow. Prediction with a triangular bump on the upper edge. First row: rollout prediction. Second row: RMSE related to each timestep between the prediction and the corresponding ground truth solution.

Upon examining the predictions in greater detail, as shown in Figure 15, a comparison can be made between the prediction of a simulation with the obstacle positioned centrally and its corresponding ground truth solution. It is evident that the simulation deteriorates as it progresses toward the outflow. Unlike the 2D case, self-adjustment is not observed in this scenario, as the nodes located to the right of the obstacle are heavily influenced by its position. This may result in some values being underestimated in the prediction, particularly in the tail of the flow. Unfortunately, this is a known drawback of GNNs, as deep architectures tend to oversmooth predictions. Therefore, even if the dynamics are predicted accurately, node values may be more dispersed. Furthermore, this problem is exacerbated by an increase in the number of message-passing steps, which, in this example, is necessary for an acceptable prediction. Nevertheless, as illustrated in Figure 16, modifying the position of the obstacle does not significantly affect the overall accuracy of the solution. Despite the aforementioned issues, the flow pattern is captured correctly, and no propagation of the errors is observed.
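One of the stabilization choices listed in Section 5.3.1, injecting Gaussian noise into the input states during training (the \(\sigma^{2}\) column of Table 1), can be sketched as follows; how the perturbation enters the training loop is our assumption rather than a detail taken verbatim from the paper.

```python
import torch

def noisy_training_input(u_n, sigma2=1e-5):
    # Perturb the current state with zero-mean Gaussian noise of variance
    # sigma^2 before feeding it to the network. At rollout time the model
    # then sees its own slightly imperfect predictions as in-distribution
    # inputs, which improves long-horizon stability.
    return u_n + sigma2 ** 0.5 * torch.randn_like(u_n)

u = torch.randn(1400)           # a state on a ~1400-node 3D mesh
u_train = noisy_training_input(u, sigma2=1e-5)
print((u_train - u).std())      # roughly sqrt(1e-5), i.e. about 3.2e-3
```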
Of remarkable importance is the consistently accurate prediction in the proximity of the obstacle, which is always a critical region to predict. This observation underscores the model's ability to learn the geometrical properties of the problem while preserving the graph structure of the mesh. Therefore, these results suggest that the model is sufficiently robust in predicting flow patterns in various configurations, and can generalize well to other geometries. Upon observing the \(L^{2}\) relative error plot on the test set in Figure 17, we can draw quantitative conclusions regarding the previously discussed results. The plot indicates that the test error has an appropriate upper bound and that it increases significantly during the first few time steps, which is consistent with the observed prediction behavior. After the initial increase, the error gradually decays, showing that the model has learned the underlying dynamics of the system. However, towards the end of the simulation, we observe a slight increase in the error, which is consistent with what we have previously mentioned about the tendency of these architectures toward dispersion. This behavior may be due to the accumulation of errors during long-term prediction. Therefore, we can conclude that while the GNN-based model shows promising results, there is still room for improvement in terms of accuracy and robustness.

Figure 15: Test case 3, Advection-Diffusion problem in a 3D Stokes flow. Prediction obtained for \(\mathbf{\mu}=(0.29,0.25)\). 3 time steps of simulation. First row: Rollout prediction. Second row: ground truth solution.

Figure 16: Test case 3, Advection-Diffusion problem in a 3D Stokes flow. Prediction obtained for \(\mathbf{\mu}=(0.23,0.3)\). 3 time steps of simulation. First row: Rollout prediction. Second row: ground truth solution.

### Comparison with Feed Forward Neural Networks

Feed Forward Neural Networks (FFNNs) are usually employed for building reduced-order models because they have the capability to capture strong nonlinearities through their fully connected structure [22, 28, 24]. As we explained in the introduction, however, using FFNNs is not straightforward when dealing with geometric variability, as these models require fixing both the input and the output dimension. We recall, in fact, that FFNN architectures are nothing but MLP units. More precisely, any FFNN model of depth \(s\geq 1\) is a map of the form \[\Psi:=L_{s+1}\circ L_{s}\circ\cdots\circ L_{1}, \tag{17}\] where \(L_{i}:\mathbb{R}^{n_{i}}\rightarrow\mathbb{R}^{n_{i+1}}\) are nonlinear maps (layers) operating as \[L_{i}:\mathbf{v}\mapsto\rho_{i}\left(\mathbf{W}_{i}\mathbf{v}+\mathbf{b}_{i}\right),\] where \(\rho_{i}:\mathbb{R}\rightarrow\mathbb{R}\) is the activation function (acting componentwise), while \(\mathbf{W}_{i}\) and \(\mathbf{b}_{i}\) are the trainable parameters (weights and biases, respectively). Compared to GNNs, these computational units are substantially less flexible, as they can only accept inputs of a specific dimension (here, \(n_{1}\)), and will always produce outputs of a given size (here, \(n_{s+2}\)). Consequently, it would be impossible to implement a model such as (10) using naive FFNNs, at least not in the case of parametrized domains. Still, it is true that we could circumvent this problem by interpolating each PDE solution over a fixed rectangular grid, with the convention that \(u_{\mathbf{\mu}}(\mathbf{x})=0\) if \(\mathbf{x}\notin\Omega_{\mathbf{\mu}}\).
It would then be possible, in principle, to replicate the same construction proposed in Section 4, but using only dense architectures such as (17). Our claim, however, is that without the _relational inductive bias_ of GNNs, these models have no hope of generalizing over unseen geometries.

Figure 17: Test case 3, Advection-Diffusion problem in a 3D Stokes flow. \(L^{2}\) relative error vs Time plot: The dashed lines represent the first and third quartiles of the \(L^{2}\) errors among all the test predictions, while the orange line is the median. The shaded area can be seen as a confidence region for the simulation error.

To prove this, we shall compare the performances obtained by using either FFNNs or GNNs in (10) on the two examples of Sections 5.1 and 5.2. In particular, to simplify, we consider for each problem the following datasets:

* for the first Advection-Diffusion problem, we let the obstacle vary only in its position, resulting in 100 random simulations (80 for training and 20 for testing);
* for the Advection-Diffusion problem in a 2D Stokes flow, we let the bump vary in its position along the upper and lower edges but not in its height, resulting in 125 random simulations (100 for training and 25 for testing).

In order to train the FFNNs, we map all the simulations onto a common rectangular grid consisting of \(128\times 128\) vertices. We then repeat the same construction presented in Section 4, up to replacing the GNN modules with MLP layers. Since FFNNs tend to overfit if trained for a long time, we train the models for 500 epochs, using the same learning rate and loss weights as the ones described in Sections 5.1 and 5.2. Results are reported in Figures 18-20. As the boxplots show, the quality of FFNN predictions varies considerably from case to case; conversely, GNN surrogates are much more stable and exhibit errors with a much smaller variability. Two examples of FFNN predictions on test set simulations, together with the corresponding GNN predictions, are shown in Figures 19 and 20. The predictions are obtained using the same values of the geometrical parameters, so as to highlight the difference in generalization to unseen domains. Here, the difference between the two approaches becomes evident. While FFNNs may capture the overall dynamics of the system quite well, they fail at understanding the geometrical properties of the solution, which ultimately makes them unable to generalize (see, e.g., Figure 20, where the FFNN clearly ignores the actual location of the bump). Another key aspect to analyze is the number of model parameters. In fact, due to their fully connected structure, the complexity of FFNNs can increase dramatically with the problem complexity, implying a higher tendency to overfitting (hence the need for more training data) and less scalable models. A possible strategy to overcome both these issues could be to rely on grid-based models, such as Convolutional Neural Networks (CNNs). In fact, these models can reduce the number of parameters by sharing them, which helps to mitigate the overfitting issue. However, while CNNs might be able to capture the dynamics of the system, they would still ignore the geometric structure of the problem, which makes them unsuitable for complex geometries. Similarly, alternative approaches such as Mesh-Informed Neural Networks (MINNs) [11] do not provide a comprehensive solution, as they can only tackle one geometry at a time.
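For completeness, the dense baseline (17) can be written down directly; the frozen input and output sizes (here the flattened \(128\times 128\) grid) make the dimensionality constraint discussed above explicit. Width and depth are illustrative choices, not the values used in the paper.

```python
import torch
import torch.nn as nn

class FFNN(nn.Module):
    """A map Psi = L_{s+1} o ... o L_1 as in (17), with layers
    L_i(v) = rho_i(W_i v + b_i). Input and output sizes are fixed at
    construction time, which is exactly what prevents generalization
    across meshes with different numbers of nodes."""
    def __init__(self, n_in=128 * 128, n_out=128 * 128, width=256, depth=3):
        super().__init__()
        sizes = [n_in] + [width] * depth + [n_out]
        layers = []
        for a, b in zip(sizes[:-1], sizes[1:]):
            layers += [nn.Linear(a, b), nn.SiLU()]
        layers.pop()                      # no activation after the last layer
        self.net = nn.Sequential(*layers)

    def forward(self, v):
        return self.net(v)

model = FFNN()
u = torch.randn(4, 128 * 128)             # 4 snapshots on the common grid
print(model(u).shape)                      # torch.Size([4, 16384])
```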
Figure 18: Boxplots of the RMSEs of the two approaches.

Figure 19: Test case 1, Advection-Diffusion problem. Comparison between FFNN and GNN prediction for \(\mathbf{\mu}=(c_{x},c_{y})=(0.4,0.5)\). First row: FFNN prediction. Second row: GNN prediction.

Figure 20: Test case 2, Advection-Diffusion problem in a 2D Stokes flow. Comparison between FFNN and GNN prediction for \(\mathbf{\mu}=(c_{x},c_{y})=(0.6,0)\). First row: FFNN prediction. Second row: GNN prediction.

## Conclusions

We presented a novel approach to surrogate modeling based on Graph Neural Networks (GNNs) for the efficient evolution of dynamical systems defined over parameter-dependent spatial domains. The approach differs substantially from classical Reduced Order Modelling techniques, in that it provides a way to handle parameter-dependent PDEs with a variable number of degrees of freedom. The method is based on a data-driven time-stepping scheme that explicitly accounts for the Markovian structure of the dynamical system, while also including geometric information via GNN modules. The approach is shown to be capable of yielding stable simulations, even for long rollouts, while simultaneously generalizing to unseen geometries, thus providing remarkable benefits when compared to other techniques based on different neural network architectures. Although our experiments are limited to fairly simple problems, the results indicate that GNNs can be a valuable tool for ROM practitioners, providing researchers with new ways of handling geometric variability. Future research may involve the exploration of hybrid approaches where GNNs are combined with other well-established Deep Learning-based reduced order models, such as autoencoders and U-Net-like architectures, in an attempt to generalize the whole idea to more complicated problems with thousands or millions of degrees of freedom. Another interesting question could be whether this approach can benefit from the integration of suitable attention mechanisms, or other forms of neural network architectures, that can selectively weight the contributions of different nodes in the graph. We leave these considerations for future work.

## Acknowledgments

The present research is part of the activities of the project Dipartimento di Eccellenza 2023-2027, funded by MUR, and of the FAIR (Future Artificial Intelligence Research) project, funded by the NextGenerationEU program within the PNRR-PE-AI scheme (M4C2, Investment 1.3, Line on Artificial Intelligence). NF, SF and AM are members of Gruppo Nazionale per il Calcolo Scientifico (GNCS) and of Istituto Nazionale di Alta Matematica (INdAM).

## Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

## Author Declarations

The authors have no conflicts to disclose.
2308.11411
Extracting Relational Triples Based on Graph Recursive Neural Network via Dynamic Feedback Forest Algorithm
Extracting relational triples (subject, predicate, object) from text enables the transformation of unstructured text data into structured knowledge. Named entity recognition (NER) and relation extraction (RE) are two foundational subtasks in this knowledge generation pipeline. Integrating the subtasks poses a considerable challenge due to their disparate nature. This paper presents a novel approach that converts the triple extraction task into a graph labeling problem, capitalizing on the structural information of dependency parsing and graph recursive neural networks (GRNNs). To integrate the subtasks, this paper proposes a dynamic feedback forest algorithm that connects the representations of the subtasks via inference operations during model training. Experimental results demonstrate the effectiveness of the proposed method.
Hongyin Zhu
2023-08-22T13:00:13Z
http://arxiv.org/abs/2308.11411v1
Extracting Relational Triples Based on Graph Recursive Neural Network via Dynamic Feedback Forest Algorithm ###### Abstract Extracting relational triples (subject, predicate, object) from text enables the transformation of unstructured text data into structured knowledge. Named entity recognition (NER) and relation extraction (RE) are two foundational subtasks in this knowledge generation pipeline. Integrating the subtasks poses a considerable challenge due to their disparate nature. This paper presents a novel approach that converts the triple extraction task into a graph labeling problem, capitalizing on the structural information of dependency parsing and graph recursive neural networks (GRNNs). To integrate the subtasks, this paper proposes a dynamic feedback forest algorithm that connects the representations of the subtasks via inference operations during model training. Experimental results demonstrate the effectiveness of the proposed method.

## Introduction

Extracting triples from unstructured documents is a fundamental technology for constructing knowledge bases. This process involves identifying and extracting (subject, predicate, object) triples from textual input, as depicted in Figure 1. Named entity recognition (NER) and relation extraction (RE) methods can collaborate in the triple extraction pipeline to identify entities and relations, respectively. Prior works mainly train these two subtasks separately because they form a cascade, in which the output of the NER subtask and the input of the RE subtask are completely different. Despite the flexibility of pipeline methods, they block the connection between the two subtasks. Some approaches [14] address the joint extraction problem by sharing features between the two subtasks, but such two-stage models require separate training and struggle to fully exploit the relationship between the subtasks. Task conversion resolves this issue by eliminating the cascade, e.g., by converting joint extraction into a sequence labeling problem (Zheng et al. 2017), but the resulting performance is potentially limited by the design of the new tag scheme. Ideally, we would like to overcome both problems by integrating the two subtasks in one model without modifying the original tag scheme.

Smoothly integrating cascaded tasks is nontrivial. The difficulty is that relation classification (RC) aims to classify the relation between an entity pair, while the locations of the entities are not yet known. The main gap between the NER and RC subtasks is how to generate relation candidates, which makes model integration a challenging issue. This paper introduces a dynamic feedback forest algorithm (DFF) that integrates the subtasks into a joint optimization process by using inference operations during model training. These operations can directly connect the representations of NER and RE. The model can then use the original tags to supervise the two subtasks together. The DFF algorithm dynamically constructs multiple subgraphs to form a forest and teaches the model according to the deviation between the current prediction and the ground truth. The algorithm executes the inference and feedback operations during model training, adapting the model to new samples.

The second challenge is to unify the representations of entities and relations. This paper maps entities and their relations onto the dependency graph. We apply the idea of the recursive neural network to the dependency graph so that we can directly label a vertex as an entity, a relation instance, or none.

Sometimes, there are different relations between the same entity pair, e.g., "_Iraq's capital is Baghdad_" and "_Iraq contains Baghdad_", as shown in Figure 1. The new tag scheme of prior work is incapable of dealing with overlapping relations (where an entity belongs to more than one relation). Other works [20, 17] design new strategies to solve this problem. Our GRNN model is naturally compatible with different relations between an entity pair without changing the network. To further simplify the network, we also test the hypothesis that a unified representation can be obtained to resolve the two subtasks with our GRNN.

Figure 1: An example of extracting triples from unstructured text
Sometimes, there are different relations between the same entity pair, i.e., "_Iraq's capital is Baghdad_" and "_Iraq contains Baghdad_", as shown in Figure 1. The prior new tag scheme is incapable of dealing with overlapping relations (an entity belongs to more than one relation). Other works [20, 17] design new strategies to solve this problem. Our GRNN model is naturally compatible with different relations between an entity pair without changing the network. To further simplify the network, we also test the hypothesis that a unified representation can be obtained to resolve two subtasks by our Figure 1: An example to extracting the triples from unstructured text GRNN. We conduct experiments on the NYT1 dataset. Experimental results demonstrate the effectiveness of our approach. The major contributions of this paper can be summarized below. Footnote 1: [https://github.com/shanzhenren/CoType](https://github.com/shanzhenren/CoType) (i) This paper explores a graph labeling approach to resolving the triple extraction task. (ii) This paper utilizes graph recursive networks to model the triple extraction task with joint task learning and joint representation learning. (iii) This paper introduces the DFF algorithm, which enables the model to connect representations of different subtasks through inference operations during training, facilitating the learning process. ## Related Work For the joint extraction task, Miwa and Bansal (2016) propose a method using sequential and separate tree-structured LSTM-RNNs for NER and RC. Their work represents the relation by the shortest dependency path, while our model uses a vertex to represent any entity or relation. Their method use scheduled sampling, entity pretraining, and label embedding to enhance the training process, while we use the maximum log-likelihood with the DFF algorithm. Ren et al. (2017) use a domain-agnostic segmentation algorithm to mine entity mentions, and convert the task into a global embedding problem. Khashabi (2013) present a recursive neural network-based method to extract entity and relation separately, but leave the joint learning for future work. Our model solves the joint learning problem. Zheng et al. (2017) propose a novel tag scheme to convert this task into a sequence labeling problem, but it increases about eight times of classes and cannot deal with the overlapping relations. Wang et al. (2018) propose a neural transition-based approach and a suite of transition schemes, while our method is only based on the graph structure of dependency parsing. Zeng et al. (2018) propose the One-Decoder and Multi-Decoder approaches to extract relational facts with copy mechanism. They divide the sentences into three types according to the triplet overlap degree, and our approach is also compatible with overlapping relations. Liu et al. (2018) propose the Seq2RDF which aims to map textual input to existing RDF triples, so it relies on the knowledge graph vocabulary, while our approach aims to extract triples directly from unstructured documents in a general way. ## Methods ### Task Definition Given a text sequence \(x=[w_{1},...,w_{n}]\) where \(w_{i}\) is the \(i\)-th token, the triple extraction task aims to extract multiple (\(s_{[type]}\), \(p\), \(o_{[type]}\)) where \(s\) and \(o\) represent two non-overlapping consecutive spans, \(s=[w_{s1},w_{s1+1}...,w_{s2}]\) and \(o=[w_{o1},w_{o1+1}...,w_{o2}]\). \(p\in R\) is the relation type, where \(R\) is a set of predefined relation types. 
### The Graph Labeling Scheme

Figure 2 uses an example to demonstrate how to convert the joint extraction task into a graph labeling problem. The lower layer uses a Bi-LSTM encoder to generate the contextual representation. Then, the intermediate layer uses Stanford CoreNLP [10] to convert the sentences into dependency graphs. This model uses vertices to represent entities and relations. Finally, the model integrates the subtasks using the DFF algorithm (introduced in the Training Algorithm subsection).

We take a simple example to demonstrate the graph labeling scheme. Figure 3 contains three entities (Airbus [ORG], Toulouse [LOC], and France [LOC]). Each entity is mapped to the corresponding vertex. Each relation is mapped to the least common ancestor (LCA) of the two entity subgraphs, e.g., the relation of (Airbus, relation, Toulouse) is represented by the vertex "based", which is the LCA of "Airbus" and "Toulouse" and implies the relation type "_/business/company/place_founded_". Any relation or entity can be represented by a vertex, so this model can directly classify the vertices. For entity phrases, this model maps each entity to a subgraph and takes the root vertex as its representation, as in [11]. In some cases, two entities share the same root; for example, the _Los Angeles Lakers_ and _Kobe Bryant_ of Figure 4 share the same root _Bryant_ in the dependency graph. To better differentiate the entities, an entity vector is finally composed of the vertex representation and the average pooling of the entity span's token representations.

Figure 3: An example demonstrating the entity and relation representation

Figure 2: Joint task learning (JTL) network for integrating subtasks

Sometimes, there are different relations between two entities. In addition to the 1-of-n classification, this model can be extended to a multi-label classification form, which can extract different relations of the same entity pair via multi-label graph labeling for each vertex.

### GRNN Units

We have obtained the graph structure of each sample. Gated units and CNNs have achieved impressive performance in many deep-learning models. To better process the graph data, we implement different neural units (LSTM [10], GRU [12], and CNN [14]) that are compatible with our GRNN model.

**Graph RNN** For the basic perceptron, each source vertex accumulates the information from its target vertices through a non-linear activation function as below. \[h_{t}=\tanh(Wx_{t}+\Sigma_{j=1}^{P(t)}U_{m(t,j)}h_{j}+b) \tag{1}\] where \(W\in\mathbb{R}^{d\times l},U\in\mathbb{R}^{m\times d\times d},h\in\mathbb{R}^{d},x\in\mathbb{R}^{l},b\in\mathbb{R}^{d}\). The \(t\) and \(j\) are the indices of the source and target vertices, respectively. \(P(t)\) is the number of target vertices of the source vertex \(t\). \(U\) is a group of edge parameters, where \(m\) denotes the 193 edge types and \(d\times d\) is the dimension of an edge embedding. \(U_{m(t,j)}\) denotes the edge embedding that connects source \(t\) and target \(j\). Considering edge types allows the model to control different substreams with different weights. \(h\) and \(x\) denote the vertex representation and input, respectively. To resolve the vanishing gradient problem, we also implement gated units.

**Graph LSTM** For the LSTM unit, this paper simplifies the network by using a group of edge parameters (the same as the variable \(U\) in formulation (1)), based on the study of [12].
Compared with the linear chain LSTM unit, the main improvement is a separate forget gate for each input edge, which enables selective control of different edges. \[i_{t} =\sigma(W_{i}x_{t}+\Sigma_{j=1}^{P(t)}U_{m(t,j)}h_{j}+b_{i}) \tag{2}\] \[f_{m(t,j)} =\sigma(W_{f}x_{t}+U_{m(t,j)}h_{j}+b_{f})\] (3) \[o_{t} =\sigma(W_{o}x_{t}+\Sigma_{j=1}^{P(t)}U_{m(t,j)}h_{j}+b_{o})\] (4) \[\tilde{c_{t}} =\tanh(W_{c}x_{t}+\Sigma_{j=1}^{P(t)}U_{m(t,j)}h_{j}+b_{c})\] (5) \[c_{t} =i_{t}\odot\tilde{c_{t}}+\Sigma_{j=1}^{P(t)}f_{m(t,j)}\odot c_{j}\] (6) \[h_{t} =o_{t}\odot\tanh(c_{t}) \tag{7}\] where \(h\), \(c\), and \(o\) are the hidden state, the cell state, and the output gate, respectively. \(W\), \(U\) and \(b\) are model parameters. The \(\odot\) represents the Hadamard product (pointwise multiplication).

**Graph GRU** In analogy with the graph LSTM unit, the graph GRU unit separates the reset gate for each edge. In the graph LSTM, the output gate and the cell state limit the hidden state to an effective range, as in formulation (7). However, for large hidden-state values, the outputs of the graph GRU might grow large in magnitude. To counteract this effect, this paper adds a non-linear activation on each edge to limit the response in practice. \[p_{j} =\tanh(U_{m(t,j)}h_{j}) \tag{8}\] \[z_{t} =\sigma(W_{z}x_{t}+\Sigma_{j=1}^{P(t)}p_{j}+b_{z})\] (9) \[r_{m(t,j)} =\sigma(W_{r}x_{t}+b_{r}+p_{j})\] (10) \[\tilde{h_{t}} =\sigma(W_{h}x_{t}+U_{r}\Sigma_{j=1}^{P(t)}(r_{m(t,j)}\odot p_{j})+b_{h})\] (11) \[h_{t} =(1-z_{t})\odot\Sigma_{j=1}^{P(t)}p_{j}+z_{t}\tilde{h_{t}} \tag{12}\] where \(h\), \(z\), and \(r\) are the output vector, the update gate vector, and the reset gate vector. \(W\), \(U\) and \(b\) are model parameters.

**Graph RCNN** The recursive neural network [15] can only process binary combinations and is not suitable for graph data, since a source vertex may have two or more targets. This paper adopts the RCNN unit [15], which can deal with k-ary parsing trees. To make the RCNN unit compatible with our GRNN, we add different edge types and generalize the RCNN unit to DAGs. Convolutional neural networks [14] utilize layers with convolving filters to extract local features. CNN models have proven effective for many NLP tasks [15]. \[H^{(t)}=\tanh(W*X) \tag{13}\] where \(*\) denotes the 1-D convolution. \(W\in\mathbb{R}^{c\times l}\) is the convolving filter, where \(c\) is the window size and \(l\) is the vector dimension. As shown in Figure 5, the left part shows the predicted triples and the right part shows a graph RCNN network. The input \(X\in\mathbb{R}^{P(t)\times l}\) is composed of the \(v_{(t,i)}\). Let \(\oplus\) represent the concatenation operation. \[X =[v_{(t,1)},v_{(t,2)}...,v_{(t,P(t))}] \tag{14}\] \[v_{(t,i)} =x_{t}\oplus h_{(t,i)}\oplus d_{(t,i)} \tag{15}\] where \(x_{t}\) is the word embedding of the current (source) vertex, \(h_{(t,i)}\) (\(i=1,...,P(t)\)) denotes the representation of the \(i\)-th target vertex of source vertex \(t\), and \(d_{(t,i)}\) is the distance embedding [15] of vertex \(i\). The distance embedding represents the relative distance between the source vertex \(t\) and the \(i\)-th target vertex with a fixed-length vector. To keep the ordering consistent, the network arranges the target vertices in the natural order of the words in the sentence. A vertex without any target vertices is represented by its word embedding and a zero vector. The output of the convolution operation is \(H^{(t)}=[h_{1},h_{2},...,h_{K}]\), where \(K\) varies with the number of target vertices.
Then the pooling operation captures the most informative features across the rows. \[h_{t}=\max_{j}H^{(t)}_{ij} \tag{16}\]

Figure 4: An example in which two entities share the same root

The above neural units enhance graph data processing through different feature extraction operations. To reduce the computational complexity, we set the maximum recursive depth to 6. The uni-directional GRNN is a top-down process; we also build a bidirectional GRNN, which can capture features in both the top-down and bottom-up directions.

### Joint Learning Networks

We employ two networks to integrate our neural units. The main difference between the two networks is the way they decode each subtask.

**Joint task learning network.** As shown in Figure 2, the higher layer uses two decoders to jointly learn task-specific representations. We refer to it as joint task learning (JTL). For the NER subtask, we adopt the BIOES scheme [14] in a uni-directional LSTM-RNN. To retain more influence from the previous step, this decoder also feeds the previous hidden state into the current step. For the RE subtask, the input of the GRNN is a graph where each vertex is mapped to the contextual representation of the encoder layer. The edge embeddings are jointly learned to control the children's streams. The vertices of the GRNN are mainly used to construct triple representations for the RC subtask.

**Joint representation learning network.** To test whether this network can learn a unified representation [11, 12], we simplify the network by keeping only the standalone decoder. We refer to it as the joint representation learning (JRL) network, as shown in Figure 6. The output of the encoder layer is fed into the GRNN sub-network, and the two subtasks then directly use the representations of these vertices for prediction.

### Training Algorithm

During model training, the DFF algorithm has an inference operation that predicts entities and dynamically generates triple candidates using all possible combinations of entities. This algorithm enables the model to dynamically adapt to new patterns in the data. The downside is that there is a sort of information loss [11] in the discrete inference process. The algorithm mainly contains two steps, inference and feedback, as shown in Algorithm 1.

(1) In the inference step, the model first predicts/infers the label sequence, as shown in line 1, and uses the combinations of the predicted entities as triple candidates, as shown in lines 2-3. Then, the model dynamically extracts the subgraphs of multiple triples to construct the forest on the top layer, as shown in lines 5-9. The model uses the indices of \(s\) and \(o\) to find the LCA index as \(p_{w}\). Then, the model maps the above indices into the GRNN to get the vector representations for a triple, as shown in line 8. In lines 10-16, the model also memorizes the ground-truth entity pairs that were not predicted, using the same operations as lines 5-9. (2) In the feedback step, the model generates a relation-type hypothesis for each candidate triple using a single-layer neural network, as shown in line 17. Then, it compares the hypotheses with the ground truth to get the feedback signal and update the model state through backpropagation. The feedback signal is generated by the loss function used to calculate the gradients, as shown below.

\[\max_{\theta}L=\sum_{i=1}^{|\mathbb{D}|}\Big(\log\prod_{j}p_{ner}(y_{ner_{j}}^{(i)}|\mathbf{z}_{i};\theta)+\log\prod_{k}p_{re}(y_{re_{k}}^{(i)}|\mathbf{z}_{i};\theta)\Big)\]

where \(p_{ner}\) and \(p_{re}\) represent the predicted probability of the actual class for the NER and RE subtasks, respectively. \(|\mathbb{D}|\) is the dataset size and \(\mathbf{z}\) denotes the input sequence. \(j\) and \(k\) denote the indices of the entities and relations, respectively. \(\theta\) is the model parameter. The final model generated by the DFF algorithm is a recursive computational graph where the parameters can be optimized jointly.
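A compact, runnable sketch of the inference step may help; the LCA lookup on a parent-pointer dependency tree is implemented in full, while entity decoding and the single-layer relation scorer are omitted. The helper names and the toy tree are ours, not the paper's code.

```python
from itertools import permutations

def lca(parent, u, v):
    """Least common ancestor in a dependency tree given parent pointers."""
    ancestors = set()
    while u is not None:
        ancestors.add(u)
        u = parent.get(u)
    while v not in ancestors:
        v = parent[v]
    return v

def dff_inference(predicted_entities, gold_pairs, parent):
    """Schematic inference step of DFF: pair up predicted entity roots as
    candidates, add missed ground-truth pairs, and map each pair to its LCA
    vertex, which represents the relation to be classified."""
    candidates = list(permutations(predicted_entities, 2))
    candidates += [p for p in gold_pairs if p not in candidates]
    return [(s, o, lca(parent, s, o)) for s, o in candidates]

# Toy dependency tree for "Airbus is based in Toulouse": parent pointers
# by token index, with "based" (index 2) as the root.
parent = {0: 2, 1: 2, 2: None, 3: 2, 4: 3}
print(dff_inference([0, 4], {(0, 4)}, parent))
# -> [(0, 4, 2), (4, 0, 2)]; vertex 2 ("based") is the relation vertex.
```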
Figure 5: The GRNN-RCNN extraction example of ”Airbus is based in Toulouse, France.”

Figure 6: JRL network for integrating subtasks and representations

## Experiments

### Experiment Setup

**Dataset** The NYT dataset is generated by aligning Freebase relations with news articles of The New York Times from 1987 to 2007. The training set contains 1.18M sentences with 47 entity types and 24 relation types [Ren et al. (2017)], e.g., ”_/business/company/founders_”, ”_/sports/sports_team/location_”, etc. We exclude the ”None”-label relation, following [Zheng et al. (2017); Ren et al. (2017)]. During the training process, the samples with only ”None”-label relations have little effect on the final result, and we remove them. Thus, we use 66,336 training samples (about 1/3 of the training set) to reduce the training time. The test set contains 395 samples manually annotated by the authors of [Hoffmann et al. (2011)].

**Evaluation** We adopt the standard micro F1 score, recall (Rec.), and precision (Prec.) as the metrics for the NER and RE subtasks. For the final result, a prediction is correct when the extracted triple matches the ground truth, including both entities, the relation direction, and the relation type. For the NER subtask, we consider the entity type, length, and position in the sentence.

**Hyperparameters** The input word is projected to a 200-D pre-trained GloVe [Pennington, Socher, and Manning (2014)] word embedding. The hidden state of the encoder is 300-D. The dimension of the GRNN units (including the LSTM, GRU and RCNN units) is 100-D. We split the training data into 100 pieces to select better models. We first use single-sample training to get a good model and then adopt batch training (batch size 64) to fine-tune the model. We ran the experiments on an AMD Ryzen 5 1500X quad-core processor @ 3.5 GHz (16 GB memory) and GTX 1070 Ti GPUs (8 GB).

### Results of JTL network

This subsection first reports the results of the 1-of-n classification form. We use Bi to denote bidirectional modeling. RCNN, GRU, and LSTM denote the computational units in the GRNN models. Table 1 reports the results. The first two parts are the pipeline methods and the joint extraction methods, respectively. The third part contains our methods, where different units are implemented to augment the basic GRNN. To eliminate the influence of random factors, we run each experiment three times and take the average. The basic GRNN (Bi-GRNN) also achieves a good result, which means that the mechanism of the GRNN is effective for this task. Compared with other studies, the results of the GRNN models are more balanced, while the recall and precision of other joint extraction models are less balanced. This indicates that the global optimization process allows the model to find a better balance. The gated units improve the results since they alleviate the vanishing gradient problem.
This indicates that considering long-range graph dependencies helps encode richer relation representations. This setting improves the F1 scores by 4.40% and 2.90% on the NER and RE subtasks, respectively. Although such a strategy is naive, by using the \(W,b\) parameters this model achieves a result (52.6% F1 score) competitive with the LSTM decoder. This is because the NER representation is enhanced by the RE subtask through jointly updating the model. This also indicates that keeping the vertex ordering consistent is essential.

**Comparison of the training process** To observe the training process, we split the first training epoch into 25 intervals and evaluate the models of Table 2 on the test set. As shown in Figure 8, the JTL and JRL networks learn the NER and RE subtasks simultaneously. The learning process is effective, and some model states achieve good results. This suggests that model selection is important in this approach. We also observe that Bi-Standalone+sort (the sorted JRL, as shown in Figure 7) achieves higher results on the RE subtask than Bi-GRNN-LSTM. This is because the JRL network encodes more entity information in the relation representation. This experiment shows that using the same learned representation for different subtasks can help compress the model parameters.

## Conclusion

This paper introduces a novel approach to extracting relational triples from English textual input by leveraging graph recursive network models. The proposed methods integrate the named entity recognition (NER) and relation extraction (RE) subtasks in a joint optimization process, utilizing the GRNN and the DFF training algorithm. This approach eliminates the need for designing new tag schemes and bridges the gap between the subtasks by connecting the representations of both subtasks through inference operations during model training. Experimental results demonstrate the effectiveness of our model in integrating subtasks and representations. Moreover, this model can be adapted to encode subgraphs [11] and applied in downstream applications [11, 12].
2308.13212
SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases
Graph Neural Networks (GNNs) with equivariant properties have emerged as powerful tools for modeling complex dynamics of multi-object physical systems. However, their generalization ability is limited by the inadequate consideration of physical inductive biases: (1) Existing studies overlook the continuity of transitions among system states, opting to employ several discrete transformation layers to learn the direct mapping between two adjacent states; (2) Most models only account for first-order velocity information, despite the fact that many physical systems are governed by second-order motion laws. To incorporate these inductive biases, we propose the Second-order Equivariant Graph Neural Ordinary Differential Equation (SEGNO). Specifically, we show how the second-order continuity can be incorporated into GNNs while maintaining the equivariant property. Furthermore, we offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states, which is crucial for model generalization. Additionally, we prove that the discrepancy between this learned trajectory of SEGNO and the true trajectory is bounded. Extensive experiments on complex dynamical systems including molecular dynamics and motion capture demonstrate that our model yields a significant improvement over the state-of-the-art baselines.
Yang Liu, Jiashun Cheng, Haihong Zhao, Tingyang Xu, Peilin Zhao, Fugee Tsung, Jia Li, Yu Rong
2023-08-25T07:15:58Z
http://arxiv.org/abs/2308.13212v2
# Physics-Inspired Neural Graph ODE for ###### Abstract Simulating and modeling the long-term dynamics of multi-object physical systems is an essential and challenging task. Current studies model physical systems using Graph Neural Networks (GNNs) with equivariant properties. Specifically, they model the dynamics as a sequence of discrete states with a fixed time interval and learn a direct mapping between any two adjacent states. However, this direct mapping overlooks the continuous nature of the dynamics between the two states. Namely, we have verified that there are countless possible trajectories between two discrete dynamic states in current GNN-based direct-mapping models. This issue greatly hinders the generalization ability of the model, leading to poor performance in long-term simulation. In this paper, to better model the latent trajectory through discrete supervision signals, we propose a **P**hysics-**I**nspired **N**eural **G**raph **O**DE (PINGO) algorithm. In PINGO, to ensure the uniqueness of the trajectory, we construct a Physics-Inspired Neural ODE framework to update the latent trajectory. Meanwhile, to effectively capture intricate interactions among objects, we use a GNN-based model to parameterize the Neural ODE in a plug-and-play manner. Furthermore, we prove that the discrepancy between the learned trajectory of PINGO and the true trajectory can be theoretically bounded. Extensive experiments verify our theoretical findings and demonstrate that our model yields an order-of-magnitude improvement over the state-of-the-art baselines, especially on long-term predictions and roll-out errors. ## 1 Introduction It is a vital problem to simulate and model the complex dynamics of multi-object physical systems, i.e., N-body systems. This problem is relevant to numerous fundamental scientific domains, including molecular dynamics [19], protein folding [11], drug and catalyst virtual screening [32], robot motion planning/control [30], and cosmological simulation [34]. Because of the complex interaction of multiple objects in an N-body system, recent studies propose to use Graph Neural Networks (GNNs) [10; 28] to model N-body systems. Specifically, they model the objects in the physical system as nodes and the physical relations as edges, and use a message-passing network to learn interactions among nodes. Recently, some works try to encode physical symmetry into GNNs to ensure translation/rotation/reflection equivariance between the geometric input and output of the GNN. These models are known as equivariant GNNs [3; 14; 15; 28] and have emerged as a leading class of approaches for N-body system modeling. Most of the existing studies on equivariant GNNs treat the dynamic process as a sequence of discrete states, encompassing the positions, velocities, and forces of each object. Then, given an input state, a direct mapping is learned by constraining the model output to approximate the next adjacent state after the input one, over a fixed time interval. We refer to this type of method as direct-mapping models. However, we question the validity of the direct-mapping models on two fronts. Firstly, models derived from this paradigm exhibit limited generalizability over time. Specifically, it is possible to apply models trained on short-term data for predictions on longer time intervals through rollout, but not vice versa. Secondly, a direct-mapping model cannot capture the continuous physical knowledge between discrete states.
To verify this, we train multiple EGNN models with different random seeds on a 3-body system with two adjacent states \(t_{0},t_{1}\). We extract the object positions from their hidden layers as estimates of the intermediate state \(t_{0.5}\), and perform one-step rollouts to predict the next state \(t_{2}\). We visualize all predicted trajectories (in dotted grey), the mean trajectory, and the mean and variance of the predicted states in the left part of Figure 1 (a). As shown in Figure 1 (a), the predictions of EGNN have a high variance at both the intermediate and rollout states, indicating that direct-mapping models are unable to learn a uniquely determined motion trajectory. This is a crucial reason why existing models struggle with long-term physical simulations. In this paper, to better model the latent motion trajectory under discrete supervision signals, we propose the **P**hysics-**I**nspired **N**eural **G**raph **O**rdinary Differential Equation, dubbed PINGO. Unlike direct-mapping models, which use GNNs to fit all kinematic states, PINGO introduces a Neural ODE framework built upon motion equations to update the position and velocity of the physical system. Theoretically, we prove the uniqueness of the learned latent trajectory of this framework and further provide an upper bound on the discrepancy between the learned and the actual latent trajectory. Figure 1 (a) also depicts an example of the predicted latent trajectory of PINGO, following the same training protocol as EGNN. PINGO achieves a significantly smaller variance on both the intermediate and rollout states, and it coincides with the true latent trajectory. Meanwhile, we employ a GNN model to parameterize the force relationships in the Neural ODE framework. Owing to the GNNs' expressive power, PINGO can adeptly address the force prediction issues for each object in the system. Figure 1 (b) shows the key difference between PINGO and direct-mapping methods. Moreover, we provide theoretical proof that PINGO exhibits equivariance properties identical to those of the input GNN model. This property offers the flexibility to adapt various GNN backbones in PINGO to suit different downstream tasks. We conduct extensive experiments on both synthetic and real-world physical systems. The results show that PINGO yields an order-of-magnitude improvement over the state-of-the-art baselines in both the direct prediction and rollout settings, especially for extreme long-term simulation.

Figure 1: **(a)** Comparison of the trajectory prediction of EGNN and PINGO. The green line is the true trajectory extracted from a 3-body system. The dotted grey line is the predicted trajectory from the same model with different random seeds. The red line is the average trajectory aggregated from the predicted trajectories. The green circles (\(t_{0},t_{1}\)) are the ground-truth states used for training. The red circle (\(t_{0.5}\)) is the predicted latent state from the average trajectory. The dark blue circle (\(t_{2}\)) is the predicted rollout state. The blue area is the variance of the predicted latent state and rollout state. **(b)** The overview of PINGO. \(\mathbf{q},\dot{\mathbf{q}}\) and \(\ddot{\mathbf{q}}\) represent the position, velocity, and acceleration (force), respectively.

## 2 Related Works

**GNNs for Modeling the Dynamics of N-body Systems.** Interaction Network (IN) [1] is a pioneer work that models objects and relations as graphs and applies multi-step rollouts to make predictions.
After that, several studies [21; 24; 27] leverage GNNs and their variants to explicitly reason about the dynamics behind object interactions. Recently, researchers have introduced physical symmetry into models of interacting objects in physical systems. For example, TFN [31] and SE(3) Transformer [9] employ spherical harmonics to construct models with 3D rotation equivariance in the Euclidean group for higher-order geometric representations. LieConv [7] and LieTransformer [18] leverage the Lie convolution to extend equivariance to Lie groups. In addition to these methods, which rely on irreducible representations of certain symmetry groups, recent studies [15; 28; 29] apply scalarization techniques to introduce equivariance into the message-passing process in GNNs. Furthermore, SEGNN [3] generalizes EGNN [28] and extends invariant scalars on nodes/edges to covariant vectors and tensors. Finally, EGHN [14] extends the framework of GMN [15] to design equivariant pooling and up-pooling modules for hierarchical modeling of large-scale dynamic systems, such as proteins. Nevertheless, these methods model dynamics in physical systems solely by learning direct mappings between discrete states and conduct long-term simulations via the rollout method.

**Physics-Inspired Neural Networks.** Infusing physical knowledge has been shown to improve the learning of neural networks for dynamical system modeling. Broadly speaking, energy conservation, symplectic structure, and ODEs are three common physical biases. Our work focuses on the ODE bias, which models derivatives of the state rather than the states directly. Representative models include Lagrangian Neural Networks (LNN) [8; 23], Hamiltonian Neural Networks (HNN) [12], and Neural ODE [4; 13]. Recent studies [2; 25; 26] have integrated Neural ODEs and GNNs to learn the motion of interactive particles. Graph Neural Ordinary Differential Equation [25] uses graph convolutional networks to parameterize the first- and second-order derivatives of system states. A concurrent work [2] directly uses actual accelerations to train GNNs. However, how they approximate the system trajectory is still an open problem. Motivated by physical laws, we theoretically justify how our approach can improve generalization ability in the time dimension and quantify the error introduced by the discrete ODE solver. Another research line [16; 17; 33; 35; 36] employs GNNs and first-order Neural ODEs to produce smooth trajectories of multi-agent systems. They validate these methods on complex irregularly sampled and partially observed systems in COVID-19 and social scenarios. In contrast, our work builds on the strong approximation ability of second-order Neural ODEs and equivariant GNNs to learn the dynamics of physical systems.

## 3 Preliminary

**N-body System.** We study N-body systems [15; 21] with a set of objects \(\mathcal{P}=\{P_{i}\}_{i=1}^{N}\). At time \(t\), the state of the system is represented by its geometric features \((\mathbf{q}^{(t)},\mathbf{\dot{q}}^{(t)})\), where \(\mathbf{q}^{(t)}\in\mathbb{R}^{N\times 3}\) and \(\mathbf{\dot{q}}^{(t)}\in\mathbb{R}^{N\times 3}\) are the position and velocity vectors of all objects, respectively. Additionally, objects in \(\mathcal{P}\) are associated with non-geometric attributes such as mass or charge, denoted by \(\mathbf{h}\in\mathbb{R}^{N\times d}\).
We use a graph \(\mathcal{G}=\{\mathcal{P},\mathcal{E}\}\) to represent the spatial connections in the system, where \(\mathcal{E}\) is an edge set constructed via a geometric distance cutoff or physical connectivity. The attributes of edge \(e_{ij}\in\mathcal{E}\) (e.g., object distances) are denoted by \(a_{ij}\). The system state at time \(t\) is abbreviated as \(\mathcal{S}^{(t)}=(\mathcal{G},\mathbf{q}^{(t)},\mathbf{\dot{q}}^{(t)},\mathbf{h})\). Since the predictions at different times share the same model, without ambiguity, we will omit the temporal superscript \(t\) for all variables for brevity when necessary.

**Physical Background.** According to Newton's second law, _"When a body is acted upon by a force, the time rate of change of its momentum equals the force"_. For physical systems that follow Newtonian mechanics, the motion can be described by the following general ordinary differential equation

\[\mathbf{\ddot{q}}^{(t)}=\frac{d^{2}\mathbf{q}^{(t)}}{dt^{2}}=f(t,\mathbf{q}^{(t)},\mathbf{\dot{q}}^{(t)},\mathbf{h}). \tag{1}\]

In this work, we focus on dynamical systems that can be formulated as:

\[\mathbf{\ddot{q}}^{(t)}=f(\mathbf{q}^{(t)},\mathbf{h}), \tag{2}\]

where \(\mathbf{\ddot{q}}^{(t)}\) is the acceleration at time \(t\). If we have a closed-form solution of \(f\), given the initial position \(\mathbf{q}^{(t_{0})}\) and velocity \(\mathbf{\dot{q}}^{(t_{0})}\) at time \(t_{0}\), the position at time \(t_{1}>t_{0}\) can be obtained by integrating the above differential equation:

\[\mathbf{q}^{(t_{1})}=\mathbf{q}^{(t_{0})}+\int_{t_{0}}^{t_{1}}(\dot{\mathbf{q}}^{(t_{0})}+\int_{t_{0}}^{t}\ddot{\mathbf{q}}^{(m)}\,dm)\,dt=\mathbf{q}^{(t_{0})}+\int_{t_{0}}^{t_{1}}g(\mathbf{q}^{(t)},\mathbf{h})\,dt. \tag{3}\]

Under the aforementioned assumptions, we have

\[\dot{\mathbf{q}}^{(t)}=g(\mathbf{q}^{(t)},\mathbf{h}). \tag{4}\]

But such a solution is not generally available, especially when the system is complex. Thus, current works are interested in learning a graph neural network to directly approximate \(\mathbf{q}^{(t_{1})}\) from observed system trajectories.

**GNN-based Direct-mapping Model.** Existing studies seek to use a GNN to directly approximate the above unavailable integration results with training pairs \(\big{(}\mathbf{q}^{(t_{0})},\mathbf{q}^{(t_{1})}\big{)}\). Namely, given the system state at time \(t_{0}\), a modern GNN simulator \(\psi_{\theta}\) with parameters \(\theta\) predicts \(\mathbf{q}^{(t_{1})}\) and updates node features via message passing. Specifically, each layer of \(\psi_{\theta}\) computes

\[\mathbf{q}^{\prime}_{i},\dot{\mathbf{q}}^{\prime}_{i},\mathbf{h}^{\prime}_{i}=\varphi(\mathbf{q}_{i},\dot{\mathbf{q}}_{i},\mathbf{h}_{i},\sum_{j\in\mathcal{N}_{i}}\mathbf{m_{ij}}),\qquad\mathbf{m_{ij}}=\phi(\mathbf{q}_{i},\mathbf{q}_{j},\dot{\mathbf{q}}_{i},\dot{\mathbf{q}}_{j},\mathbf{h}_{i},\mathbf{h}_{j},a_{ij}), \tag{5}\]

where \(\phi\) and \(\varphi\) are the edge message function and node update function, respectively, which typically are MLPs. \(\mathbf{q}_{i},\dot{\mathbf{q}}_{i}\in\mathbb{R}^{3}\) and \(\mathbf{h}_{i}\in\mathbb{R}^{d}\) are the features of particle \(P_{i}\). \(\mathbf{m_{ij}}\) defines the message between nodes \(i\) and \(j\), and \(\mathcal{N}_{i}\) collects the neighbors of node \(i\). The prediction is obtained by applying several iterations of message passing. Although the direct method precisely predicts the state of the object dynamics at the interval \(T=t_{1}-t_{0}\), its prediction at other intervals is notably inconsistent and suboptimal.
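To make Eq. (5) concrete, here is a minimal PyTorch sketch of one such message-passing layer. It is not any specific published architecture; the module and tensor names are our own illustrative choices, with \(\phi\) and \(\varphi\) realized as plain MLPs.

```python
import torch
import torch.nn as nn

class DirectMappingLayer(nn.Module):
    """One message-passing layer in the style of Eq. (5): phi builds edge
    messages m_ij, varphi updates each node's position, velocity, and features."""

    def __init__(self, node_dim: int, hidden_dim: int, edge_attr_dim: int = 1):
        super().__init__()
        in_msg = 2 * (3 + 3 + node_dim) + edge_attr_dim      # both endpoints + a_ij
        self.phi = nn.Sequential(nn.Linear(in_msg, hidden_dim), nn.SiLU(),
                                 nn.Linear(hidden_dim, hidden_dim))
        in_upd = 3 + 3 + node_dim + hidden_dim               # node state + aggregated messages
        self.varphi = nn.Sequential(nn.Linear(in_upd, hidden_dim), nn.SiLU(),
                                    nn.Linear(hidden_dim, 3 + 3 + node_dim))

    def forward(self, q, q_dot, h, edge_index, a):
        # q, q_dot: (N, 3); h: (N, node_dim); edge_index: (2, E); a: (E, edge_attr_dim)
        src, dst = edge_index                                # message from j (src) to i (dst)
        m = self.phi(torch.cat([q[dst], q[src], q_dot[dst], q_dot[src],
                                h[dst], h[src], a], dim=-1))
        agg = torch.zeros(q.size(0), m.size(-1),
                          dtype=m.dtype, device=m.device).index_add_(0, dst, m)
        out = self.varphi(torch.cat([q, q_dot, h, agg], dim=-1))
        return out[:, :3], out[:, 3:6], out[:, 6:]           # q'_i, q_dot'_i, h'_i
```

Stacking several such layers and reading off the final positions gives the direct-mapping prediction of \(\mathbf{q}^{(t_{1})}\) discussed above.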
**Problem Definition.** This study concentrates on the temporal generalization capacity of the model. Adopting established approaches, the model is trained to predict the subsequent position \(\mathbf{q}^{(t_{1})}\) as accurately as feasible, given the system state at time \(t_{0}\). Additionally, the model should be capable of (1) accurately predicting unobserved intermediate time points by further uniformly partitioning the time interval \(T=t_{1}-t_{0}\) into increments of timestep \(\Delta t\) (i.e., \(t_{0}<t_{0}+\Delta t<t_{0}+2\Delta t<\cdots<t_{1}\)); and (2) simulating rollout trajectories \(\mathbf{q}^{(t_{0}:t_{k})}=(\mathbf{q}^{(t_{0})},\ldots,\mathbf{q}^{(t_{k})})\), where \(t_{k}=t_{0}+k(t_{1}-t_{0}),k\in\mathbb{N}^{+}\).

## 4 Physics-Inspired Neural Graph ODE

In this section, we introduce how the proposed Physics-Inspired Neural Graph ODE (PINGO) works. For notation, we employ \(\mathbf{q}^{(t)}_{\theta},\dot{\mathbf{q}}^{(t)}_{\theta},\ddot{\mathbf{q}}^{(t)}_{\theta}\in\mathbb{R}^{N\times 3}\) to denote GNN approximations of the entire system, whereas \(\mathbf{q}^{(t)},\dot{\mathbf{q}}^{(t)},\ddot{\mathbf{q}}^{(t)}\in\mathbb{R}^{N\times 3}\) represent the actual trajectories in classical mechanics. Note that trajectories based on classical mechanics are continuous and typically governed by second-order derivatives, describing how objects change their position and velocity over time.

### Physics-Inspired ODE Framework

Leveraging insights from physics, we propose to parameterize \(f\) in Eq. 2 using GNNs:

\[\ddot{\mathbf{q}}^{(t)}_{\theta}=f_{\theta}(\mathbf{q}^{(t)}_{\theta},\mathbf{h}), \tag{6}\]

where \(f_{\theta}\) represents a GNN model. Given the initial position \(\mathbf{q}^{(t_{0})}_{\theta}=\mathbf{q}^{(t_{0})}\) and velocity \(\dot{\mathbf{q}}^{(t_{0})}_{\theta}=\dot{\mathbf{q}}^{(t_{0})}\), we should be able to compute the position \(\mathbf{q}^{(t_{1})}_{\theta}\) at time \(t_{1}\) using Eq. 3 with a perfectly learned \(f_{\theta}\). Taking \(t_{0}\) and \(t_{1}\) as the input and target timesteps, respectively, and in line with prior studies [3, 28], \(f_{\theta}\) is trained to minimize the discrepancy between the exact and approximated positions:

\[\mathcal{L}_{\text{train}}=\sum_{s\in\mathcal{D}_{\text{train}}}||\mathbf{q}^{(t_{1})}_{\theta,s}-\mathbf{q}^{(t_{1})}_{s}||^{2}, \tag{7}\]

where \(\mathcal{D}_{\text{train}}\) denotes the training set, and \(\mathbf{q}^{(t_{1})}_{\theta,s},\mathbf{q}^{(t_{1})}_{s}\) denote the GNN prediction and the actual trajectory \(s\), respectively. We can now examine the model's generalizability to other time steps (e.g., \(t_{0}+3\Delta t\)) given sufficient training. In an ideal scenario, where \(\mathbf{q}^{(t_{1})}_{\theta}\) is computationally feasible and the loss \(\mathcal{L}_{\text{train}}\) is minimized to zero, the following proposition holds:

**Proposition 4.1**.: _Given that \(f_{\theta}\) is continuous on \(t\in[t_{0},t_{1}]\) and in the absence of an external field, if \(\mathbf{q}_{\theta}^{(t_{0})}=\mathbf{q}^{(t_{0})}\), \(\dot{\mathbf{q}}_{\theta}^{(t_{0})}=\dot{\mathbf{q}}^{(t_{0})}\), and \(\mathbf{q}_{\theta}^{(t_{1})}=\mathbf{q}^{(t_{1})}\), then it follows that \(f_{\theta}(\mathbf{q}^{(t)},\mathbf{h})=f(\mathbf{q}^{(t)},\mathbf{h}),\forall t\)._

The detailed proof is given in the Appendix. While the proof draws upon ODE theory, the underlying intuition is straightforward.
Per the Picard–Lindelöf theorem, a unique trajectory exists that passes through \(\mathbf{q}^{(t_{1})}\) with initial conditions \(\mathbf{q}^{(t_{0})},\dot{\mathbf{q}}^{(t_{0})}\). For \(\mathbf{q}_{\theta}^{(t_{1})}\) to match \(\mathbf{q}^{(t_{1})}\), \(f_{\theta}(\mathbf{q}^{(t)},\mathbf{h})\) must equal \(f(\mathbf{q}^{(t)},\mathbf{h})\). If our model \(f_{\theta}\) accurately approximates \(\ddot{\mathbf{q}}^{(t)}\), the system trajectory is recovered. This proposition shows that the proposed framework can be trained across diverse timesteps and generalize to others, showcasing its empirical value and utility.

**Discretization of Integration in the Framework.** Although Proposition 4.1 is attractive, the integral of a graph neural network is intractable. A common solution is to adopt an ODE solver that produces a discrete trajectory approximating the continuous one. It is therefore necessary to study whether the model remains consistent with the design after applying ODE solvers, and how the error varies with the chosen timestep. In this work, we focus on a symplectic Euler integrator, which is computationally efficient and effective in modeling dynamical systems. Specifically, it computes

\[\dot{\mathbf{q}}_{\theta}^{(t_{1})}=\dot{\mathbf{q}}_{\theta}^{(t_{0})}+\sum_{k=0}^{\tau-1}\ddot{\mathbf{q}}_{\theta}^{(t_{0}+k\Delta t)}\Delta t,\qquad\mathbf{q}_{\theta}^{(t_{1})}=\mathbf{q}_{\theta}^{(t_{0})}+\sum_{k=1}^{\tau}\dot{\mathbf{q}}_{\theta}^{(t_{0}+k\Delta t)}\Delta t, \tag{8}\]

where \(t_{0}+\tau\Delta t=t_{1}\). Figure 2 illustrates the entire computation process. During each iteration, PINGO first computes the acceleration using the GNN and then updates the velocity and position in order. Based on our update strategy, an estimate of \(g\) is defined as

\[\dot{\mathbf{q}}_{\theta}^{(t)}=g_{\theta}(\mathbf{q}_{\theta}^{(t)},\mathbf{h}). \tag{9}\]

The error analysis is provided in the next section.

### A Bounded Approach to Real Trajectories

Since \(\mathbf{q}_{\theta}^{(t)}\) is computed through discrete numerical integration, optimizing Eq. 7 will not yield a neural network that can perfectly reconstruct the true \(\mathbf{q}^{(t)}\), even if the loss is zero. Existing theoretical findings [37] illustrate that training using discrete integration trajectories results in a bounded approximation of their first-order derivative, where the bound is linked to the chosen timestep. Given \(\mathcal{B}(\mathbf{q},r)\subset\mathbb{C}^{3}\), the complex ball of radius \(r>0\) centered at \(\mathbf{q}\in\mathbb{C}^{3}\), and time interval \(T=t_{1}-t_{0}\), the following holds:

**Theorem 4.2**.: _Let \(\mathbf{q}_{1},\mathbf{q}_{2}\in\mathbb{R}^{3}\) and \(r_{1},r_{2}>0\), and let the ODE solver be the Euler solver, iterating \(\tau\) times with timestep \(\Delta t\). Suppose that the target \(f\) and the learned \(f_{\theta}\) are analytic and bounded by \(m_{2}\) on \(\mathcal{B}(\mathbf{q}_{2},r_{2})\), and the target \(g\) and the learned \(g_{\theta}\) are analytic and bounded by \(m_{1}\) on \(\mathcal{B}(\mathbf{q}_{1},r_{1})\). Then, there exists a constant \(T_{0}\) such that, if \(0<T<T_{0}\),_

\[||f_{\theta}(\mathbf{q},\mathbf{h})-f(\mathbf{q},\mathbf{h})||_{\infty}\leq O(\Delta t+\frac{\mathcal{L}}{\Delta t}),\quad\forall\mathbf{q}\in\mathbb{R}^{3},\]

_where \(\mathcal{L}=\frac{1}{T}||\mathbf{q}_{\theta}^{(t_{1})}-\mathbf{q}^{(t_{1})}||_{\infty}\) represents the prediction error in the \(l_{\infty}\)-norm._

Figure 2: PINGO schematic.
The forward step (left box) first uses the GNN \(f_{\theta}\) (we omit the \(\theta\) subscript in the figure for brevity) to compute the second-order derivatives of the particles and then updates their velocities and positions in order. The right part is the trajectory obtained by iteratively repeating the forward process in a 3-body system.

The proof is reported in the Appendix. Based on Theorem 4.2, we analyze the error introduced by the Euler solver. We use two metrics common in classical numerical analysis, namely the local and global truncation errors. The local truncation error \(\epsilon_{t+\Delta t}\) of PINGO is defined as follows:

\[\epsilon_{t+\Delta t}=||\mathbf{q}^{(t+\Delta t)}-\mathbf{q}^{(t)}-\dot{\mathbf{q}}^{(t)}\Delta t-f_{\theta}(\mathbf{q}^{(t)},\mathbf{h})\Delta t^{2}||_{2}, \tag{10}\]

which represents the error accumulated in a single step. The global truncation error \(\mathcal{E}_{t+k\Delta t}\) is defined as follows:

\[\mathcal{E}_{t+k\Delta t}=||\mathbf{q}^{(t+k\Delta t)}-\mathbf{q}_{\theta}^{(t+k\Delta t)}||_{2}, \tag{11}\]

which denotes the error accumulated over the first \(k\) steps. Then we have

**Corollary 4.3**.: _Given the same conditions as in Theorem 4.2, if the training loss \(\mathcal{L}_{\text{train}}\) is adequately minimized and \(g_{\theta}\) satisfies the Lipschitz condition, the local truncation error \(\epsilon_{t+\Delta t}\) and the global truncation error \(\mathcal{E}_{t+k\Delta t}\) are \(O(\Delta t^{2})\) and \(O(\Delta t)\), respectively._

The proof is reported in the Appendix. Corollary 4.3 shows that the error of the prediction at the \(\tau\)-th iteration depends on the chosen \(\Delta t\). These statements imply that PINGO can be trained by minimizing Eq. 7 and generalize to other timesteps.

### Equivariance of PINGO

Given particle positions at time \(t\), computing the resultant force on each body is the key to modeling its dynamic changes. In general, PINGO is flexible and can incorporate various GNNs as the backbone, among which equivariant graph neural networks have shown effectiveness in preserving the symmetry of physical systems. For example, if we rotate the input system, the model output will be rotated to the same degree as well. There exist multiple symmetry groups for 3-dimensional systems, such as SO(3) (i.e., rotational equivariance) and SE(3) (i.e., rotational and translational equivariance). We can prove that PINGO does not break the equivariance property of the backbone GNN.

**Proposition 4.4**.: _Suppose the backbone GNN \(f_{\theta}\) of PINGO is equivariant to group \(\mathcal{G}\); then the trajectory \(\mathbf{q}_{\theta}\) is equivariant to group \(\mathcal{G}\)._

The proof and a detailed discussion of equivariance are provided in the Appendix. Intuitively, the integration step in Eq. 8 only involves linear combinations of equivariant terms; thus PINGO preserves the same equivariance property as the GNN used. Specifically, we choose the widely used EGNN [28] as an example backbone. Assuming the acceleration depends on positions, the message passing is defined by

\[\mathbf{m_{ij}}=\phi_{e}\left(||\mathbf{q}_{\theta,i}-\mathbf{q}_{\theta,j}||^{2},\mathbf{h}_{i},\mathbf{h}_{j},a_{ij}\right),\quad\ddot{\mathbf{q}}_{\theta,i}=\frac{1}{N-1}\sum_{j\in\mathcal{N}_{i}}(\mathbf{q}_{\theta,i}-\mathbf{q}_{\theta,j})\phi_{q}(\mathbf{m}_{ij}). \tag{12}\]

Here \(\phi_{e}\) and \(\phi_{q}\) denote Multi-Layer Perceptrons (MLPs), where the output of \(\phi_{q}\) is a scalar. The non-geometric features are updated via skip connections.
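Putting Eq. (8) and a backbone like Eq. (12) together, the PINGO forward pass is a short loop. The sketch below is our own illustrative reconstruction in PyTorch, not the authors' released code; the backbone GNN is treated as a black box returning per-node accelerations.

```python
import torch
import torch.nn as nn

class PINGO(nn.Module):
    """Symplectic Euler rollout over one interval T = tau * dt (Eq. 8)."""

    def __init__(self, gnn: nn.Module, tau: int, dt: float):
        super().__init__()
        self.gnn = gnn      # f_theta: maps (q, h, edge_index) -> accelerations (N, 3)
        self.tau, self.dt = tau, dt

    def forward(self, q, q_dot, h, edge_index):
        latent = [q]
        for _ in range(self.tau):
            q_ddot = self.gnn(q, h, edge_index)   # predict forces/accelerations
            q_dot = q_dot + q_ddot * self.dt      # symplectic Euler: velocity first,
            q = q + q_dot * self.dt               # then position with the updated velocity
            latent.append(q)                      # intermediate states, e.g. t_0 + k*dt
        return q, q_dot, latent

# Training minimizes Eq. (7) on the endpoint only, e.g.:
#   q_pred, _, _ = model(q_t0, q_dot_t0, h, edge_index)
#   loss = ((q_pred - q_t1) ** 2).sum()
```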
Analogous to neural ODE methods, the model parameters are shared among all iterations.

## 5 Experiments

### N-body system

We build upon the experimental setting introduced in [28], where the task is to estimate all particle positions after a fixed timestep. We consider two types of N-body systems, charged and gravity particles, which are driven by electromagnetic [21] and gravitational forces [3] between every pair of particles, respectively. Each system consists of 5 particles that have an initial position, a velocity, and attributes like positive/negative charge or mass. We sample 3000 trajectories for training, 2000 for validation, and 2000 for testing.

**Implementation details.** We compare our method with GNN and equivariant methods: Radial Field [22], TFN [31], SE(3) Transformer [9], EGNN [28], GMN [15], and SEGNN [3]. In addition, we also compare to Graph Neural Ordinary Differential Equation (GDE) [25]. Unless specified otherwise, we use EGNN as the backbone of PINGO. The number of layers is tuned for each method. In addition to the default setting of 1000 time steps (1000 ts), we add two further settings (1500 ts and 2000 ts) to evaluate the performance of long-term prediction. Meanwhile, we report the average forward time in seconds for 100 samples for each method. From Table 1 we can observe that:

* It is evident that PINGO, equipped solely with EGNN, outperforms all baselines across all datasets and settings. Notably, compared to the best baseline, SEGNN, the average error improvement on the Charged and Gravity datasets is \(0.254\) and \(0.504\), respectively, indicating significant improvement.
* As the time step increases, PINGO's performance improvement becomes more pronounced. Compared to the best baseline, SEGNN, the average error improvement increases from \(0.074\) at 1000 time steps to \(0.660\) at 2000 time steps, demonstrating the efficacy of PINGO in handling long-term prediction scenarios.
* As expected, PINGO's forward time (\(0.0277\) s) is slower than that of EGNN (\(0.0126\) s) due to its additional integration operations. Nevertheless, PINGO's forward time remains competitive compared to the best baseline (\(0.0315\) s), indicating its efficiency.

**Generalization capability from long-term to short-term.** It is interesting to see how PINGO can generalize from long-term training to short-term testing. Accordingly, we train models at 1000 ts on the two datasets and test on shorter time steps by running PINGO for proportionally fewer iterations \(\tau\). For the baselines, we extract the object position information from their hidden layers as predictions of the intermediate steps. Table 2 reports the mean and standard deviation for each setting. From Table 2 we can observe that:

* Clearly, PINGO outperforms all other baselines across all settings by a large margin.
Notably, when there is a lack of supervised signals at 250/500/750 ts, the performance of all other baselines decreases significantly.

\begin{table} \begin{tabular}{c c c c c c c|c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Charged**} & \multicolumn{3}{c|}{**Gravity**} & \multirow{2}{*}{**Time [s]**} \\ & 1000 ts & 1500 ts & 2000 ts & 1000 ts & 1500 ts & 2000 ts \\ \hline **Linear** & 6.830\(\pm\)0.016 & 20.012\(\pm\)0.029 & 39.513\(\pm\)0.061 & 7.928\(\pm\)0.001 & 29.270\(\pm\)0.003 & 58.521\(\pm\)0.003 & 0.0002 \\ **GNN** & 1.077\(\pm\)0.004 & 5.059\(\pm\)0.250 & 10.591\(\pm\)0.352 & 1.400\(\pm\)0.071 & 4.691\(\pm\)0.238 & 10.508\(\pm\)0.432 & 0.0064 \\ **GDE** & 1.285\(\pm\)0.004 & 4.026\(\pm\)0.164 & 8.708\(\pm\)0.145 & 1.412\(\pm\)0.005 & 2.793\(\pm\)0.083 & 6.291\(\pm\)0.153 & 0.0088 \\ **TFN** & 1.544\(\pm\)0.231 & 11.116\(\pm\)1.285 & 23.823\(\pm\)0.383 & 3.536\(\pm\)0.067 & 37.705\(\pm\)0.298 & 73.472\(\pm\)0.661 & 0.0440 \\ **SE(3)-Tr.** & 2.483\(\pm\)0.009 & 18.891\(\pm\)0.237 & 36.730\(\pm\)0.381 & 4.401\(\pm\)0.005 & 52.134\(\pm\)0.008 & 98.243\(\pm\)0.647 & 0.2661 \\ **Radial Field** & 1.060\(\pm\)0.007 & 12.514\(\pm\)0.009 & 26.388\(\pm\)0.331 & 1.860\(\pm\)0.075 & 7.021\(\pm\)0.150 & 16.474\(\pm\)0.003 & 0.0052 \\ **EGNN** & 0.711\(\pm\)0.029 & 2.998\(\pm\)0.089 & 6.836\(\pm\)0.003 & 0.766\(\pm\)0.011 & 3.661\(\pm\)0.055 & 9.039\(\pm\)0.216 & 0.0126 \\ **GMN** & 0.824\(\pm\)0.002 & 3.436\(\pm\)0.156 & 7.409\(\pm\)0.214 & 0.620\(\pm\)0.043 & 2.801\(\pm\)0.194 & 6.756\(\pm\)0.427 & 0.0137 \\ **SEGNN** & 0.448\(\pm\)0.003 & 2.573\(\pm\)0.053 & 5.972\(\pm\)0.168 & 0.471\(\pm\)0.026 & 2.110\(\pm\)0.004 & 5.819\(\pm\)0.335 & 0.0315 \\ \hline **PINGO** & **0.433\(\pm\)0.013** & **2.183\(\pm\)0.048** & **5.614\(\pm\)0.128** & **0.338\(\pm\)0.027** & **1.693\(\pm\)0.217** & **4.857\(\pm\)0.580** & 0.0277 \\ \hline \hline \end{tabular} \end{table} Table 1: Mean squared error (\(\times 10^{-2}\)) on the N-body system, and forward time in seconds for a batch of 100 samples on a Tesla T4 GPU. The header of each column is the time step for training and prediction. Bold font indicates the best result and underline the strongest baseline. Results are averaged across 5 runs. We report both the mean and standard deviation in the table.
\begin{table} \begin{tabular}{c c c c c|c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Charged**} & \multicolumn{3}{c}{**Gravity**} \\ & 250 ts & 500 ts & 750 ts & 1000 ts & Avg & \(250\) ts & \(500\) ts & \(750\) ts & \(1000\) ts & Avg \\ \hline **GNN** & 73.40\(\pm\)0.400 & 31.79\(\pm\)2.28 & 12.862\(\pm\)0.008 & 29.72 & 181.92\(\pm\)0.21 & 90.33\(\pm\)1.93 & 30.66\(\pm\)1.13 & 0.746\(\pm\)0.008 & 75.93 \\ **GDE** & 92.65\(\pm\)2.50 & 43.94\(\pm\)1.09 & 12.20\(\pm\)0.212 & 0.652\(\pm\)0.008 & 37.36 & 136.05\(\pm\)0.158 & 56.80\(\pm\)0.162 & 12.12\(\pm\)1.048 & 5.588\(\pm\)0.15 & 51.39 \\ **EGNN** & 6.756\(\pm\)2.35 & 3.816\(\pm\)4.48 & 3.668\(\pm\)0.34 & 0.568\(\pm\)0.009 & 3.702 & 7.146\(\pm\)0.786 & 29.70\(\pm\)0.24 & 9.712\(\pm\)0.560 & 0.382\(\pm\)0.11 & 11.89 \\ **GMN** & 10.44\(\pm\)2.74 & 10.92\(\pm\)4.47 & 4.518\(\pm\)1.36 & 0.512\(\pm\)0.166 & 6.598 & 7.430\(\pm\)0.197 & 5.402\(\pm\)0.157 & 5.730\(\pm\)0.004 & 0.349\(\pm\)0.46 & 5.762 \\ **SEGNN** & 21.78\(\pm\)0.000 & 25.74\(\pm\)1.36 & 3.14\(\pm\)0.34 & 0.342\(\pm\)0.24 & 27.25 & 10.58\(\pm\)0.48 & 49.63\(\pm\)0.27 & 2.852\(\pm\)0.70 & 4.484\(\pm\)0.002 & 21.62 \\ **PINGO** & 0.188\(\pm\)0.03 & 0.312\(\pm\)0.008 & 0.360\(\pm\)0.046 & 0.309\(\pm\)0.11 & 0.292 & 0.064\(\pm\)0.002 & 0.128\(\pm\)0.005 & 0.176\(\pm\)0.004 & 0.210\(\pm\)0.007 & 0.145 \\ \hline \hline \end{tabular} \end{table} Table 2: The generalization from long-term to short-term. All models are trained on 1000 ts and tested on 250/500/750/1000 ts. Mean squared error (\(\times 10^{-2}\)) and the standard deviation are reported. Results are averaged across 5 runs.

* By contrast, PINGO achieves similar results as at 1000 ts, demonstrating its robust generalization to short-term prediction.
* Another interesting point is that PINGO's error exhibits a distinct trend compared to the other baselines. While the errors of the other baselines increase significantly with decreasing time steps, PINGO achieves even smaller errors at shorter time steps. This observation supports our theoretical result that the error is bounded by the chosen timestep.
* Additionally, the standard deviation of PINGO is much smaller than that of the other baselines, indicating the numerical stability of PINGO. This result further confirms our theoretical finding that PINGO can obtain the unique latent trajectory between two discrete states.

**Generalization capability for the long-term simulation.** In this part, we evaluate the generalizability of the models for extreme long-term simulation. Specifically, we train all models on 1000 ts and use rollout to make predictions for longer time steps (over 40 rollout steps, i.e., over 40000 ts). Figure 3 depicts the mean squared error of all methods on the two datasets. As shown in Figure 3, all baselines experience numerical explosion due to error accumulation during the rollout process, leading to a quick drop in prediction performance. In contrast, PINGO demonstrates an order-of-magnitude error improvement over the other baselines for extreme long-term simulation. This numerical stability can be attributed to the Neural ODE framework for modeling position and velocity. Nevertheless, we also noticed an increase in errors during the rollout process in PINGO. We conjecture that this error may be introduced by incorrect force estimation by the GNN. Therefore, a more powerful GNN with better force estimation ability could further enhance the performance of PINGO.
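The rollout procedure used here is simply the repeated application of the one-interval model to its own outputs. A minimal sketch, assuming the PINGO module from the earlier snippet:

```python
import torch

@torch.no_grad()
def rollout(model, q, q_dot, h, edge_index, num_rollouts=40):
    """Chain one-interval predictions (e.g., 1000 ts each) into a long trajectory."""
    positions = [q]
    for _ in range(num_rollouts):
        q, q_dot, _ = model(q, q_dot, h, edge_index)  # feed predictions back in
        positions.append(q)
    return torch.stack(positions)                     # (num_rollouts + 1, N, 3)
```

Because both position and velocity are propagated explicitly, errors enter only through the learned force estimates, which is consistent with the conjecture above.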
**Compared with numerical methods.** To verify the effectiveness of PINGO, we further compare it with symplectic Euler solvers that use the closed-form solution. The results for the gravity system are illustrated in Figure 4, and we provide results for charged systems in the Appendix. Euler-\(\Delta t\) denotes results using \(\Delta t\) as the forward timestep. From the figure, we can find that under the same timestep (i.e., Euler-125), PINGO achieves better long-term performance. According to Corollary 4.3, PINGO is at least a first-order method if the prediction error is adequately minimized. Since the timestep of PINGO is set to 125, under appropriate conditions PINGO will at least achieve performance comparable to Euler-125.

Figure 4: Mean squared error for PINGO and the symplectic Euler solver on the gravity system, calculated between them and the ground truth (i.e., Euler-1).

Figure 3: Mean squared error for the long-term simulation by rollout. All models are trained on 1000 ts.

**Ablation study.** We conduct two ablation studies on PINGO. (1) To validate the effectiveness of second-order modeling, we compare PINGO with its first-order variant PINGO\({}^{\text{1st}}\), where we use the same backbone and experimental settings to learn the first-order derivatives. The results are shown in Figure 6: we can observe that PINGO consistently enhances performance across all scenarios, with particularly notable improvements in long-term simulations. This validates the efficacy of incorporating second-order derivatives in modeling physical systems, emphasizing the significant advantages of integrating physical knowledge into the learning of dynamical systems. (2) We then study the effect of the chosen timestep by increasing the number of iterations \(\tau\). Since the target timestep is fixed, a larger iteration count indicates a smaller timestep. The results are displayed in Figure 5. It is obvious that better results can be achieved by choosing a small timestep. Additionally, the performance does not increase further after a sufficient number of iterations, which is around 10 in both datasets. According to Theorem 4.2, these errors are mainly attributed to the learning loss, which is related to the representational ability of GNNs.

### CMU Motion Capture

For real-world applications, we evaluate our model on the CMU Motion Capture Database [5], which contains various trajectories of human motions. Our main focus lies in the walking motion of a single object (subject #35) [21]. We adopt a random split strategy introduced by [15], where the train/validation/test data contain 200/600/600 frame pairs. For a comprehensive evaluation of long-term performance, we broaden our assessment scope to include scenarios with intervals of 40 ts and 50 ts, in addition to the default setting of 30 ts.

**Implementation details.** In this task, we use GMN as the backbone of PINGO. The norm of the velocity and the coordinate along the gravity axis (z-axis) are set as node features to represent the motion dynamics. Since the human body operates through joint interactions, we augment the edges with 2-hop neighbors. These operations are implemented across all applicable baselines. Following [15], 6 key bones are selected as sticks and the rest are isolated objects for the GMN configurations.

**Results.** Table 3 reports the performance of PINGO and various compared models. It is evident that PINGO outperforms all baseline models by a significant margin across all scenarios.
Notably, the improvements are more pronounced in long-term simulations, with PINGO achieving an 18.619 \(\times\) 10\({}^{-2}\) lower MSE than the runner-up model GMN. To gain further insight into the superior performance of PINGO, we illustrate the predicted motion of GMN and PINGO in Figure 7. Interestingly, it can be observed that the predictions of GMN appear to lag behind the ground truths, while PINGO demonstrates a closer match. This discrepancy may be attributed to the lack of constraints imposed when modeling the rollout trajectories. We provide more visualizations in the Appendix.

\begin{table} \begin{tabular}{c c c c c c c|c} \hline \hline **Model** & **TFN** & **SE(3)-Tr.** & **RF** & **EGNN** & **GMN** & **PINGO** & **Abs. Imp.** \\ \hline \(30\) ts & 24.932\(\pm\)1.023 & 24.655\(\pm\)0.870 & 149.459\(\pm\)0.750 & 24.013\(\pm\)0.462 & 16.005\(\pm\)0.386 & **14.462\(\pm\)1.06** & 1.543 \\ \(40\) ts & 49.976\(\pm\)1.664 & 44.279\(\pm\)0.355 & 306.311\(\pm\)1.100 & 39.792\(\pm\)1.129 & 38.193\(\pm\)0.067 & **22.229\(\pm\)1.49** & 15.964 \\ \(50\) ts & 73.716\(\pm\)4.343 & 68.796\(\pm\)1.139 & 549.476\(\pm\)3.461 & 50.930\(\pm\)2.675 & 47.883\(\pm\)0.599 & **29.264\(\pm\)0.946** & 18.619 \\ \hline \hline \end{tabular} \end{table} Table 3: Mean squared error (\(\times 10^{-2}\)) on the CMU motion capture dataset. Results are averaged across 5 runs.

Figure 7: Visualization of Motion Capture with 50 time steps. Left to right: initial position, GMN, PINGO (all in blue). Ground truths are in red.

## 6 Conclusions

In this work, we highlight the problem of long-term simulation of N-body systems and introduce PINGO, a flexible neural graph ODE framework that is capable of learning physical dynamics from observed trajectories. Its core idea is to learn the latent physical dynamics and employ numerical methods to infer system states. Theoretical findings show that our model can generalize to other timesteps via the same training criteria as existing studies. We demonstrate the potential of PINGO by applying it to a wide range of physical systems. PINGO outperforms all competitors in all cases. Extensive ablation studies have further verified the generalization ability of PINGO and the effectiveness of its physical design.
2310.16647
Achieving Constraints in Neural Networks: A Stochastic Augmented Lagrangian Approach
Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting. Fixed penalty methods, though common, lack adaptability and suffer from hyperparameter sensitivity. In this paper, we propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem, where the data fidelity term is the minimization objective and the regularization terms serve as constraints. Then, we employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism. Our approach extends beyond black-box regularization, demonstrating significant improvements in white-box models, where weights are often subject to hard constraints to ensure interpretability. Experimental results on image-based classification on the MNIST, CIFAR10, and CIFAR100 datasets validate the effectiveness of our approach. SAL consistently achieves higher Accuracy while also achieving better constraint satisfaction, thus showcasing its potential for optimizing DNNs under constrained settings.
Diogo Lavado, Cláudia Soares, Alessandra Micheletti
2023-10-25T13:55:35Z
http://arxiv.org/abs/2310.16647v1
# Achieving Constraints in Neural Networks: A Stochastic Augmented Lagrangian Approach

###### Abstract

Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting. Fixed penalty methods, though common, lack adaptability and suffer from hyperparameter sensitivity. In this paper, we propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem, where the data fidelity term is the minimization objective and the regularization terms serve as constraints. Then, we employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism. Our approach extends beyond black-box regularization, demonstrating significant improvements in white-box models, where weights are often subject to hard constraints to ensure interpretability. Experimental results on image-based classification on the MNIST, CIFAR10, and CIFAR100 datasets validate the effectiveness of our approach. SAL consistently achieves higher Accuracy while also achieving better constraint satisfaction, thus showcasing its potential for optimizing DNNs under constrained settings.

## 1 Introduction

Deep Neural Networks (DNNs) have shown remarkable success in diverse applications. However, overfitting remains a challenge, which calls for effective regularization techniques. Fixed penalty (FP) methods, like \(L_{1}\) or \(L_{2}\) regularization, are commonly used but lack adaptability: different layers in a DNN may require different amounts of regularization, yet FP methods induce the same amount of regularization in every layer. In addition, the effectiveness of fixed penalties often depends on hyperparameter tuning, specifically of the regularization coefficients, which is not only challenging and sensitive to the dataset and network architecture but also makes it difficult to explore various parameters due to the time-consuming nature of training DNNs.

The Augmented Lagrangian method (ALM) is an optimization technique designed to solve constrained optimization problems. By viewing DNN training as a constrained optimization problem, we can leverage the Augmented Lagrangian method to enforce regularization as constraints. By iteratively updating the Lagrange multipliers and the penalty parameters, the Augmented Lagrangian method dynamically adapts regularization strengths during the training process. This adaptability allows for a better balance between preventing overfitting and retaining model performance. Adopting ALM to train neural networks under constraints has been widespread: [1] showcased the power of ALM by demonstrating state-of-the-art performance on three NLP benchmarks using a constrained formulation; [2] harnessed ALM to address class-imbalanced binary classification, while [3; 4] applied it to optimal power flow prediction problems and energy domains. For problems involving partial differential equations (PDEs), [5] introduced an ALM approach to enforce physical conservation laws of kinetic PDEs on neural networks. Additionally, [6] proposed an ALM strategy to train Physics-Informed Neural Networks (PINNs) by deriving a novel sequence of loss functions with adaptively balanced loss components.

The Alternating Direction Method of Multipliers (ADMM), introduced and popularized by the work of [7], has emerged as a powerful tool for statistics and machine learning challenges involving numerous features or training examples.
ADMM adopts a decomposition-alternating approach, where solutions to smaller local subproblems are harmonized to derive a solution to the larger global problem. By combining the benefits of dual decomposition and augmented Lagrangian methods, ADMM excels in handling combinatorial constraints and supports efficient parallelization. Various applications use ADMM to train DNNs under constraints; for instance, [8; 9] developed ADMM-based frameworks against adversarial attacks, while [10] used ADMM for weight pruning in DNNs.

In the context of white-box models, constrained optimization introduces a delicate balance between interpretability and performance. Unlike black-box DNNs, white-box models often involve hard constraints, where parameters hold meaningful interpretations within specific feasible sets. Fixed penalty methods prove inadequate for upholding such hard constraints due to their lack of adaptability and their fixed balance between data fidelity and penalties. By formulating DNN training as a constrained optimization problem, SAL and ADMM effectively handle complex hard constraints. The introduction of Lagrange multipliers in SAL and the decomposition-alternating approach in ADMM ensure the enforcement of hard constraints while maintaining model performance.

The contributions of this work are as follows: (1) a novel approach to DNN regularization that formulates the training process as a constrained optimization problem; by leveraging the SAL method, we achieve more flexible and efficient regularization. (2) Our method improves white-box models' performance, ensuring interpretability while effectively handling hard constraints; experimental results on various datasets validate its effectiveness. (3) We demonstrate the applicability of ADMM for DNN training under constraints. This work presents a comprehensive and versatile regularization framework, paving the way for insights into constrained optimization and interpretable DNNs.

## 2 Theoretical Background

### Augmented Lagrangian Method

Consider a generic optimization problem for an objective function \(F:\Theta\rightarrow\mathbb{R}\) subject to constraints \(C(\theta)=[c_{1}(\theta),\ldots,c_{m}(\theta)]\):

\[\arg\min_{\theta\in\Theta}F(\theta);\quad\text{s.t. }C(\theta)=0. \tag{1}\]

The augmented Lagrangian method (ALM) [11] relaxes the problem in (1) into an unconstrained optimization problem. Specifically, it harmonizes two earlier methods, the quadratic penalty method and the method of Lagrange multipliers, which suffer from training instability and non-convergence due to the difficulty of convexifying loss functions. ALM augments the Lagrangian function with a quadratic term that helps drive the solution toward the constraint, thus forming the augmented Lagrangian function:

\[L_{\rho}(\theta,\lambda)=F(\theta)+\big{\langle}\lambda,C(\theta)\big{\rangle}+\frac{\rho}{2}\|C(\theta)\|_{2}^{2}, \tag{2}\]

where \(\lambda\) is the Lagrange multiplier vector and \(\rho\) is the penalty coefficient that controls the trade-off between the objective function and the constraint violation. As with the Fixed Penalty (FP) method, if \(\rho\) is chosen to be too large, the optimization problem can become stiff and convergence will be very slow; if chosen to be too small, the solution will deviate from the feasible space enforced by the constraints.
The augmented Lagrangian method solves problem (1) by executing the following recursion in \(k\):

\[\min_{\theta\in\Omega}L_{\rho}(\theta,\lambda^{k}), \tag{3}\]

which can be translated into the following steps:

\[\theta^{k+1} \leftarrow\arg\min_{\theta\in\Omega}L_{\rho}(\theta,\lambda^{k}), \tag{4}\]
\[\lambda^{k+1} \leftarrow\lambda^{k}+\rho C(\theta^{k+1}). \tag{5}\]

The optimization step in (4) computes the minimizer of the augmented Lagrangian cost function with respect to \(\theta\), and the update step in (5) updates the Lagrange multipliers based on the error in the constraints. The convergence of ALM does not depend on the choice of \(\rho\). A large \(\rho\) leads to faster convergence in terms of the number of iterations needed; however, each iteration becomes more difficult to compute because the optimization step (4) becomes more ill-conditioned. Thus, it becomes crucial to find a penalty that balances the objectives of fast convergence and well-conditioned minimization.

```
1: Initialize \(\theta^{0}\), \(\lambda^{0}\leftarrow 0\)
2: for \(k=0,1,\dots,K-1\) do
3:   \(\theta^{k+1}\leftarrow\arg\min_{\theta}L_{\rho}(\theta,\lambda^{k})\)
4:   \(\lambda^{k+1}\leftarrow\lambda^{k}+\rho C(\theta^{k+1})\)
5: end for
```
**Algorithm 1** Deterministic Augmented Lagrangian Method

### Alternating Direction Method of Multipliers

The Alternating Direction Method of Multipliers (ADMM) [11; 7] is particularly effective for solving problems that can be decomposed into smaller, simpler subproblems, such as those arising in machine learning and signal processing. ADMM splits the primal variables into two sets, one associated with the primal function \(F\) and the other with the constraints. Thus, we cast the problem in (1) into a new problem via variable splitting:

\[\arg\min_{\theta\in\Omega,\mu\in\Omega}F(\theta)+C(\mu)\quad\text{s.t.}\quad\theta=\mu. \tag{6}\]

The augmented Lagrangian function for (6) is now defined as:

\[L_{\rho}(\theta,\mu,\lambda)=F(\theta)+C(\mu)+\left\langle\lambda,(\theta-\mu)\right\rangle+\frac{\rho}{2}\|\theta-\mu\|_{2}^{2}=F(\theta)+C(\mu)+\frac{\rho}{2}\|\theta-\mu+u\|_{2}^{2}, \tag{7}\]

where \(u=\lambda/\rho\) is the scaled Lagrange multiplier vector. To determine \(u\), we follow the strategy of the augmented Lagrangian algorithm with the following iteration:

\[(\theta^{k+1},\mu^{k+1}) \leftarrow\arg\min_{(\theta,\mu)}L_{\rho}(\theta,\mu,\lambda^{k}), \tag{8}\]
\[\lambda^{k+1} \leftarrow\lambda^{k}+\rho(\theta^{k+1}-\mu^{k+1}). \tag{9}\]

Then, the method of alternating minimization yields the following equivalent algorithm:

\[\theta^{k+1} \leftarrow\arg\min_{\theta}F(\theta)+\frac{\rho}{2}\|\theta-\mu^{k}+u^{k}\|_{2}^{2}, \tag{10}\]
\[\mu^{k+1} \leftarrow\arg\min_{\mu}C(\mu)+\frac{\rho}{2}\|\theta^{k+1}-\mu+u^{k}\|_{2}^{2}, \tag{11}\]
\[u^{k+1} \leftarrow u^{k}+(\theta^{k+1}-\mu^{k+1}). \tag{12}\]

Since alternating minimization can be seen as a form of coordinate descent [12], this algorithm converges to the desired solution of equation (6).

## 3 Related Work

### Fixed Penalty Method

Fixed penalty (FP) methods attempt to convert the original optimization problem in (1) into an unconstrained optimization problem by augmenting the loss function with a penalty term to handle constraints:

\[\min_{\theta\in\Omega}F(\theta)+\rho\|C(\theta)\|_{2}^{2}, \tag{13}\]

where \(F\) denotes the loss function, \(\theta\) represents the trainable parameters of the model, and \(\rho>0\) serves as the penalty parameter, balancing the trade-off between data fidelity and constraint enforcement during the training process.
However, the effectiveness of this method relies heavily on manual fine-tuning of the penalty parameter \(\rho\), as an inappropriate value can lead to an unstable optimization process. Moreover, the FP approach lacks adaptability, since it induces the same amount of regularization across all layers of the DNN, regardless of their specific requirements. This lack of adaptability can hinder its performance on complex tasks with varying regularization needs. Additionally, the FP method does not guarantee constraint satisfaction below a certain threshold of interest, \(\|C(\theta)\|_{2}^{2}\leq\epsilon\). This limitation further motivates the search for more robust and efficient regularization methods for training DNNs.

### Stochastic Augmented Lagrangian (SAL)

The Stochastic Augmented Lagrangian (SAL) applies the Augmented Lagrangian Method (ALM) to train neural networks and has garnered attention for its effectiveness in handling constrained optimization problems. [1] demonstrated the power of SAL in achieving state-of-the-art performance in three Natural Language Processing (NLP) benchmarks, while [2] utilized SAL to address class-imbalanced binary classification, improving performance on imbalanced datasets. In energy domains, [3; 4] successfully applied SAL to optimal power flow prediction problems. In the context of Partial Differential Equations (PDEs), [5] employed SAL to enforce physical conservation laws of kinetic PDEs on neural networks. Additionally, [6] proposed a SAL strategy for training Physics-Informed Neural Networks (PINNs) with adaptively balanced loss components. Although these approaches have achieved state-of-the-art results in their respective domains, we recognize that the use of stochastic mini-batch gradients in machine learning problems calls for modifications to the original ALM algorithm. In our work, we draw inspiration from the research of [13], where the authors address this setting by designing a SAL algorithm that adapts to stochastic mini-batch gradients. While [13] applied their SAL strategy to train DNNs under physical constraints, our work aims to extend and optimize this approach for training DNNs under general constraints and regularization.

### Stochastic ADMM

Stochastic ADMM (S-ADMM) extends the traditional ADMM algorithm to handle large-scale optimization problems with noisy data and a large number of constraints and variables. In S-ADMM, the primal and dual variables are updated following the traditional ADMM strategy, i.e., steps (8) and (9), but with an approximated augmented Lagrangian function given by [14; 15]:

\[\hat{L}_{\rho,k}(\theta,\mu,u)=F(\theta^{k};\xi^{k+1})+\partial F(\theta^{k};\xi^{k+1})^{T}(\theta-\theta^{k})+C(\mu)+\frac{\rho}{2}\|\theta-\mu+u\|_{2}^{2}+\frac{\|\theta-\theta^{k}\|^{2}}{2\eta^{k}}. \tag{14}\]

Here, \(\xi^{k+1}\) is a random sample drawn from an unknown distribution, \(\rho\) is the penalty parameter, and \(\eta^{k}\) is the step size. The S-ADMM algorithm is particularly useful when dealing with complex and large-scale optimization problems involving noisy data.

## 4 Methodology

### Stochastic Augmented Lagrangian (SAL)

Augmented Lagrangian methods typically adopt a two-level nested loop structure. The inner problem (4) is solved using conventional unconstrained optimization methods, while the outer loop updates the Lagrange multipliers and the penalty factor based on the constraint violation of the inner-loop solution.
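To fix ideas, here is a toy NumPy sketch of this nested-loop structure (Algorithm 1) on a small equality-constrained problem; the problem and all names are illustrative, and the inner minimization uses plain gradient descent.

```python
import numpy as np

# Toy problem: minimize F(theta) = ||theta - a||^2  subject to  sum(theta) - 1 = 0.
a = np.array([3.0, -1.0, 2.0])
F_grad = lambda th: 2.0 * (th - a)
C = lambda th: np.array([th.sum() - 1.0])        # single equality constraint
C_jac = lambda th: np.ones((1, th.size))         # Jacobian of C

theta, lam, rho = np.zeros(3), np.zeros(1), 10.0
for _ in range(50):                              # outer loop
    for _ in range(200):                         # inner loop: minimize L_rho(., lam)
        grad = F_grad(theta) + C_jac(theta).T @ (lam + rho * C(theta))
        theta -= 0.01 * grad
    lam += rho * C(theta)                        # multiplier update, Eq. (5)

print(theta, C(theta))  # theta -> [2, -2, 1]; the violation shrinks toward 0
```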
In their work, the authors of [13] propose a Stochastic Augmented Lagrangian (Aug-Lag) method tailored for DNN training, presented in Algorithm 2. The inner loop of Aug-Lag involves using SGD to solve the unconstrained optimization problem. Unlike traditional ALMs that rely on dynamic convergence tolerances, Aug-Lag iterates through the entire training dataset once per outer iteration. In the outer loop, Aug-Lag accepts updates to the Lagrange multipliers whenever the SGD solution achieves a sufficient decrease in the constraint violation. The multiplier update remains unchanged from the conventional Aug-Lag method. The penalty parameter is increased by a fixed factor when the SGD solution fails to satisfy the sufficient-decrease criterion. Lastly, Aug-Lag makes use of learning rate decay, but the learning rate is reset to its initial value in each outer iteration. This ensures independent decay rates for each constructed training subproblem, preventing the model from stagnating.

### Constraints in Neural Networks

Deep neural networks are often subject to constraints in order to impose certain conditions on the parameters of the model or to ensure desirable properties. Constraints in neural networks can be broadly categorized into hard and soft constraints. Hard constraints are strict conditions that must be satisfied for the model to be valid. Violating hard constraints typically renders the model unsuitable for the task; this is of particular importance in white-box models, where parameters are meaningful only within specific domain values. On the other hand, soft constraints are more flexible and allow some level of violation. For instance, \(L_{1}\) regularization is a soft constraint: we do not want a model to satisfy it strictly; rather, it helps the model traverse the loss landscape in a better direction.

Several types of constraints can be imposed on neural networks; we focused on the following:

**\(L_{1}\) and \(L_{2}\) Norm Constraints:** \(L_{1}\) and \(L_{2}\) norm constraints are commonly used to control the magnitude of model parameters. \(L_{1}\) norm constraints, \(\|\mathbf{w}\|_{1}\), enforce sparsity by encouraging many parameters to be exactly zero. \(L_{2}\) norm constraints, \(\|\mathbf{w}\|_{2}^{2}\), promote weight decay, effectively penalizing large parameter values. Both norms help prevent overfitting and can lead to more robust models.

**Orthogonality Constraints:** Orthogonality constraints enforce that certain weight matrices are orthogonal [16]:

\[\|\mathbf{W}^{\top}\mathbf{W}-\mathbf{I}\|_{F}^{2}, \tag{15}\]

where \(\mathbf{W}\) is the weight matrix and \(\mathbf{I}\) is the identity matrix. This property is often beneficial in tasks involving feature extraction or when interpretability is essential.

**Non-Negativity Constraints:** Non-negativity constraints ensure that parameters remain non-negative throughout the training process. This is particularly useful in scenarios where negative values are not meaningful, such as distribution parameters:

\[(-w_{i})_{+},\text{ for all }i, \tag{16}\]

where \((h)_{+}=\max(0,h)\) is the Rectified Linear Unit (ReLU) and \(w_{i}\) represents a model parameter.

### Constraint Violation Metric

We introduce a metric designed to gauge constraint violation within DNNs. The purpose of this metric is to quantitatively evaluate the degree to which the specified constraints are breached by the model. We focus on assessing the constraint violation at the inception of the DNN and then tracing its evolution as training progresses.
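For concreteness, each constraint family above can be expressed as a scalar function \(c_{i}(\theta)\geq 0\) that vanishes when the constraint is satisfied; these are exactly the entries \(C_{i}(\mathbf{\theta})\) aggregated by the metric defined next. A minimal PyTorch sketch, with our own illustrative formulations:

```python
import torch

def l1_constraint(params):
    """Soft sparsity constraint: ||w||_1 over a list of parameter tensors."""
    return sum(p.abs().sum() for p in params)

def orthogonality_constraint(W):
    """Eq. (15): ||W^T W - I||_F^2 for a single weight matrix W."""
    eye = torch.eye(W.shape[1], device=W.device)
    return ((W.t() @ W - eye) ** 2).sum()

def nonnegativity_constraint(params):
    """Eq. (16): sum of ReLU(-w_i); zero iff every parameter is non-negative."""
    return sum(torch.relu(-p).sum() for p in params)
```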
Given a set of constraints \(\mathcal{C}\) imposed on the DNN, we present the constraint violation metric as follows: Let \(\mathbf{\theta}\) represent the trainable parameters of the DNN at the commencement of training, and \(C(\mathbf{\theta})\) indicate the vector of constraint violation values for each constraint within \(\mathcal{C}\). The constraint violation metric is defined as:

\[\text{Constraint Violation (CV)}=\left(\sum_{i=1}^{m}|C_{i}(\mathbf{\theta})|^{p}\right)^{\frac{1}{p}}, \tag{17}\]

where \(m\) is the number of constraints within \(\mathcal{C}\), and \(p\) is a hyperparameter controlling the metric's sensitivity. The works of [13] and [4] set \(p=2\) to assess constraint violation. While this is suitable for hard constraints, alternative \(L_{p}\) norms might be better suited for certain soft constraints. For instance, \(L_{1}\) regularization should not be strictly satisfied by a DNN; therefore, \(L_{\infty}\) could be a more appropriate choice for such a constraint. This is especially pertinent when different architectures exhibit varying parameter counts, rendering the comparison of constraint violation measures infeasible. In order to measure the improvement of the constraint violation across different architectures and stages, we introduce an extension of the metric:

\[\text{CV(e)}=CV_{e}/CV_{0}, \tag{18}\]

where \(CV_{i}\) signifies the constraint violation at step/epoch \(i\); as such, CV(\(i\)) denotes the progression of constraint violation relative to the initial CV prior to training. Thus, the constraint violation metric proposed here is designed to provide a summary of the adherence of DNNs to the imposed constraints. The \(L_{p}\) norm allows us to control the sensitivity of the metric, making it adaptable to different types of constraints and model architectures.

## 5 Experiments

### Experimental Setup

Our experimentation includes diverse neural network architectures, such as ResNet13 [17], VGG11 [18], a CNN, and GENEOnet [19]. These architectures underwent evaluation under various constraints, including \(L_{1}\) and \(L_{2}\) regularization, orthogonality, and non-negativity. We examined datasets like CIFAR-10, CIFAR-100, and MNIST for image classification. The datasets were augmented through isometric transformations (i.e., translations, rotations, and inversions), with data normalized as a preprocessing step. The data fidelity function \(F\) used was the cross-entropy loss. Utilizing a batch size of 128, we explored learning rates ranging from 0.001 to 0.00001, employing the Adam and SGD optimizers. Additionally, for white-box models, the L-BFGS optimizer was employed, capitalizing on their limited parameter count. Our methodology encompassed different training iterations, where we contrasted the fixed penalty method (FP), with different penalty coefficient combinations \(\rho_{i}\in[0.0001,0.1]\), against the stochastic augmented Lagrangian method (SAL) outlined in Algorithm 2 and stochastic ADMM (S-ADMM) [14]. The penalty parameter \(\rho\) in SAL and S-ADMM varied in each experiment, although this variation did not influence method performance, given the progressive adaptation of the penalty during training, as corroborated by [13]. Finally, our code is openly accessible and seamlessly integrable with existing architectures. Both the SAL and S-ADMM implementations are written in PyTorch and work as wrappers for data fidelity losses, as sketched below.
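As an example of what such a wrapper might look like, the following is a hedged sketch (our own reconstruction, not the authors' released code): the forward pass evaluates the augmented Lagrangian of Eq. (2) on the current parameters, and a once-per-epoch outer step applies multiplier and penalty updates in the spirit of Algorithm 2; a helper implements the metric of Eqs. (17)-(18).

```python
import torch

def cv(constraints, params, p=2.0):
    """Eq. (17): L_p norm of the constraint-violation vector; Eq. (18) is cv_e / cv_0."""
    c = torch.stack([g(params) for g in constraints])
    return c.abs().pow(p).sum().pow(1.0 / p)

class SALLoss(torch.nn.Module):
    """SAL-style wrapper around a data-fidelity loss (illustrative sketch)."""

    def __init__(self, fidelity, constraints, rho=1.0, gamma=2.0, decrease=0.9):
        super().__init__()
        self.fidelity = fidelity             # e.g. torch.nn.CrossEntropyLoss()
        self.constraints = constraints       # callables mapping params -> scalar C_i
        self.rho, self.gamma, self.decrease = rho, gamma, decrease
        self.lam = torch.zeros(len(constraints))
        self.best_violation = float("inf")

    def forward(self, outputs, targets, params):
        c = torch.stack([g(params) for g in self.constraints])
        return (self.fidelity(outputs, targets)
                + torch.dot(self.lam, c)               # <lambda, C(theta)>
                + 0.5 * self.rho * c.pow(2).sum())     # (rho / 2) * ||C(theta)||^2

    @torch.no_grad()
    def outer_step(self, params):
        """Once per epoch: accept a multiplier update on sufficient decrease
        of the violation, otherwise increase the penalty by a fixed factor."""
        c = torch.stack([g(params) for g in self.constraints])
        v = c.norm().item()
        if v <= self.decrease * self.best_violation:
            self.lam += self.rho * c
            self.best_violation = v
        else:
            self.rho *= self.gamma
```

In a training loop, one would run standard SGD epochs on `sal(outputs, targets, list(model.parameters()))` and call `sal.outer_step(list(model.parameters()))` at the end of each epoch, resetting the learning rate schedule as described above.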
The formulation of constraints is straightforward, rendering the incorporation of SAL and S-ADMM into ongoing projects a simple process. ### Numerical Results In this section, we conduct a comprehensive evaluation of different Deep Neural Networks (DNNs) across the MNIST, CIFAR10, and CIFAR100 datasets, employing different constraint enforcement methods. Regarding the MNIST dataset (Table 1), the Stochastic Augmented Lagrangian (SAL) and Stochastic Alternating Direction Method of Multipliers (S-ADMM) consistently outperform the Fixed Penalty (FP) method. SAL, specifically, demonstrates an optimal trade-off between data fidelity and constraint adherence in both the CNN and GENEOnet architectures. The selected models possess a modest number of parameters (approximately 100K each); more intricate architectures were excluded from testing because the MNIST task is simple enough not to require them. That is, the chosen architectures emphasize the necessity for regularization methods to strike a balance between constraint application and data fidelity. For the white-box model, GENEOnet, the enforcement of non-negativity on its parameters is essential for interpretability. All methods successfully enforce this constraint throughout training. Nonetheless, FP requires a substantial penalty coefficient \(\rho_{i}\) to achieve this, potentially diverting attention from the data fidelity term and leading to suboptimal performance compared to the other methods. On the CIFAR100 dataset (Table 2), our evaluation encompasses larger networks with millions of parameters. This is crucial because soft constraints such as \(L_{1}\) are applied to a greater number of parameters, introducing a more delicate equilibrium between constraint enforcement and data fidelity within the loss function. Nonetheless, SAL consistently achieves the highest Accuracy for both ResNet13 and VGG11, surpassing FP and S-ADMM. In this context, SAL does not lead to significant improvements in constraint enforcement over the other methods; instead, it achieves a balanced trade-off between data fidelity and constraint enforcement. Notably, the performance of S-ADMM consistently keeps pace with SAL, demonstrating competitive Accuracy and constraint enforcement. This underscores that, despite a slightly relaxed constraint adherence, S-ADMM effectively achieves competitive predictive performance. In scenarios involving parallel optimization paradigms, S-ADMM emerges as a strong contender and may indeed be the method of choice. In the case of the CIFAR10 dataset (Table 3), SAL remains the most effective regularization method in terms of Accuracy, consistently outperforming FP and S-ADMM across all evaluated architectures. Notably, as we move to larger networks, both SAL and S-ADMM demonstrate significant performance advantages over the FP method. It is worth highlighting that FP exhibits a higher standard deviation in Accuracy compared to the other methods, suggesting greater instability across different penalty configurations and underscoring the limitations of fixed penalty methods. SAL excels by securing the top CV\({}_{p=2}\) score in the VGG11 and CNN models, substantially outperforming FP and S-ADMM. This compelling result, combined with SAL's superior Accuracy, showcases its unique ability to strike an optimal balance between data fidelity and constraint enforcement.
Moreover, while FP records the best average CV\({}_{p=2}\) score in the ResNet13 and GENEOnet architectures, this achievement comes at a noticeable cost in terms of Accuracy. This observation reinforces the notion that SAL excels in finding the optimal equilibrium between constraint imposition and data fidelity, making it a promising choice for various deep learning tasks and architectures. \begin{table} \begin{tabular}{l|l|l|l l l} \hline \hline DNN & Constraints & Method & CV(-1) & CV\({}_{p=2}\) & Accuracy \\ \hline \multirow{3}{*}{ResNet13} & \(L_{1}\); & FP & **0.11** (\(\sigma\) **0.04**) & **330.5** (\(\sigma\) **1.4**) & 0.72 (\(\sigma\) 0.02) \\ & Orthogonality; & SAL & 0.14 (\(\sigma\) 0.02) & 363.4 (\(\sigma\) 30.1) & **0.77** (\(\sigma\) **0.01**) \\ & & S-ADMM & 0.13 (\(\sigma\) 0.01) & 346 (\(\sigma\) 31.9) & 0.75 (\(\sigma\) 0.01) \\ \hline \multirow{3}{*}{VGG11} & \(L_{1}\); & FP & 0.467 (\(\sigma\) 0.002) & 295.3 (\(\sigma\) 1.4) & 0.72 (\(\sigma\) 0.03) \\ & Orthogonality; & SAL & **0.461** (\(\sigma\) **0.001**) & **294.2** (\(\sigma\) **0.85**) & **0.73** (\(\sigma\) **0.02**) \\ \cline{1-1} & & S-ADMM & 0.47 (\(\sigma\) 0.01) & 296 (\(\sigma\) 0.92) & 0.72 (\(\sigma\) 0.01) \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative results of ResNet13 and VGG11 on CIFAR100, where CV(-1) indicates the constraint violation at the end of training relative to the initial network initialization, while CV\({}_{p=2}\) represents the constraint violation \(L_{2}\) norm at test time. The reported results are the averages across 100 runs for each method, encompassing different hyperparameter configurations. \begin{table} \begin{tabular}{l|l|l|l l l} \hline \hline DNN & Constraints & Method & CV(-1) & CV\({}_{p=2}\) & Accuracy \\ \hline \multirow{3}{*}{CNN} & \(L_{2}\); & FP & **0.72** (\(\sigma\) **0.06**) & 3.2K (\(\sigma\) 7.6) & 0.70 (\(\sigma\) 0.02) \\ & Orthogonality; & SAL & 0.73 (\(\sigma\) 0.03) & **2.0K** (\(\sigma\) **15.4**) & **0.75** (\(\sigma\) **0.01**) \\ & & S-ADMM & 0.73 (\(\sigma\) 0.04) & 2.7K (\(\sigma\) 115) & 0.74 (\(\sigma\) 0.01) \\ \hline \multirow{3}{*}{GENEOnet} & \(L_{2}\); & FP & 0.72 (\(\sigma\) 0.07) & 2.7K (\(\sigma\) 152) & 0.80 (\(\sigma\) 0.1) \\ & Non-Negativity; & SAL & 0.74 (\(\sigma\) 0.04) & 2.4K (\(\sigma\) 30.1) & 0.82 (\(\sigma\) 0.01) \\ \cline{1-1} & Orthogonality; & S-ADMM & **0.72** (\(\sigma\) **0.03**) & **1.7K** (\(\sigma\) **103**) & **0.83** (\(\sigma\) **0.02**) \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative results of CNN and GENEOnet on the MNIST Dataset. CV(-1) indicates the constraint violation at the end of training relative to the initial network initialization, while CV\({}_{p=2}\) represents the constraint violation \(L_{2}\) norm at test time. Evaluation of other DNNs was omitted due to the problem’s simplicity for networks like VGG11 and ResNet13. The reported results are the averages across 100 runs for each method, encompassing different hyperparameter configurations. ## 6 Conclusions This work addresses the imperative need for effective Deep Neural Network (DNN) regularization to counter overfitting and enhance generalization. Conventional fixed penalty methods exhibit limitations in adaptability and sensitivity to hyperparameters.
To mitigate these issues, we present an innovative approach that formulates DNN training as a constrained optimization problem, prioritizing data fidelity minimization while treating regularization terms as essential constraints. This conceptual foundation paves the way for the application of the Stochastic Augmented Lagrangian (SAL) method, introducing a dynamic and efficient regularization strategy. Notably, our approach's benefits extend beyond black-box regularization, demonstrating substantial enhancements in white-box models subject to stringent weight constraints for interpretability. Empirical validation across diverse datasets, encompassing image classification benchmarks such as MNIST, CIFAR10, and CIFAR100, underscores the efficacy of our methodology. SAL consistently outperforms the alternatives in terms of Accuracy and constraint enforcement, indicating its potential to optimize DNNs under constraint-driven scenarios. In essence, our study introduces a pragmatic alternative to fixed penalty methods, emphasizing SAL's adaptability and performance improvement. Empirical results support our approach, indicating improved performance and interpretability in DNNs.
2310.16121
19 Parameters Is All You Need: Tiny Neural Networks for Particle Physics
As particle accelerators increase their collision rates, and deep learning solutions prove their viability, there is a growing need for lightweight and fast neural network architectures for low-latency tasks such as triggering. We examine the potential of one recent Lorentz- and permutation-symmetric architecture, PELICAN, and present its instances with as few as 19 trainable parameters that outperform generic architectures with tens of thousands of parameters when compared on the binary classification task of top quark jet tagging.
Alexander Bogatskiy, Timothy Hoffman, Jan T. Offermann
2023-10-24T18:51:22Z
http://arxiv.org/abs/2310.16121v3
# 19 Parameters Is All You Need: Tiny Neural Networks for Particle Physics ###### Abstract As particle accelerators increase their collision rates, and deep learning solutions prove their viability, there is a growing need for lightweight and fast neural network architectures for low-latency tasks such as triggering. We examine the potential of one recent Lorentz- and permutation-symmetric architecture, PELICAN, and present its instances with as few as 19 trainable parameters that outperform generic architectures with tens of thousands of parameters when compared on the binary classification task of top quark jet tagging. ## 1 Introduction Particle collisions at the Large Hadron Collider at CERN happen every 25 nanoseconds, producing immense amounts of data that have to be processed in real time. Much of the event filtering is done by the Level-1 trigger [1; 5], which uses algorithms implemented on FPGAs that need to operate at below-microsecond latency to avoid loss of valuable data. Low-latency tasks include charged particle track reconstruction and energy measurements. Implementing neural networks under such constraints is a significant challenge; however, the most recent attempts to do so have finally surpassed their traditional non-ML counterparts. The current state-of-the-art implementations in this area are based on the JEDI-net Graph Neural Network (GNN) architecture, see [17; 23; 24]. The network input data consist of lists of jet constituents, with a certain number of geometric features describing each constituent. GNN architectures are inherently permutation-equivariant, providing a significant boost to efficiency and model size by virtue of weight sharing, but no other physical symmetries are necessarily respected. Physics-informed architectures that are inherently equivariant with respect to rotational and Lorentz-boost symmetries have recently shown themselves to provide state-of-the-art performance at tasks such as jet tagging (see e.g. [2; 3; 4; 9; 12; 15]), and they do so despite their relatively small model size. In this work we study the current state-of-the-art architecture for top-quark jet tagging, PELICAN [4]. It is fully Lorentz-invariant and its permutation-equivariant layers are based on the general higher-order permutation-equivariant mappings introduced in [16; 19]. The full reduction of all relevant symmetries allows small instances of PELICAN with just a few thousand parameters to perform on par with much larger models with hundreds of thousands or even millions of parameters. Moreover, the simplicity of the architecture presents a unique opportunity for explainability and even interpretability. Our goal here is to explore the small model size limit of PELICAN and compare it against the previous state-of-the-art (and also Lorentz-equivariant) architecture, LorentzNet [9]. The benchmark task for this comparison is that of top-quark tagging due to the publicly available dataset [10] and the extensive prior exploration of architectures trained on it [12]. The input consists of a list of \(N\) 4-momenta of jet constituents, \(\{p_{i}\}_{i=1}^{N}\), which PELICAN reduces to the \(N\times N\) array of pairwise Lorentz-invariant dot products, \(d_{ij}=p_{i}\cdot p_{j}\). Thus the reduced input is an array with one channel.
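This input reduction is a one-liner in practice. The following is a minimal, hedged sketch (our own helper, not code from the PELICAN repository) mapping a list of 4-momenta to the \(N\times N\) array of Lorentz-invariant dot products with the \((+,-,-,-)\) metric:

```python
import torch

def pairwise_dot_products(p):
    # p: (N, 4) tensor of 4-momenta in the order (E, px, py, pz).
    # Returns the N x N array d_ij = p_i . p_j = E_i E_j - vec(p_i) . vec(p_j).
    metric = torch.tensor([1.0, -1.0, -1.0, -1.0], dtype=p.dtype)
    return p @ (metric * p).t()
```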
We find that a stripped-down version of PELICAN consisting of nothing but two linear permutation-equivariant blocks with just two channels in the hidden layer and exactly one nonlinear activation function in between outperforms generic architectures such as the fully connected TopoDNN, which has 59k parameters [12]. This model nominally has 26 parameters, but through absorption of multiplicative factors and a simplification of the output layer that number can be effectively reduced to 19. Despite the costly \(N^{2}\) scaling of the memory that PELICAN requires, its symmetric architecture can provide ultra-lightweight networks that can be viably used in low-latency and high-throughput applications. ## 2 The original PELICAN architecture The original PELICAN architecture consists of an input block which encodes the \(N\times N\) array of pairwise dot products \(\{d_{ij}\}\), followed by a sequence of so-called \(\text{Eq}_{2\to 2}\) permutation-equivariant blocks (the index 2 indicates the rank of the input and output arrays) that produce transformed \(N\times N\) arrays. Each of these blocks consists of a fully-connected "messaging" layer that mixes the channels but is shared among all components of the \(N\times N\) array, and an "aggregation" layer that applies a general linear permutation-equivariant operation that exchanges information between the various components of the array. Finally, a similar \(\text{Eq}_{2\to 0}\) block reduces the array to a permutation-invariant (rank 0) scalar, after which an output MLP layer produces the two binary classification weights \(\{w_{0},w_{1}\}\). The diagram below summarizes this architecture, see [4] for details. \[\{d_{ij}\}\ \xrightarrow{}\text{Emb}\ \xrightarrow{}\text{[Eq}_{2\to 2}]^{L}\ \xrightarrow{}\text{Eq}_{2\to 0}\ \xrightarrow{}\text{MLP}\ \xrightarrow{}\{w_{c}\} \tag{1}\] Notably, the aggregation step inside \(\text{Eq}_{2\to 2}\), called \(\text{LinEq}_{2\to 2}\), applies 15 different operations that provide a basis for the space of all linear permutation-equivariant transformations of rank 2 arrays, which temporarily increases the size of the activation by a factor of 15, marking the peak of PELICAN's memory utilization. This is followed by a trainable linear layer that applies \((C_{\text{in}}\times 15)\times C_{\text{out}}\) weights and adds two biases per channel (one bias added to the entire \(N\times N\) array and one only to the diagonal), where \(C_{\text{in}}\) and \(C_{\text{out}}\) are the number of input and output channels. Similarly, \(\text{Eq}_{2\to 0}\) involves only 2 aggregators (total sum and trace), a linear layer of shape \((C_{\text{in}}\times 2)\times C_{\text{out}}\), and one bias per channel. ## 3 nanoPELICAN architecture In this section we simplify the PELICAN architecture to a single hidden layer and reduce the parameter count further based on symmetry arguments; we refer to the result as nanoPELICAN (nPELICAN). The only linear symmetric observables that can be constructed from the input dot products (assuming sum-based aggregation that does not explicitly depend on the multiplicity \(N\)) are \(N\), the jet mass \(m_{J}^{2}=\sum_{i,j}d_{ij}\), and the total mass \(\sum_{i}d_{ii}\). The top-tagging dataset has only massless constituents, so the latter observable is irrelevant. A non-parametric top-tagger based on a simple jet mass cut achieves an AUC of only 90.6%. A linear PELICAN, which outputs \(p(N)m_{J}^{2}+q(N)\) with some learned polynomials \(p\) and \(q\), cannot improve far beyond this.
To this end, we set out to find the smallest and most interpretable modification of PELICAN that is still nonlinear and performs competitively on the top-tagging task. We thus omit the input embedding layer, all messaging layers, and the output MLP, and are left with just two linear equivariant blocks, \(\text{LinEq}_{2\to 2}\) and \(\text{LinEq}_{2\to 0}\), separated by a single activation function, which we choose to be ReLU. The architecture is summarized in the following diagram: \[\{d_{ij}\}\ \xrightarrow{}\text{LinEq}_{2\to 2}^{\text{nano}}\ \xrightarrow{}\text{ReLU}\ \xrightarrow{}\text{LinEq}_{2\to 0}\ \xrightarrow{}\{w_{c}\}. \tag{2}\] Here, we also notice that since the array of dot products, \(\{d_{ij}\}\), is symmetric, and since the constituents in the top-tagging dataset are massless (\(d_{ii}=0\)), many of the 15 basis aggregators in LinEq\({}_{2\to 2}\) are redundant. We remove 5 aggregators that depend only on the diagonal of the input, and one from each of 4 pairs of aggregators that attain the same value on symmetric inputs. We are left with just 6 aggregators, which constitute LinEq\({}_{2\to 2}^{\text{nano}}\). To help with training, each equivariant layer is still preceded by a Dropout layer. Moreover, keeping BatchNorm layers just before the dropout can also help the model converge; meanwhile, the extra parameters from these layers can be almost completely absorbed into the linear layers for inference. Namely, the multiplicative weights of BatchNorm can be absorbed into the following LinEq\({}_{2\to 2}\), whereas the biases can either be left or absorbed into the biases of LinEq\({}_{2\to 2}\) at the cost of turning them into quadratic polynomials of \(N\), adding 2 parameters per output channel. Since there are two distinct bias parameters per channel, such a BatchNorm effectively adds \(\min\{C_{\text{in}},4C_{\text{out}}\}\) parameters. In the case of LinEq\({}_{2\to 0}\) there is only one bias parameter per channel, so the number of added parameters is \(\min\{C_{\text{in}},2C_{\text{out}}\}\). The only remaining hyperparameter is \(C_{\text{hidden}}\), the number of channels in the hidden layer (between LinEq\({}_{2\to 2}\) and LinEq\({}_{2\to 0}\)). The total number of parameters is then \(1\times 6\times C_{\text{hidden}}+2\cdot C_{\text{hidden}}+C_{\text{hidden}} \times 2\times 2+2=12C_{\text{hidden}}+2\) (ignoring BatchNorm). In addition, since for binary classification only the difference in weights \(w_{1}-w_{0}\) matters, it is possible to have only 1 output channel, in which case we have \(10C_{\text{hidden}}+1\) parameters. The models presented below produce only one output weight, called \(w\). Leaving in the two BatchNorm layers effectively adds only 3 new parameters if \(C_{\text{hidden}}>1\) and 2 otherwise. Finally, since we are using the ReLU activation, which is a homogeneous function, one more multiplicative factor can be absorbed in each channel. The final number of parameters is then \(9C_{\text{hidden}}+4\) for \(C_{\text{hidden}}>1\) and 12 otherwise. ## 4 Top tagging performance The top tagging dataset [10] consists of anti-\(k_{T}\) jets [6] corresponding to top quarks (signal) and light quarks or gluons (background). It includes up to 200 jet constituents per entry, each represented by a 4-momentum in Cartesian coordinates. A converted version of the dataset that can be directly used with PELICAN can be found at [11].
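Before turning to the training details, it may help to make the two-block structure in (2) concrete. The sketch below is our own simplified illustration, not the released nanoPELICAN code: for brevity it uses only three representative members of the linear permutation-equivariant basis (identity, broadcast row sums, broadcast total sum) instead of the six retained aggregators, and it omits Dropout, BatchNorm, and the separate diagonal bias.

```python
import torch

def equivariant_aggregators(d, nbar=49.0):
    # d: (N, N) symmetric array of dot products d_ij. Each map below is a
    # linear permutation-equivariant operation on rank-2 arrays.
    n = d.shape[0]
    identity = d                                                 # leave array as is
    row_sums = d.sum(dim=1, keepdim=True).expand(n, n) / nbar    # broadcast row sums
    total = (d.sum() / nbar**2) * torch.ones(n, n)               # broadcast total sum
    return torch.stack([identity, row_sums, total], dim=-1)      # (N, N, 3)

class NanoPelicanSketch(torch.nn.Module):
    def __init__(self, n_agg=3, c_hidden=2):
        super().__init__()
        self.lineq_2to2 = torch.nn.Linear(n_agg, c_hidden)   # shared across all (i, j)
        self.lineq_2to0 = torch.nn.Linear(2 * c_hidden, 1)   # mixes (total sum, trace)

    def forward(self, d):
        h = torch.relu(self.lineq_2to2(equivariant_aggregators(d)))  # (N, N, C)
        total = h.sum(dim=(0, 1))                          # sum aggregator, (C,)
        trace = h.diagonal(dim1=0, dim2=1).sum(dim=-1)     # trace aggregator, (C,)
        return self.lineq_2to0(torch.cat([total, trace]))  # single output weight w
```

The \(\text{LinEq}_{2\to 0}\) step keeps exactly the two aggregators named in Section 2, the total sum and the trace, so the dimension reduction from rank 2 to rank 0 is manifest.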
For our models we only use the 80 constituents with the highest transverse momentum \(p_{T}=\sqrt{p_{x}^{2}+p_{y}^{2}}\), which is typically enough to saturate our network's performance. We follow a training regime almost identical to that in [4], using an Nvidia H100 GPU. The only changes are that we disable weight decay, extend the training to 140 epochs (4 epochs of linear warm-up, 124 epochs of CosineAnnealingLR with \(T_{0}=4\) and \(T_{\text{mult}}=2\), and 12 epochs of exponential decay with \(\gamma=0.5\)), and increase the batch size to 512. Training took about 30 ms per batch, and evaluation took about 23 ms per batch (including overhead). We train several models with \(C_{\text{hidden}}\) ranging from 1 to 10. For comparison, we also train instances of LorentzNet with only one message-passing block, the number of channels in the hidden layers set to 3, and a batch size of 512 (no other changes to hyperparameters and training were made). The results are reported in Table 1. We report the accuracy, the area under the ROC curve, and the background rejection (inverse false-positive rate) at 30% signal efficiency (true-positive rate). For each architecture, we pick the model with the lowest cross-entropy loss out of 10 trained instances initialized with different random seeds. We observe that nPELICAN achieves competitive AUC and background rejection with as few as 2 channels in the hidden layer. In fact, its AUC surpasses that of the fully connected TopoDNN with 59k parameters [12] (whose average accuracy was 0.929(1), AUC 0.964(14), and background rejection \(424\pm 82\)). Moreover, the AUC of nPELICAN with 10 channels (101 parameters) is only about 1% behind that of ParticleNet (498k parameters) [21] and even ParT (2.1M parameters) [22]. Meanwhile, \begin{table} \begin{tabular}{l c c c c} \hline \hline Architecture & Accuracy & AUC & \(1/\epsilon_{B}\) & \# Params \\ \hline LorentzNet\({}_{n_{\text{hidden}}=3}\) & 0.907(2) & 0.966(3) & 174\(\pm\)44 & 120 \\ nPELICAN\({}_{C_{\text{hidden}}=10}\) & 0.921(1) & 0.9748(1) & 327\(\pm\)20 & 101 \\ nPELICAN\({}_{C_{\text{hidden}}=3}\) & 0.919(1) & 0.9730(4) & 256\(\pm\)12 & 31 \\ nPELICAN\({}_{C_{\text{hidden}}=2}\) & 0.918(1) & 0.9718(6) & 243\(\pm\)18 & 21 \\ nPELICAN\({}_{C_{\text{hidden}}=1}\) & 0.895(1) & 0.950(2) & 81\(\pm\)12 & 11 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of tiny top-taggers. Averaged over the top 5 (lowest loss) out of 25 random seeds. Uncertainty given by the standard deviation. \(1/\epsilon_{B}\) is the background rejection at 30% signal efficiency. LorentzNet with one message-passing block lags far behind nPELICAN with a similar number of parameters. A visual comparison with many existing models is presented in Figure 1. Interestingly, at such low depth the removal of fully connected messaging layers from PELICAN actually improved the performance. The element of the original network that can boost nPELICAN's performance the most with only a few new parameters is the \(N\)-dependent scaling of the aggregators. In our tests, replacing sum aggregation with means led to very low performance of nPELICAN; however, enabling PELICAN's original flexible scaling of the means by an extra factor of \(N^{\alpha}/\bar{N}^{\alpha}\) turns out to be very beneficial, see Table 2. ## 5 Interpreting nanoPELICAN Considering the extremely low complexity and relatively high performance of nPELICAN, there is high potential for a full interpretation of the model.
Before attempting to interpret the weights, it is crucial to minimize any redundancies. In particular, since ReLU is a homogeneous function, one multiplicative factor from the weights in LinEq\({}_{2\to 0}\) can be absorbed into the weights and biases of LinEq\({}_{2\to 2}\) in each channel of the hidden layer. For the model with \(C_{\text{hidden}}=2\) this means that the number of parameters can effectively be reduced to 19. Explicitly, the model can be written analytically as \[w=b^{2\to 0}+\sum_{h=1}^{C_{\text{hidden}}}c_{0h}^{2 \to 0}\frac{1}{\bar{N}^{2}}\sum_{i,j}\text{ReLU}\left(\sum_{b=1}^{6}c_{bh}^{2 \to 2}\text{Agg}_{b}(d)_{ij}+b_{h}^{2\to 2}+b_{\text{diag},h}^{2 \to 2}\delta_{ij}\right)+\\ +\sum_{h=1}^{C_{\text{hidden}}}c_{1h}^{2\to 0}\frac{1}{\bar{N} }\sum_{i=j}\text{ReLU}\left(\sum_{b=1}^{6}c_{bh}^{2\to 2}\text{Agg}_{b}(d)_{ij}+b_{h}^{2 \to 2}+b_{\text{diag},h}^{2\to 2}\delta_{ij}\right). \tag{3}\] Here, \(w\) is the output score (the jet is tagged as a top quark if \(w>0\)); \(c^{2\to 2}\), \(b^{2\to 2}\), and \(b_{\text{diag}}^{2\to 2}\) are the weights and biases of LinEq\({}_{2\to 2}\); \(c^{2\to 0}\) and \(b^{2\to 0}\) are the weights and the bias of LinEq\({}_{2\to 0}\); the index \(b\) enumerates the 6 aggregators of LinEq\({}_{2\to 2}\); the index \(h\) enumerates the channels in the hidden layer. \(\bar{N}\) is a hyperparameter that is used to control the magnitude of the sums over constituents, here set to 49 (it is similarly used inside the aggregators Agg\({}_{b}\)). \begin{table} \begin{tabular}{l r r r r} \hline nPELICAN\({}_{N}\) width & Accuracy & AUC & \(1/\epsilon_{B}\) & \# Params \\ \hline \(C_{\text{hidden}}=10\) & 0.923(1) & 0.9764(1) & 448\(\pm\)10 & 108 \\ \(C_{\text{hidden}}=3\) & 0.9214(3) & 0.9752(2) & 384\(\pm\)16 & 38 \\ \(C_{\text{hidden}}=2\) & 0.9200(3) & 0.9745(1) & 368\(\pm\)17 & 28 \\ \(C_{\text{hidden}}=1\) & 0.902(2) & 0.960(2) & 150\(\pm\)16 & 18 \\ \hline \end{tabular} \end{table} Table 2: Performance of nPELICAN\({}_{N}\) – nPELICAN with \(N^{\alpha}\)-scaled aggregators. Metrics defined as in Table 1. Figure 1: Comparison of top-tagger background rejection performance at signal efficiency \(\epsilon_{S}=0.3\) as a function of the number of parameters in each model considered. Results other than nPELICAN are taken from refs. [2, 4, 7, 9, 12, 13, 14, 18, 20, 21, 22]. Note that the curve for the original PELICAN was obtained by varying only the network width, so only the rightmost point is fully optimized. Therefore the ReLU effectively sets a linear constraint on the dot products, and the output takes a sum over only those pairs \((i,j)\) that satisfy the constraint. More explicitly, denoting the jet momentum by \(J=\sum_{i}p_{i}\), the argument of the ReLU is a linear combination of relative masses (\(m_{ij}^{2}=-(p_{i}-p_{j})^{2}=2d_{ij}\)), jet-frame masses \(p_{i}\cdot J\), \(p_{j}\cdot J\), the jet mass \(m_{J}^{2}=\sum_{ij}d_{ij}\), and a constant term. In addition, we found that these parameters are stable across multiple random initializations, indicating that they can be directly interpreted as unique physical constraints that encode Lorentz-invariant quantities such as the top quark mass, which we intend to elucidate in future work. ## 6 Conclusions We have presented nPELICAN, a miniaturized version of the PELICAN architecture, which is both surprisingly performant relative to much larger networks and simple enough to be rewritten as a constraint on Lorentz-invariant quantities with a single ReLU activation function.
This represents a novel development in interpretability for neural networks in particle physics, and it gives hope for the interpretability of much larger networks. Future studies will exploit the stability of the nPELICAN parameters to determine their dependencies on features of the training data, such as jet energies and particle masses, and to relate these parameters to traditional discriminating kinematic variables for jet tagging, such as jet constituent multiplicity, subjet multiplicity [25], and jet shapes [8]. The code can be found at [https://github.com/abogatskiy/PELICAN-nano](https://github.com/abogatskiy/PELICAN-nano).
2303.08227
Hall effect thruster design via deep neural network for additive manufacturing
Hall effect thrusters are among the most versatile and popular electric propulsion systems for space use. Industry trends towards interplanetary missions are driving advances in the design of such propulsion systems. It is understood that correct sizing of the discharge channel in a Hall effect thruster greatly impacts its performance. Since the complete physics model of such a propulsion system is not yet optimized for fast computations and design iterations, most thrusters are designed using so-called scaling laws. This work focuses on a rather novel approach, which is outlined less frequently in the literature than the ordinary scaling design approach. Using deep machine learning, it is possible to create a predictive performance model that can be used to effortlessly obtain a Hall thruster design with the required characteristics, using far less computational power than designing from scratch and in a far more flexible way than the usual scaling approach.
Konstantin Korolev
2023-03-14T20:46:05Z
http://arxiv.org/abs/2303.08227v1
# Hall effect thruster design via deep neural network for additive manufacturing ###### Abstract Hall effect thrusters are among the most versatile and popular electric propulsion systems for space use. Industry trends towards interplanetary missions are driving advances in the design of such propulsion systems. It is understood that correct sizing of the discharge channel in a Hall effect thruster greatly impacts its performance. Since the complete physics model of such a propulsion system is not yet optimized for fast computations and design iterations, most thrusters are designed using so-called scaling laws. This work focuses on a rather novel approach, which is outlined less frequently in the literature than the ordinary scaling design approach. Using deep machine learning, it is possible to create a predictive performance model that can be used to effortlessly obtain a Hall thruster design with the required characteristics, using far less computing power than designing from scratch and in a far more flexible way than the usual scaling approach. ## I Nomenclature \begin{tabular}{l l l} \(U_{d}\) & = & discharge voltage \\ \(P\) & = & discharge power \\ \(T\) & = & thrust \\ \(\dot{m}_{a}\) & = & mass flow rate \\ \(I_{sp}\) & = & specific impulse \\ \(\eta_{m}\) & = & mass utilization efficiency \\ \(\eta_{a}\) & = & anode efficiency \\ \(j\) & = & \(P/v\) [power density] \\ \(v\) & = & discharge channel volume \\ \(h,d,L\) & = & generic geometry parameters \\ \(C_{*}\) & = & set of scaling coefficients \\ \(g\) & = & free-fall acceleration \\ \(M\) & = & ion mass \\ \end{tabular} ## II Introduction The applications of deep learning are extremely diverse, but this study focuses on the case of Hall effect thruster design. The Hall effect thruster (HET) is a rather simple DC plasma acceleration device, but due to its complex and non-linear physics we do not yet have any full analytical performance model. There are many ways these systems are designed in industry with great efficiency, but at the cost of multi-million research budgets and long development times. This problem might be solved using a neural network design approach and a few hardware iteration tweaks [1]. Scaled thrusters tend to have good performance, but the scaling approach is not very flexible, for numerous reasons: first and foremost, due to large deviations in all of the initial experimental values, its accuracy can be limited; secondly, it is hardly possible to efficiently design a thruster with a different power density or \(I_{sp}\). On the other hand, the neural network design approach is only accurate on the domain of the dataset [1]; this limitation is easily compensated by the ability to create relations between multiple discharge and geometry parameters at once. Hence this novel approach, together with scaling relations, could be the ultimate design tool for HETs. Note that neither of these models includes cathode efficiencies and performance, nor the neutral-gas thrust components. Most correlations in the previous literature were made using assumptions or physical laws [2]; in this paper a new method based on feature generation, GAN dataset augmentation, and ML feature selection is suggested. ### Dataset enlargement using GAN As already discussed, the available data is not enough for training a NN or most ML algorithms, so I suggest using a Generative Adversarial Network (GAN) to generate more, similar points. A generative model trains two different models - a generator and a discriminator.
The generator learns how to generate new points that the discriminator classifies as similar to the real dataset. It is, of course, understood that the model needs to be precise enough not to overfit the data or create new, unknown correlations. The model was checked via the Mean Absolute Percentage Error (MAPE) and physical boundary conditions. After assembling the most promising architecture, the model was able to generate synthetic points with a MAPE of 4.7%. We measure the MAPE to be sure the points lie on the same domain as the original dataset, as in this work we are interested in sub-kilowatt thrusters. After the model generated new points, they were checked against the physical boundaries of the scaled values (for example, the scaled thrust could not exceed 2, the efficiency could not exceed 1.4, and so on; the data was scaled on the original dataset to retain quality); only 0.02% of the points were found to be outliers. The GAN architecture and a dataset sample are provided below. ## 3 General Relations As we will use a dataset of only low-power Hall thrusters, we can forgo the derivation of any non-linear equations and relations and use the traditional approach here. Let us define some anode parameters: \[\alpha=\frac{\dot{m}\beta}{\dot{m}_{a}}, \tag{1}\] where \(\alpha\) is the anode counterpart of the thruster parameter \(\beta\). This choice is made so that cathode and other losses are not included in the model. One of the key differences in this approach is fitting only the best and most appropriate data; this eliminates some of the variance in the scaling laws. However, machine learning methods require a lot of data, which is simply not available in such volumes, so some simplifications and assumptions must be made. Firstly, as already stated, we do not include the neutralizer efficiency in the model. Secondly, the model is only valid on a very specific domain, defined by the dataset; many parameters, such as the anode power and \(I_{sp}\), still follow a semi-empirical modelling approach. The results we are looking for are the outputs of the machine learning algorithm: specific impulse, thrust, efficiency, optimal mass flow rate, and power density. The input is solely a function of the power and voltage range. For reference, let us introduce the semi-empirical equations used for scaling current thrusters: \[h=C_{h}d \tag{2}\] \[\dot{m_{a}}=C_{m}hd\] (3) \[P_{d}=C_{p}U_{d}d^{2}\] (4) \[T=C_{t}\dot{m_{a}}\sqrt{U_{d}}\] (5) \[I_{spa}=\frac{T}{\dot{m_{a}}g}\] (6) \[\eta_{a}=\frac{T^{2}}{2\dot{m_{a}}P_{d}} \tag{7}\] where \(C_{x}\) is a scaling coefficient obtained from analytical modelling, which makes the equations linear. Generally this gives a 95% prediction band, but as was said earlier, this linearity is what causes problems for current thruster designs (high mass, identical power density, average performance). The original dataset hosts only 24 entries in total.
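For reference, relations (2)-(7) are simple enough to evaluate as a small helper. The sketch below assumes consistent SI units and \(C_{*}\) coefficients fitted to reference data; the function and argument names are ours, not part of the HETFit code.

```python
import math

G0 = 9.81  # free-fall acceleration, m/s^2

def scaled_design(d, U_d, C_h, C_m, C_p, C_t):
    # Evaluate the semi-empirical scaling relations (2)-(7) for a channel of
    # mean diameter d and discharge voltage U_d.
    h = C_h * d                      # Eq. (2): channel width
    m_a = C_m * h * d                # Eq. (3): anode mass flow rate
    P_d = C_p * U_d * d**2           # Eq. (4): discharge power
    T = C_t * m_a * math.sqrt(U_d)   # Eq. (5): thrust
    I_spa = T / (m_a * G0)           # Eq. (6): anode specific impulse
    eta_a = T**2 / (2 * m_a * P_d)   # Eq. (7): anode efficiency
    return dict(h=h, m_a=m_a, P_d=P_d, T=T, I_spa=I_spa, eta_a=eta_a)
```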
The reference data are as follows: \begin{tabular}{l|l|l|l|l|l|l|l|l} \hline \hline Thruster & Power, W & \(U_{d}\), V & d, mm & h, mm & L, mm & \(\dot{m}_{a}\), mg/s & T, mN & \(I_{spa}\), s \\ SPT-20 & 52.4 & 180 & 15.0 & 5.0 & 32.0 & 0.47 & 3.9 & 839 \\ SPT-25 & 134 & 180 & 20.0 & 5.0 & 10 & 0.59 & 5.5 & 948 \\ Music-si & 140 & 288 & 18 & 2 & 6.5 & 0.44 & 4.2 & 850 \\ \hline HET-100 & 174 & 300 & 23.5 & 5.5 & 14.5 & 0.50 & 6.8 & 1386 \\ KHT-40 & 187 & 325 & 31.0 & 9.0 & 25.5 & 0.69 & 10.3 & 1519 \\ KHT-50 & 193 & 250 & 42.0 & 8.0 & 25.0 & 0.88 & 11.6 & 1339 \\ HEPS-200 & 195 & 250 & 42.5 & 8.5 & 25.0 & 0.88 & 11.2 & 1300 \\ BHT-200 & 200 & 250 & 21.0 & 5.6 & 11.2 & 0.94 & 12.8 & 1390 \\ KM-32 & 215 & 250 & 32.0 & 7.0 & 16.0 & 1.00 & 12.2 & 1244 \\ \hline... & & & & & & & & \\ \hline HEPS-500 & 482 & 300 & 49.5 & 15.5 & 25.0 & 1.67 & 25.9 & 1587 \\ \hline UAH-78AM & 520 & 260 & 78.0 & 20 & 40 & 2 & 30 & 1450 \\ BHT-600 & 615 & 300 & 56.0 & 16.0 & 32 & 2.60 & 39.1 & 1530 \\ SPT-70 & 660 & 300 & 56.0 & 14.0 & 25.0 & 2.56 & 40.0 & 1593 \\ MaSMi60 & 700 & 250 & 60 & 9.42 & 19 & 2.56 & 30 & 1300 \\ \hline MaSMiDm & 1000 & 500 & 67 & 10.5 & 21 & 3 & 53 & 1940 \\ SPT-100 & 1350 & 300 & 85.0 & 15.0 & 25.0 & 5.14 & 81.6 & 1540 \\ \hline \hline \end{tabular} ## IV Data driven HET designs Neural networks are a type of machine learning algorithm often used in the field of artificial intelligence. They are mathematical models that can be trained to recognize patterns within large datasets. The architecture of the GAN's generator was already shown. In this section we focus on fully connected networks, which are the most popular type for these tasks. The HETFit code leverages dynamic architecture generation of these FcNNs, which is done via the Tree-structured Parzen Estimator (TPE) meta-learning algorithm for every data input the user selects. The code uses the state-of-the-art implementation provided by Optuna. The dynamically suggested architectures have 2 to 6 layers with 4 to 128 nodes each, SELU, Tanh, or ReLU activations, and the most suitable optimizer. The code user interface is as follows: 1. Specify the working environment 2. Load or generate data 3. Tune the architecture 4. Train and get robust scaling models ### FNN All fully connected neural networks are implemented in PyTorch, as it is the most powerful ML/AI library for experiments. Once a network architecture is generated, all networks share a similar training loop based on the gradient descent algorithm. Loss function: \[L(w,b)\equiv\frac{1}{2n}\sum_{x}\|y(x)-a\|^{2} \tag{8}\] This is the mean squared error (MSE) loss function most commonly used in FNNs. We then iterate for a specified number of epochs, updating the weights as follows. Loop for the number of epochs: - Get predictions: \(\hat{y}\) - Compute loss: \(\mathcal{L}(w,b)\) - Make the backward pass - Update the optimizer It should be mentioned that electric propulsion datasets are extremely complex due to large deviations in the data; thanks to advances in data science and ML, it is nevertheless possible to work with them. In this way we assembled a dataset on our region of interest: \(P<1000\) W input power and the 200-500 V range. Sadly, one limitation of such a model is its inability to go beyond the actual database limits without sacrificing performance and accuracy. ### Physics Informed Neural Networks For working with unscaled data, PINNs were introduced; they use equations (2)-(7) to generate the \(C_{x}\) coefficients, and a minimal version of this kind of fit is sketched below.
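As a hedged illustration of the coefficient-fitting idea (not the actual PINN implementation in HETFit), the thrust coefficient \(C_{t}\) in Eq. (5) can be recovered from tabulated data with a one-dimensional least-squares fit:

```python
import numpy as np

def fit_thrust_coefficient(m_a, U_d, T):
    # Eq. (5) is T = C_t * m_a * sqrt(U_d); with x = m_a * sqrt(U_d) this is a
    # one-parameter linear model, so the least-squares C_t is <x, T> / <x, x>.
    x = np.asarray(m_a) * np.sqrt(np.asarray(U_d))
    T = np.asarray(T)
    return float(np.dot(x, T) / np.dot(x, x))
```

Applied to the reference table above (with \(\dot{m}_{a}\) in mg/s and T in mN), this returns \(C_{t}\) on the dataset's own scale; the same pattern applies to \(C_{h}\), \(C_{m}\), and \(C_{p}\).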
It was said earlier that this method lacks the ability to generate better-performing HETs, but since we generated a larger dataset on the same domain as Lee et al. [7], it is important to verify that our dataset retains the quality of the original. Using the above-mentioned PINNs it was possible to fit the coefficients, and they showed only a slight divergence of a few percent, which is acceptable. ### ML approach notes We have already discussed how the HETFit code works and the results it can generate; an overview is given in the next section. A word of caution: this work is highly experimental, and ML approaches should always be taken with a grain of salt, as some of the plasma discharge physics in HETs is yet to be understood, so the data-driven approach may have prediction errors on specific bands. A few notes on the design tool developed in this work: it is meant to be used by people with little to no ML experience who want to quickly analyze their designs or create a baseline design for simulations. One can even use this tool for general tabular data, as it places essentially no limits on the input data. ### Two input variables prediction One of the main characteristics of any type of thruster is its efficiency; in this work the dependency of multiple input values on \(\eta_{t}\) was studied. The results follow in the form of predicted-matrix visualisations; the combined prediction takes all of the previous inputs into account at the same time, which would be much harder to do without ML. Figure 3: Hall effect thruster geometry. Fig. 4: \(U,d\rightarrow\eta_{t}\) predictions. Fig. 5: \(U,d\rightarrow\eta_{t}\) predictions. Fig. 6: \(d,h\rightarrow\eta_{t}\) predictions. Fig. 7: \(\dot{m}_{a},T\rightarrow\eta_{t}\) predictions. [MISSING_PAGE_POST] ### A NN architecture generation algorithm With 50 iterations, the previously discussed meta-learning model is able to create an architecture with a score of 0.9+ in a matter of seconds. HETFit allows logging to the neptune.ai environment for full control over simulations; an example trial run looks as follows. Figure 9: TPE algorithm architecture optimization. ### Power density and magnetic flux dependence Neither of the models currently supports taking the magnetic flux into account beyond general physics relations, but we plan to update the model in a follow-up paper. For now the relation of \(\widetilde{B}\) to power remains unresolved by the ML approach, but the magnetic field distribution along the z axis is computable for magnetically shielded thrusters. Figure 10: dB/dz distribution. ### Dependency of T on d, P The corresponding graph describes thrust as a function of channel diameter and discharge power, where the hue map is thrust. It is a well-known dependency with roughly a 95% prediction band [7]. ### Dependency of T on P, U ### Dependency of \(I_{sp}\) on d, h We have generated many models so far, but using ML we can build a single model for all of the parameters at once, so these graphs are 3D projections of such a model's inference. ### Use of pretrained model in additive manufacturing of Hall effect thruster channels The above-mentioned model was used to predict the geometry of a channel, and a simulation was then conducted on this channel; a second channel, for comparison, was calculated via the usual scaling laws. The initial conditions were identical for both. The outcome is that the ML geometry results in a higher density of generated ions, which leads to more efficient thrust generation. The HETFit code suggests HET parameters as a lower estimate, to compensate for variables not included in the HET model.
This is experimentally proven to be an efficient estimate, since SEM predictions of thrust are always higher than the real performance (Lee et al. [7]). Figure 13: Dependency of T on \(\dot{m}_{a}\), P. Figure 14: Dependency of \(I_{sp}\) on d, h. **H. Code description** Main concepts: - Each observational/design session is called an environment; for now it can be either RCI or SCI (real or scaled interface) - Most of the run parameters are specified on initialization of this object, including the generation of new samples via the GAN - Built-in feature generation (log10 power, efficiency, \(\vec{B}\), etc.) - Top-feature selection for each case (Boruta algorithm) - Compilation of the environment with the model of choice, which can be any torch or sklearn model - Training - Plotting, inference, saving, export to jit/onnx, and performance measurement **I. COMSOL HET simulations** The simulations were conducted in the COMSOL plasma physics interface, which makes it possible to accurately compute electron densities, temperatures, and energy distribution functions from the initial conditions and geometry. A comparison of both channels is given in the figures. Figure 15: Magnetic flux density distribution, magnetic shielding configuration. Figure 16: Electron density with linear SEM geometry. Figure 17: Electron density with HETFit geometry. ## VI Conclusion In conclusion, another scaling-law model was built and presented. The HETFit code is open source and free to be used by anyone. An additively manufactured channel was printed to prove its manufacturability. Figure 18: COMSOL simulation of the designed thruster start-up. Figure 19: Manufactured channel. Hopefully this work will help in developing more modern scaling relations, as the current ones are far from perfect. The method in this paper, first used by Plyashkov, Shagayda, Kravchenko, Ratnikov, and Lovtsov [1], has advantages over the SEM approach in its ability to predict performance more precisely on a given domain and to account for experimental data. I believe that with more input data the ML method of designing thrusters will become more widely used. The code in this work can also be used with other tabular experimental data, since most cases and tasks tend to be the same: feature selection and model optimization.
2303.07114
Uncertainty quantification in neural network classifiers -- a local linear approach
Classifiers based on neural networks (NN) often lack a measure of uncertainty in the predicted class. We propose a method to estimate the probability mass function (PMF) of the different classes, as well as the covariance of the estimated PMF. First, a local linear approach is used during the training phase to recursively compute the covariance of the parameters in the NN. Secondly, in the classification phase another local linear approach is used to propagate the covariance of the learned NN parameters to the uncertainty in the output of the last layer of the NN. This allows for an efficient Monte Carlo (MC) approach for: (i) estimating the PMF; (ii) calculating the covariance of the estimated PMF; and (iii) proper risk assessment and fusion of multiple classifiers. Two classical image classification tasks, i.e., MNIST and CIFAR10, are used to demonstrate the efficiency of the proposed method.
Magnus Malmström, Isaac Skog, Daniel Axehill, Fredrik Gustafsson
2023-03-10T10:38:24Z
http://arxiv.org/abs/2303.07114v1
# Uncertainty quantification in neural network classifiers - a local linear approach ###### Abstract Classifiers based on neural networks (NN) often lack a measure of uncertainty in the predicted class. We propose a method to estimate the probability mass function (PMF) of the different classes, as well as the covariance of the estimated PMF. First, a local linear approach is used during the training phase to recursively compute the covariance of the parameters in the NN. Secondly, in the classification phase another local linear approach is used to propagate the covariance of the learned NN parameters to the uncertainty in the output of the last layer of the NN. This allows for an efficient Monte Carlo (MC) approach for: (i) estimating the PMF; (ii) calculating the covariance of the estimated PMF; and (iii) proper risk assessment and fusion of multiple classifiers. Two classical image classification tasks, i.e., MNIST and CIFAR10, are used to demonstrate the efficiency of the proposed method. Keywords: Neural networks; Uncertainty descriptions; Information and sensor fusion; Identification and model reduction; Intelligent driver aids; Nonlinear system identification. † This paper was not presented at any IFAC meeting. Corresponding author: M. Malmström. ## 1 Introduction In this paper, the problem of quantifying the uncertainty in the predictions from a neural network (NN) is studied. The uncertainty in the prediction stems from three different sources: errors caused by the optimization algorithm that is used to train the NN, errors in the data (aleatoric uncertainty), and errors in the model (epistemic uncertainty). In this paper, the focus is on uncertainty from the two latter sources. In numerous applications, e.g., image recognition [1], learning properties of atoms [2], and various control tasks [3, 4], NNs have shown high performance. Despite their high performance, the use of NNs in safety-critical applications is limited [5, 6, 7]. This is partly a consequence of the fact that their predictions usually do not come with any measure of certainty, which is crucial to have in a decision-making process in order to know to what degree the prediction can be trusted. Moreover, a quantified measure of uncertainty can be used to detect and remove outliers in the data. Furthermore, it is not possible to fuse the prediction from the NN with information from other sensors without knowledge of the uncertainty. Autonomous driving is an example of a safety-critical application in which it is relevant to be able to perform reliable classification of, e.g., surrounding objects. In particular, this need was highlighted in the fatal Uber accident in 2018, where the lack of reliable classification of surrounding objects played a role in the development of events that eventually led to the accident [8]. The problem of quantifying the uncertainty in the predictions of NNs has lately gained increasing attention, and numerous methods to calculate the uncertainty have been suggested [9, 10, 11, 12, 13]. For a survey of methods, see [14]. The methods suggested in the literature can broadly be divided into two categories. One category is based on creating an ensemble of predictions from which the uncertainty in the prediction is computed [15, 16, 17, 18, 19, 20, 21, 22]. In the other category, the NN structure is extended and the NN is trained to learn its own uncertainty [23, 24, 25, 26, 27, 28].
Concerning the first category, it has for example been suggested to create an ensemble by training multiple NNs, from whose predictions the uncertainty is computed [15]. Since training a single NN is often computationally expensive, this method has high computational complexity. In practice, it is only feasible from a computational perspective to create small ensembles. To decrease the computational complexity, it was suggested in [16, 17] to use already existing regularization techniques (dropout and batch norm) to sample values of the parameters of the NN from which these ensembles can be created. Another method to create ensembles is to sample values of the parameters during the last part of the training phase [18, 19]. So-called test-time data augmentation methods have also been suggested, which perturb the test data to create an ensemble of predictions [20]. Even though the methods in [16, 17, 18, 19, 20] do not need multiple models to be trained, they require multiple forward passes. Furthermore, they require specially tailored training algorithms and carefully constructed NN structures. Another limitation of methods relying on creating ensembles is that they have trouble representing the uncertainty caused by the bias in the prediction resulting from a model mismatch. The bias can be caused by an insufficiently flexible model, which could be a result of too high regularization or too low model order. This problem can be solved by NNs from the second category, i.e., where the structure of the NN is extended such that it learns its own uncertainty in the prediction. However, this requires a more intricate NN structure with tailored loss functions [23, 24, 25, 26]. As a consequence, the training becomes more complex and computationally expensive. It also makes the methods sensitive to errors caused by the training algorithm, which are not possible to learn. Furthermore, there is also a need for more data to train complex model structures. In this paper, we address these two limitations of the aforementioned methods using classical local approximations from the area of system identification [29], sometimes referred to as the _delta method_ [30, 31, 32]. For regression tasks, the delta method has previously been used to quantify the uncertainty in the prediction of NNs, see e.g., [31, 32, 33, 34, 35, 36], and it was extended to classification tasks in [30]. ## 2 Problem formulation and contributions Consider the problem of learning a classifier from the training data set \[\mathcal{T}\triangleq\{y_{n},x_{n}\}_{n=1}^{N} \tag{1}\] Here \(y_{n}\in\{1,\ldots,M\}\) denotes the class label and \(x_{n}\!\in\!\mathbb{R}^{n_{x}}\) the input data of size \(n_{x}\), e.g., pixels in an image. From a statistical point of view, the learning of the classifier can be seen as a system identification problem where a model \(f(x;\theta)\) that predicts the conditional probability mass function (PMF) \(p(y|x)\) of a categorical distribution is to be identified. That is, the probability for \(y=m\) given the input \(x\) is modeled as \[p(y=m|x;\theta)=f_{m}(x;\theta),\quad m=1,\ldots,M \tag{2}\] Here \(\theta\!\in\!\mathbb{R}^{n_{\theta}}\) denotes the \(n_{\theta}\)-dimensional parameter vector that parameterizes the model. Further, the subscript \(m\) denotes the \(m\):th element of the vector-valued output of the function.
To ensure that the model \(f(x;\theta)\) fulfills the properties associated with a PMF, i.e., \(f_{m}(x;\theta)\geq 0\)\(\forall m\) and \(\sum_{m}f_{m}(x;\theta)=1\), it is typically structured as \[f(x;\theta)=\text{softmax}\left(g(x;\theta)\right) \tag{3}\] where \[\text{softmax}(z)\triangleq\frac{1}{\sum_{m=1}^{M}e^{z_{m}}}\begin{bmatrix}e^ {z_{1}}\\ \vdots\\ e^{z_{M}}\end{bmatrix} \tag{4}\] and \(g(x;\theta)\) describes the underlying model of the classifier. In the case \(g_{m}(x;\theta)=\theta^{\top}\phi_{m}(x)\), where \(\phi_{m}(x)\) denotes a, possibly nonlinear, transformation of the input \(x\), the model in (3) becomes a standard multinomial logistic regression model [37]. Furthermore, if the transformation \(\phi_{m}(x)\) is chosen randomly, the model becomes similar to the one used in extreme learning machine classifiers [38]. If an NN is used for classification, then the model is given by \[h^{(0)}=x, \tag{5a}\] \[a^{(l+1)}=\left(h^{(l)}\ 1\right)^{\top}W^{(l)},\quad l=0, \ldots,L-1,\] (5b) \[h^{(l)}=\sigma\big{(}a^{(l)}),\quad l=1,\ldots,L-1,\] (5c) \[g(x;\theta)=a^{(L)}. \tag{5d}\] Here \(\sigma(\cdot)\) denotes the activation function, where the ReLU function \(\sigma(z)=\max(0,z)\) is often used. The latent variable \(a^{(l)}\) denotes the values of all the nodes in the \(l\)'th layer of the NN, and \(h^{(l)}\) denotes the transformation by the activation function of the values of all the nodes in the \(l\)'th layer of the NN. The parameters of the NN model consist of all the weights and biases included in the matrices \(W^{(L)},\ldots,W^{(0)}\), i.e., \[\theta=\left[\text{Vec}(W^{(L)})^{\top}\ \ldots\ \text{Vec}(W^{(0)})^{\top} \right]^{\top}. \tag{5e}\] Here \(\text{Vec}(\cdot)\) denotes the vectorization operator. ### Parameter estimation For most NNs the number of model parameters \(n_{\theta}>N\), and the model parameters \(\theta\) cannot be uniquely identified from the training data \(\mathcal{T}\) without some regularization or prior information regarding the parameters. Let \(p(\theta)\) denote the prior for the model parameters. The maximum a posteriori estimate of the model parameters is then given by \[\hat{\theta}_{N}=\arg\max_{\theta}p(\theta|\mathcal{T})=\arg\max_{\theta}L_{N} (\theta)+\ln p(\theta), \tag{6}\] where \(p(\theta|\mathcal{T})\) denotes the a posteriori distribution of the parameters and \[L_{N}(\theta)=\sum_{n=1}^{N}\ln f_{y_{n}}(x_{n};\theta) \tag{7}\] denotes the cross-entropy likelihood function [37]. Here \(y_{n}\) is used as an index operator for the subscript \(m\) of \(f_{m}(x;\theta)\). ### Prediction and classification Once the classifier has been learned, i.e., a parameter estimate \(\hat{\theta}_{N}\) has been computed, then for a new input data point \(x^{\star}\) the probability mass function can be predicted as \[\hat{p}(y^{\star}=m|x^{\star};\hat{\theta}_{N})=f_{m}(x^{\star};\hat{\theta}_{ N}),\quad m=1,\ldots,M \tag{8}\] and the most likely class can be found as \[\hat{y}^{\star}=\operatorname*{arg\,max}_{m}f_{m}(x^{\star};\hat{\theta}_{N}). \tag{9}\] Note that the full PMF estimate \(f(x;\hat{\theta}_{N})\) is needed both for temporal fusion using several inputs from the same class and for fusion over different classifiers. Furthermore, even small probabilities can pose a large risk, e.g., there might be a pedestrian in front of a car even if another, harmless object is more likely according to the classifier. Hence, it is important that the prediction \(\hat{p}(y^{\star}=m|x^{\star};\hat{\theta}_{N})\) is accurate.
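As a concrete, deliberately minimal instance of (3)-(5), the sketch below builds a one-hidden-layer classifier in PyTorch whose output is the predicted PMF in (8); the class decision (9) is its argmax. The class name and layer sizes are illustrative choices, not from the paper.

```python
import torch

class SoftmaxClassifier(torch.nn.Module):
    # g(x; theta) as in (5) with one hidden ReLU layer, normalized by the
    # softmax (3)-(4) so that the output is the predicted pmf (8).
    def __init__(self, n_x, n_hidden, n_classes):
        super().__init__()
        self.g = torch.nn.Sequential(
            torch.nn.Linear(n_x, n_hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(n_hidden, n_classes),
        )

    def forward(self, x):
        return torch.softmax(self.g(x), dim=-1)

# Classification as in (9): pick the most likely class.
# y_hat = model(x_star).argmax(dim=-1)
```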
However, it is well known that due to, among other things, uncertainties in the parameter estimate \(\hat{\theta}_{N}\), the disagreement between the true and estimated PMF may be significant. Therefore, methods to calibrate the prediction \(\hat{p}(y^{\star}|x^{\star};\hat{\theta}_{N})\) such that it better matches \(p(y^{\star}|x^{\star})\) have been developed. ### Temperature scaling One of the most commonly used methods to calibrate the predicted PMF is called temperature scaling [39]. In temperature scaling, \(g(x;\theta)\) is scaled by a scalar quantity \(T\) before the normalization by the softmax operator. With a slight abuse of notation, introduce \[f(x^{\star};\hat{\theta}_{N},T)=\operatorname*{softmax}\left(g(x^{\star};\hat{ \theta}_{N})/T\right). \tag{10}\] Via the temperature scaling parameter \(T\), the variations between the components (classes) in the predicted PMF can be enhanced or reduced. When \(T\to 0\), then \(f(x^{\star};\hat{\theta}_{N},T)\to\vec{e}_{i}\), where \(\vec{e}_{i}\) denotes the \(i\)th standard basis vector, thereby indicating that input \(x^{\star}\) with total certainty belongs to class \(i\). Similarly, when \(T\to\infty\), then \(f_{m}(x^{\star};\hat{\theta}_{N},T)\to 1/M\;\forall m\), thereby indicating that input \(x^{\star}\) is equally probable to belong to any of the classes. Noteworthy is that the temperature scaling is typically done after the parameters \(\theta\) have been estimated. For notational brevity, the dependency on the temperature scaling parameter \(T\) will only be explicitly stated when temperature scaling is considered. ### Marginalization of parameter uncertainties A more theoretically sound approach to take the uncertainties in the parameter estimate into account is via marginalization of the PMF with respect to the parameter distribution. That is, an estimate of the PMF and its covariance are calculated as \[f(x^{\star}|\mathcal{T})\triangleq\int_{\theta}f(x^{\star}; \theta)p\big{(}\theta|\mathcal{T}\big{)}d\theta \tag{11a}\] \[P^{f}\triangleq\int_{\theta}\!\big{(}f(x^{\star};\theta)-f(x^{ \star}|\mathcal{T})\big{)}\big{(}\cdot\big{)}^{\top}\!p\big{(}\theta|\mathcal{T} \big{)}d\theta \tag{11b}\] From hereon \((x)(\cdot)^{\top}\) is used as shorthand notation for \(xx^{\top}\). The integral in (11a) is generally intractable, but can be approximated by Monte Carlo (MC) sampling as follows \[\theta^{(k)}\sim p\big{(}\theta|\mathcal{T}\big{)},\quad k=1,2, \ldots,K, \tag{12a}\] \[\hat{f}(x^{\star}|\mathcal{T})=\frac{1}{K}\sum_{k=1}^{K}f(x^{ \star};\theta^{(k)})\] (12b) \[\hat{P}^{f}=\frac{1}{K}\sum_{k=1}^{K}\!\big{(}f(x^{\star};\theta^ {(k)})-\hat{f}(x^{\star}|\mathcal{T})\big{)}\big{(}\cdot\big{)}^{\top}\!. \tag{12c}\] Here \(K\) denotes the number of samples used in the MC sampling. ### Challenges and contributions To realize the MC scheme in (12), the posterior parameter distribution \(p\big{(}\theta|\mathcal{T}\big{)}\) must be computed and samples drawn from this high-dimensional distribution. Our contributions are: (i) a local linearization approach that leads to a recursive algorithm of low complexity to compute an approximation of the posterior parameter distribution \(p\big{(}\theta|\mathcal{T}\big{)}\) during the training phase; (ii) a second local linearization approach to reduce the sampling space from an \(n_{\theta}\)- to an \(M\)-dimensional space in the prediction phase; and, as a by-product, (iii) a low-complexity method for risk assessment and information fusion.
## 3 Posterior parameter distribution

Next, a local linearization approach that leads to a recursive algorithm of low complexity to compute an approximation of the posterior parameter distribution \(p\big(\theta|\mathcal{T}\big)\) during the training phase is presented.

### Laplace approximation

Assume that the prior distribution for the model parameters is normal, \(p(\theta)=\mathcal{N}(\theta;0,P_{0})\), i.e., \(l^{2}\) regularization is used. Then a Laplace approximation of the posterior distribution \(p(\theta|\mathcal{T})\) yields that [40]

\[p(\theta|\mathcal{T})\approx\mathcal{N}(\theta;\hat{\theta}_{N},P_{N}^{\theta}), \tag{13}\]

where

\[P_{N}^{\theta}=\left(-\frac{\partial^{2}L_{N}(\theta)}{\partial\theta^{2}}\Bigg{|}_{\theta=\hat{\theta}_{N}}+P_{0}^{-1}\right)^{-1}. \tag{14}\]

That is, the posterior distribution is approximated by a normal distribution with a mean located at the maximum a posteriori estimate and a covariance dependent upon the shape of the likelihood function in the vicinity of the estimate. The accuracy of the approximation will depend upon the amount of information in the training data \(\mathcal{T}\).

### Asymptotic distribution

According to the Bernstein-von Mises theorem [41], if the true model belongs to the considered model set, the maximum a posteriori estimate \(\hat{\theta}_{N}\) converges in distribution,

\[\hat{\theta}_{N}\stackrel{d}{\longrightarrow}\mathcal{N}(\hat{\theta}_{N};\theta_{*},\mathcal{I}_{\theta}^{-1}), \tag{15}\]

when the information in the training data \(\mathcal{T}\) tends to infinity. Here, \(\theta_{*}\) denotes the true parameters and

\[\mathcal{I}_{\theta}\triangleq-\mathrm{E}\bigg\{\frac{\partial^{2}L_{N}(\theta)}{\partial\theta^{2}}\bigg\} \tag{16}\]

is the Fisher information matrix. Given the likelihood function in (7), the Fisher matrix becomes

\[\mathcal{I}_{\theta}\simeq\sum_{n=1}^{N}\sum_{m=1}^{M}\eta_{m,n}\frac{\partial g_{m}(x_{n};\theta)}{\partial\theta}\bigg(\frac{\partial g_{m}(x_{n};\theta)}{\partial\theta}\bigg)^{\!\top} \tag{17a}\]

where

\[\eta_{m,n}\triangleq f_{m}(x_{n};\theta)(1-f_{m}(x_{n};\theta)). \tag{17b}\]

See derivations in Appendix A.

### Recursive computation of covariance

To compute the parameter covariance \(P_{N}^{\theta}\) defined by (14), the Hessian matrix of the log-likelihood (ll) must be calculated and then inverted. This has a complexity of \(\mathcal{O}(NMn_{\theta}^{2}+n_{\theta}^{3})\), which for large \(n_{\theta}\) and \(N\) can become intractable. However, by approximating the Hessian matrix of the ll with the Fisher information matrix as follows

\[P_{N}^{\theta}\approx\left(\mathcal{I}_{\hat{\theta}_{N}}+P_{0}^{-1}\right)^{-1}, \tag{18}\]

the computation can be done recursively and with a complexity of \(\mathcal{O}\big(NMn_{\theta}^{2}+NM^{3}\big)\). To do so, note that the \(\mathcal{I}_{\theta}\) in (17) can be written in a quadratic form by defining

\[u_{m,n}\triangleq\sqrt{\eta_{m,n}}\frac{\partial g_{m}(x_{n};\theta)}{\partial\theta}\Big{|}_{\theta=\hat{\theta}_{N}}. \tag{19}\]

To compute \(u_{m,n}\in\mathbb{R}^{n_{\theta}}\), only the gradient of the ll in (7) is required, which is needed anyway for the estimation of \(\theta\).
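The quadratic structure of (17) and (19) is easy to illustrate; the following numpy sketch accumulates the Fisher matrix from precomputed per-class Jacobians, whose array layout is an assumption of this sketch.

```python
import numpy as np

def fisher_information(jacobians, probs):
    # Approximate Fisher matrix, Eq. (17):
    #   I_theta ~= sum_n sum_m eta_{m,n} u_{m,n} u_{m,n}^T
    # jacobians: (N, M, n_theta) array with dg_m(x_n)/dtheta rows,
    # probs:     (N, M) array with f_m(x_n; theta_hat).
    N, M, n_theta = jacobians.shape
    I = np.zeros((n_theta, n_theta))
    for n in range(N):
        for m in range(M):
            eta = probs[n, m] * (1.0 - probs[n, m])   # Eq. (17b)
            u = np.sqrt(eta) * jacobians[n, m]        # Eq. (19)
            I += np.outer(u, u)                        # rank-1 update
    return I
```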
Since \(\mathcal{I}_{\theta}\), and thus also the covariance \(P_{N}^{\theta}\), can be written in a quadratic form, it is possible to update it recursively as [30]

\[K_{n} =P_{n}^{\theta}U_{n}\big(I_{M}+U_{n}^{\top}P_{n}^{\theta}U_{n}\big)^{-1} \tag{20a}\]
\[P_{n+1}^{\theta} =P_{n}^{\theta}-K_{n}U_{n}^{\top}P_{n}^{\theta}, \tag{20b}\]

where \(I_{r}\) denotes the identity matrix of size \(r\). Here \(P_{n}^{\theta}\) is the parameter covariance for \(n\) measurements, and \(U_{n}\) is defined as

\[U_{n}=\left[u_{1,n}\ \dots\ u_{M,n}\right]\in\mathbb{R}^{n_{\theta}\times M}. \tag{21}\]

The recursion is initialized with \(P_{0}^{\theta}=P_{0}\) (see the code sketch below).

### Approximating the covariance

An nn often has millions of parameters, so the amount of data needed to store \(P_{N}^{\theta}\) may exceed the available memory capacity. A common approach to handle this is to approximate \(P_{N}^{\theta}\) as a block-diagonal matrix [42]. Another common approach is to use the approximation

\[P_{N}^{\theta}\approx\begin{bmatrix}P_{N}^{\theta_{r}}&0\\ 0&0\end{bmatrix}, \tag{22}\]

where \(P_{N}^{\theta_{r}}\) denotes the covariance of the estimated parameters \(\theta_{r}\) corresponding to the weights and biases of the \(r\) last layers in the nn [30, 43]. Depending on the number of included layers, this approximation might be more or less accurate. To compensate for the approximation error when doing the marginalization in (11), a scaling of \(P_{N}^{\theta}\) with factor \(T_{c}\geq 1\) can be introduced. The scaling can be estimated from validation data in a similar manner to the temperature scaling \(T\) in Sec. 2.3.

## 4 Efficient MC sampling

With access to the parameter covariance, one can propagate the uncertainty in the parameters to uncertainty in the prediction with the delta method using the principle of marginalization. Plugging the approximate Gaussian distribution (13) into (11a) gives

\[f(x^{\star}|\mathcal{T})=\int_{\theta}f(x^{\star};\theta)\mathcal{N}\big(\theta;\hat{\theta}_{N},P_{N}^{\theta}\big)d\theta \tag{23a}\]
\[P^{f}=\int_{\theta}\big(f(x^{\star};\theta)-f(x^{\star}|\mathcal{T})\big)\big(\cdot\big)^{\top}\mathcal{N}\big(\theta;\hat{\theta}_{N},P_{N}^{\theta}\big)d\theta \tag{23b}\]

from which an mc approximation can be formed

\[\theta^{(k)}\sim\mathcal{N}\big(\theta;\hat{\theta}_{N},P_{N}^{\theta}\big),\quad k=1,2,\ldots,K, \tag{24a}\]
\[\hat{f}(x^{\star}|\mathcal{T})=\frac{1}{K}\sum_{k=1}^{K}f\big(x^{\star};\theta^{(k)}\big) \tag{24b}\]
\[\hat{P}^{f}=\frac{1}{K}\sum_{k=1}^{K}\big(f(x^{\star};\theta^{(k)})-\hat{f}(x^{\star}|\mathcal{T})\big)\big(\cdot\big)^{\top}. \tag{24c}\]

This is a feasible solution to the problem, but it comes with a high computational cost since it requires drawing mc samples from a high-dimensional Gaussian distribution and evaluating the whole network.

### Marginalization using the delta method

The delta method, see e.g., [31, 32], relies on linearization of the nonlinear model \(g(x;\theta)\) and provides a remedy to the problem of sampling from the high-dimensional Gaussian distribution. The idea is to project the uncertainty in the parameters to uncertainty in the prediction before the softmax normalization (4), thereby drastically reducing the dimension of the distribution that must be sampled.
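Before the delta-method details, here is the promised minimal numpy sketch of the covariance recursion in (20); as in the earlier Fisher sketch, the per-input Jacobians are assumed to be precomputed, and the array layout is an assumption of this sketch.

```python
import numpy as np

def recursive_covariance(jacobians, probs, P0):
    # Rank-M recursive update of the parameter covariance, Eq. (20),
    # initialized with the prior covariance P0. Only an (M x M) matrix
    # is inverted per training input, never the full Hessian.
    P = P0.copy()
    N, M, n_theta = jacobians.shape
    for n in range(N):
        eta = probs[n] * (1.0 - probs[n])              # (M,) from Eq. (17b)
        U = (np.sqrt(eta)[:, None] * jacobians[n]).T   # (n_theta, M), Eq. (21)
        S = np.eye(M) + U.T @ P @ U
        K = P @ U @ np.linalg.inv(S)                   # gain, Eq. (20a)
        P = P - K @ (U.T @ P)                          # Eq. (20b)
    return P
```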
Using the delta method, the uncertainty in the parameters can be propagated to the prediction before the softmax normalization as

\[p(g(x^{\star};\theta)|\mathcal{T})\approx\mathcal{N}\big(g(x^{\star};\theta);\hat{g}_{N},P_{N}^{g}\big) \tag{25a}\]

where

\[\hat{g}_{N}=\mathrm{E}\{g(x^{\star};\theta)\}\simeq g(x^{\star};\hat{\theta}_{N}) \tag{25b}\]

and

\[P_{N}^{g}=\mathrm{Cov}\{g(x^{\star};\theta)\}\simeq\bigg(\frac{\partial}{\partial\theta}g(x^{\star};\theta)\big{|}_{\theta=\hat{\theta}_{N}}\bigg)^{\top}P_{N}^{\theta}\frac{\partial}{\partial\theta}g(x^{\star};\theta)\big{|}_{\theta=\hat{\theta}_{N}}. \tag{25c}\]

Using this Gaussian approximation of the parameter distribution, the mc approximation of the marginalization integral becomes

\[g^{(k)}(x^{\star})\sim\mathcal{N}\big(g(x^{\star},\theta);\hat{g}_{N},P_{N}^{g}\big),\quad k=1,2,\ldots,K \tag{26a}\]
\[f^{(k)}(x^{\star})=\mathrm{softmax}\big(g^{(k)}(x^{\star})\big), \tag{26b}\]
\[\hat{f}(x^{\star}|\mathcal{T})=\frac{1}{K}\sum_{k=1}^{K}f^{(k)}(x^{\star}), \tag{26c}\]
\[\hat{P}^{f}=\frac{1}{K}\sum_{k=1}^{K}\big(f^{(k)}(x^{\star})-\hat{f}(x^{\star}|\mathcal{T})\big)\big(\cdot\big)^{\top}. \tag{26d}\]

To summarize, the main idea of the delta method is linearization performed in two steps. First, the parameter uncertainty is computed using (15), and second, the uncertainty is propagated to the output of the model by (25). Hence, the delta method is a local linear approach that gives a linear approximation of a nonlinear model.

### Fusion

Suppose there is a set of independent classifiers, each one providing a marginal distribution \(\mathcal{N}\big(g_{N,c};\hat{g}_{N,c},P_{N,c}^{g}\big)\), \(c=1,\ldots,C\). Then the predictions (before the softmax normalization) from these classifiers can be fused as follows [44]

\[P_{N}^{g}=\left(\sum_{c=1}^{C}\big(P_{N,c}^{g}\big)^{-1}\right)^{-1}, \tag{27a}\]
\[\hat{g}_{N}=P_{N}^{g}\sum_{c=1}^{C}\big(P_{N,c}^{g}\big)^{-1}\hat{g}_{N,c}. \tag{27b}\]

If a single classifier is used to classify a set of inputs \(x_{c}^{\star}\), \(c=1,\ldots,C\), known to belong to the same class \(y^{\star}\), then these predictions can be fused as follows

\[P_{N}^{g}=(H^{\top}R^{-1}H)^{-1}, \tag{28a}\]
\[\hat{g}_{N}=P_{N}^{g}H^{\top}R^{-1}z \tag{28b}\]

where

\[z=\begin{bmatrix}\hat{g}_{N,1}\\ \vdots\\ \hat{g}_{N,C}\end{bmatrix}\in\mathbb{R}^{CM}\quad H=\begin{bmatrix}I_{M}\\ \vdots\\ I_{M}\end{bmatrix}\in\mathbb{R}^{CM\times M} \tag{28c}\]

and the block \([R]_{i,j}\in\mathbb{R}^{M\times M}\), \(i,j=1,\ldots,C\), of the covariance matrix is given by

\[[R]_{i,j}=\frac{\partial}{\partial\theta}g(x_{i}^{\star};\theta)^{\top}\big{|}_{\theta=\hat{\theta}_{N}}P_{N}^{\theta}\frac{\partial}{\partial\theta}g(x_{j}^{\star};\theta)\big{|}_{\theta=\hat{\theta}_{N}}. \tag{28d}\]

After fusion, the mc sampling in (26) can be applied as before to compute the pmf estimate.

### Risk assessment

Risk assessment can be defined as the probability \(r_{m}\) that \(p(y_{n}^{\star}=m|x_{n}^{\star})>\gamma_{m}\). The probability \(r_{m}\) can be estimated from the identified model \(f_{m}(x_{n}^{\star}|\mathcal{T})\) as follows

\[\hat{r}_{m}=\Pr\{f_{m}(x_{n}^{\star}|\mathcal{T})>\gamma_{m}\}\simeq\frac{1}{K}\sum_{k=1}^{K}\mathbb{1}\big(f_{m}^{(k)}(x_{n}^{\star})>\gamma_{m}\big). \tag{29}\]

Here \(\mathbb{1}(a>b)\) denotes the indicator function, which is one if \(a>b\) and zero otherwise.
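A compact numpy sketch of the delta-method prediction (25)-(26) and the classifier fusion in (27) is given below; names and array layouts are illustrative assumptions of this sketch.

```python
import numpy as np

def delta_method_predict(g_hat, J, P_theta, K=1000, rng=None):
    # Delta-method marginalization, Eqs. (25)-(26): project the parameter
    # covariance to the M-dimensional logit space, sample there, and
    # average the softmax-normalized samples.
    # g_hat: (M,) logits g(x*; theta_hat); J: (n_theta, M) Jacobian.
    rng = np.random.default_rng() if rng is None else rng
    P_g = J.T @ P_theta @ J                                    # Eq. (25c)
    g_samples = rng.multivariate_normal(g_hat, P_g, size=K)    # Eq. (26a)
    z = g_samples - g_samples.max(axis=1, keepdims=True)
    f_samples = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # Eq. (26b)
    f_hat = f_samples.mean(axis=0)                             # Eq. (26c)
    d = f_samples - f_hat
    return f_hat, d.T @ d / K, f_samples                       # Eq. (26d)

def fuse_classifiers(g_hats, P_gs):
    # Information-form fusion of independent classifiers, Eq. (27).
    infos = [np.linalg.inv(P) for P in P_gs]
    P_fused = np.linalg.inv(sum(infos))
    g_fused = P_fused @ sum(Ii @ gi for Ii, gi in zip(infos, g_hats))
    return g_fused, P_fused
```

The returned `f_samples` can be reused directly for the risk assessment in (29), e.g., `np.mean(f_samples[:, m] > gamma_m)`.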
## 5 Validation

Suppose now we have a validation data set \(\mathcal{V}=\{y_{n}^{\circ},x_{n}^{\circ}\}_{n=1}^{N_{\circ}}\). How can we validate the estimated pmf \(\hat{f}(x_{n}^{\circ}|\mathcal{T})\) obtained from (26)? The inherent difficulty is that the validation data, just as the training data, consists of inputs and class labels, not the actual pmf. Indeed, there is a lack of unified qualitative evaluation metrics [14]. That being said, some of the most commonly used metrics are classification accuracy, ll, Brier score, and expected calibration error (ece). Both the negative ll and the Brier score are proper scoring rules, meaning that they emphasize careful and honest assessment of the uncertainty, and are minimized for the true probability vector [45]. However, neither of them is a measure of the calibration, i.e., the reliability of the estimated pmf. Out of these metrics, only ece considers the calibration. Hence, here ece is the most important metric when evaluating a method used to measure the uncertainty [39, 46]. The calculation of the Brier score and ece, together with reliability diagrams, is described next. All of them can be used to tune the temperature scaling \(T\) described in Section 2.3.

### Brier score

The Brier score [45, 47] corresponds to the least squares fit

\[\frac{1}{N_{\circ}}\sum_{n=1}^{N_{\circ}}\sum_{m=1}^{M}\big(\delta_{m,y_{n}^{\circ}}-\hat{p}(y_{n}^{\circ}=m|x_{n}^{\circ})\big)^{2}, \tag{30}\]

where \(\delta_{i,j}\) denotes the Kronecker delta function. Furthermore, \(\hat{p}(y_{n}^{\circ}=m|x_{n}^{\circ})\) denotes a generic pmf estimate.

### Accuracy and reliability diagram

Accuracy and reliability diagrams are calculated as follows. Calculate the \(J\)-bin histogram defined as

\[B_{j}=\left\{n:\frac{j-1}{J}\leq\max_{m}\hat{p}(y_{n}^{\circ}=m|x_{n}^{\circ})<\frac{j}{J}\right\} \tag{31}\]

from the validation data. For a perfect classifier, \(B_{j}=\emptyset\) for \(j<J\). For a classifier that is just guessing, all sets are of equal size, i.e., \(|B_{j}|=|B_{i}|\ \forall i,j\). Note that \(\max_{m}\hat{p}(y_{n}^{\circ}=m|x_{n}^{\circ})\geq 1/M\), so the first bins will be empty if \(J>M\). The accuracy of the classifier is calculated by comparing the size of each set with the actual classification performance within the set. That is,

\[\text{acc}(B_{j})=\frac{1}{|B_{j}|}\sum_{n\in B_{j}}\mathbb{1}\big(\hat{y}_{n}^{\circ}=y_{n}^{\circ}\big) \tag{32a}\]

where

\[\hat{y}_{n}^{\circ}=\operatorname*{arg\,max}_{m}\hat{p}(y_{n}^{\circ}=m|x_{n}^{\circ}) \tag{32b}\]

A reliability diagram is a plot of the accuracy versus the confidence, i.e., the predicted probability frequency. A classifier is said to be calibrated if the accuracy follows the diagonal, i.e., when \(\text{acc}(B_{j})=(j-0.5)/J\).

### Confidence and expected calibration error

Instead of certainty, from hereon the standard, and equivalent, notion of confidence will be used [39, 46]. The mean confidence in a set is denoted \(\text{conf}(B_{j})\) and is defined as

\[\text{conf}(B_{j})=\frac{1}{|B_{j}|}\sum_{n\in B_{j}}\max_{m}\hat{p}(y_{n}^{\circ}=m|x_{n}^{\circ}). \tag{33}\]

This is a measure of how much the classifier trusts its estimated class labels. In contrast to the accuracy, it does not depend on the annotated class labels \(y_{n}^{\circ}\). Comparing accuracy to confidence gives the ece, defined as

\[\textsc{ece}=\sum_{j=1}^{J}\frac{|B_{j}|}{N_{\circ}}\,|\text{acc}(B_{j})-\text{conf}(B_{j})|. \tag{34}\]

A small value indicates that the confidence is a good measure of the actual classification performance.
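The binning, per-bin accuracy, confidence, and ece of (31)-(34) fit in a few lines of numpy; the sketch below uses the standard \(|B_{j}|/N_{\circ}\) bin weighting and illustrative function names, and its outputs can also be used to tune \(T\) or \(T_{c}\) by grid search.

```python
import numpy as np

def ece_and_reliability(probs, labels, J=10):
    # probs: (N, M) estimated pmfs; labels: (N,) class labels.
    # Returns ece, Eq. (34), plus per-bin accuracy/confidence for a
    # reliability diagram (Eqs. (31)-(33)).
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    N = len(labels)
    edges = np.linspace(0.0, 1.0, J + 1)
    ece, acc_bins, conf_bins = 0.0, [], []
    for j in range(J):
        if j < J - 1:
            mask = (edges[j] <= conf) & (conf < edges[j + 1])
        else:
            mask = (edges[j] <= conf) & (conf <= edges[j + 1])
        if not mask.any():
            acc_bins.append(np.nan); conf_bins.append(np.nan)
            continue
        acc_j = np.mean(pred[mask] == labels[mask])
        conf_j = conf[mask].mean()
        acc_bins.append(acc_j); conf_bins.append(conf_j)
        ece += mask.sum() / N * abs(acc_j - conf_j)   # |B_j|/N weighting
    return ece, acc_bins, conf_bins
```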
## 6 Experiment study

To illustrate the application of the proposed method to quantify uncertainty in the prediction, two datasets were used. First, an nn was trained using the mnist dataset [48] to classify images of handwritten digits. Second, an nn was trained on the cifar10 dataset [49] to classify images of ten different objects including, e.g., cars, cats, and aircraft.

### Classification setup

For the two datasets, the structure of the nn was chosen differently. For the mnist dataset, a five-layer fully connected nn was used. For the cifar10 dataset, a LeNet5-inspired structure was used with six convolutional layers followed by four fully connected layers. However, for both datasets the last three layers were chosen to have the same structure, i.e., they were fully connected with \(n_{W,L-2}=100\), \(n_{W,L-1}=40\), and \(n_{W,L}=10\). To decrease the size of the parameter covariance used by the delta method, as described in Sec. 3.4, the first part of the nn was assumed fixed and used to create high-level features. Since the structure of the later layers was chosen identically, the parametric models trained on the two datasets had \(n_{\theta}=4450\) parameters. To estimate the model parameters \(\theta\) of the nn, the adam optimizer [50] was used. The standard adam optimizer settings, together with an initial learning rate of \(10^{-4}\) and \(l^{2}\) regularization of \(10^{-4}\), were used. Three and ten epochs were used with the mnist dataset and cifar10 dataset, respectively.

### Illustration of the uncertainties in the predictions

The low-dimensional space of the output from \(g(x_{n}^{\circ};\hat{\theta}_{N})\) is particularly interesting to study when trying to understand how the uncertainty in the parameter estimate \(\hat{\theta}_{N}\) affects the classification. Even if the parameter covariance \(P_{N}^{\theta}\) is constant and only depends on the training data, the covariance \(P_{N}^{g}\) depends on the input \(x_{n}\). Fig. 1 illustrates this via an example where we concentrate our study on the decision between just a subset of the classes in the mnist dataset, even though the final decision is over all classes. More generally, for some inputs \(x_{n}^{\circ}\) that are located in a dense region in the space of the training data, the covariance \(P_{N}^{g}\) is small, but for an input \(x_{n}^{\circ}\) that is very far from the training data in some norm, the covariance \(P_{N}^{g}\) can be quite large. This indicates that the parameter estimate is quite sensitive in some directions. That means that the output can also be quite sensitive, and a small change in the parameters can give a completely different output. This can be seen in the two examples in the bottom part of Fig. 1. Even though the estimates of the pmf look similar (especially for the two classes under consideration), by studying the unnormalized prediction \(g(x_{n}^{\circ};\hat{\theta}_{N})\) it is clear that the prediction in the middle has a higher uncertainty compared to the bottom one.

### Results on quantifying the uncertainty

Six different methods to quantify the uncertainty in the classification, i.e., to estimate \(p(y_{n}^{\circ}=m|x_{n}^{\circ})\), were evaluated. These are: 1. Standard method, i.e., \(\hat{p}(y_{n}^{\circ}=m|x_{n}^{\circ})=f_{m}(x_{n}^{\circ};\hat{\theta}_{N})\). 2. Temp. scaling, i.e., \(\hat{p}(y_{n}^{\circ}=m|x_{n}^{\circ})=f_{m}(x_{n}^{\circ};\hat{\theta}_{N},T)\). 3.
Deep ensemble, i.e., \(\hat{p}(y_{n}^{\circ}=m|x_{n}^{\circ})\) is estimated using the ensemble method in [15]; the number of trained nns is 50 for mnist and 10 for cifar10. 4. mc-dropout, i.e., \(\hat{p}(y_{n}^{\circ}=m|x_{n}^{\circ})\) is estimated using the ensemble method in [16]; 50 samples of the parameters are used to create the ensemble. 5. Proposed method, i.e., \(\hat{p}(y_{n}^{\circ}=m|x_{n}^{\circ})=\hat{f}_{m}(x_{n}^{\circ}|\mathcal{T})\). 6. Proposed method with scaled covariance, i.e., \(\hat{p}(y_{n}^{\circ}=m|x_{n}^{\circ})=\hat{f}_{m}(x_{n}^{\circ}|\mathcal{T},T_{c})\), but with the covariance \(P_{N}^{g}\) in (25) scaled with a factor \(T_{c}\).

In Fig. 2, the reliability diagram for the six different methods to quantify the uncertainty in the prediction of the nn described in Sec. 6.3 is shown. None of the softmax (i), deep ensembles (iii), mc-dropout (iv), or the proposed method without scaled covariance (v) gives calibrated estimates of the uncertainty. To get well-calibrated estimates of the uncertainty, either the proposed method with scaled covariance (vi) or temperature scaling (ii) should be used. Finding \(T\) and \(T_{c}\) is commonly done by minimizing the ece. However, increasing the scaling factor decreases the ll. Hence, there is a trade-off between high ll and low ece. In Table 1, the accuracy, ll, Brier score, and ece are shown for the six different methods to quantify the uncertainty in the prediction of the nn. The methods are evaluated using both the mnist and cifar10 datasets. Table 1 shows that the proposed method (with scaled covariance) attains the lowest ece for both datasets. This is achieved while still having reasonably good performance in terms of accuracy, ll, and Brier score.

## 7 Summary and Conclusion

A method to estimate the uncertainty in classification performed by a neural network has been proposed. The method also enables information fusion in applications where multiple independent neural networks are used for classification, or when a single neural network is used to classify a sequence of inputs known to belong to the same class. The method can also be used for statistical risk assessment. The proposed method is based on a local linear approach and consists of two steps. In the first step, an approximation of the posterior distribution of the estimated neural network parameters is calculated. This is done using a Laplace approximation where the covariance of the parameters is calculated recursively using the structure of the Fisher information matrix. In the second step, an estimate of the PMF is calculated where the effect of the uncertainty in the estimated parameters is considered using marginalization over the posterior distribution of the parameter estimate. This is done by propagating the uncertainty in the estimated parameters to the uncertainty in the output of the last layer in the neural network using a second local linear approach. The uncertainty in the output of the last layer is approximated as a Gaussian distribution of the same dimension as the number of classes. The PMF and its covariance are then calculated via MC sampling, where samples are drawn from this low-dimensional distribution. The proposed method has been evaluated on two classical classification datasets, MNIST and CIFAR10. Neural networks with standard architectures were used. To handle the large number of parameters in these network architectures, only the parameters of the last layers were considered in the uncertainty computations.
The results, in terms of ECE, show that the proposed method in its standard form yielded a similar performance as standard methods which do not take the uncertainty in the estimated parameters into account. However, when using a rescaled parameter covariance matrix, used to compensate for the fact that only the uncertainty from the parameters in the last layers was considered, a significant reduction in the ECE was observed. This indicates that the proposed method works, but that more advanced low-rank methods to approximate the parameter covariance are needed. This is a direction for future research.

\begin{table} \begin{tabular}{l|c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{mnist} & \multicolumn{4}{c}{cifar10} \\ Method & acc. \(\uparrow\) & ll (\(10^{3}\)) \(\uparrow\) & Brier score \(\downarrow\) & ece \(\downarrow\) & acc. \(\uparrow\) & ll (\(10^{3}\)) \(\uparrow\) & Brier score \(\downarrow\) & ece \(\downarrow\) \\ \hline Standard \(f(x_{n}^{\circ};\hat{\theta}_{N})\) & 91\% & 7.886 & 0.134 & 1.078 & 83\% & 7.904 & 0.291 & 1.328 \\ Temp. sc. \(f(x_{n}^{\circ};\hat{\theta}_{N},T)\) & 91\% & 7.818 & 0.133 & 0.951 & 83\% & 7.740 & 0.269 & 0.612 \\ Deep ensemble & 96\% & 7.856 & 0.080 & 2.868 & 87\% & 7.834 & 0.191 & 1.479 \\ mc-dropout & 93\% & 7.424 & 0.123 & 2.424 & 81\% & 9.935 & 0.301 & 2.829 \\ Prop. met. \(\hat{f}(x_{n}^{\circ}|\mathcal{T})\) & 91\% & 7.845 & 0.151 & 1.242 & 83\% & 8.176 & 0.243 & 2.140 \\ Prop. met. \(\hat{f}(x_{n}^{\circ}|\mathcal{T},T_{\mathrm{c}})\) & 91\% & 7.763 & 0.151 & 0.821 & 82\% & 7.545 & 0.239 & 0.540 \\ \hline \hline \end{tabular} \end{table}

Table 1: Computed performance measures for the two datasets. The arrows indicate whether a high or low value is preferable.

Figure 2: Reliability diagrams for prediction on the mnist dataset. The diagrams illustrate the six different methods to measure the confidence in the prediction described in Sec. 6.3. A calibration line is also shown in black.

## Acknowledgements

This work is supported by Sweden's innovation agency, Vinnova, through project iQDeep (project number 2018-02700).
2303.03400
Testing the Channels of Convolutional Neural Networks
Neural networks have complex structures, and thus it is hard to understand their inner workings and ensure correctness. To understand and debug convolutional neural networks (CNNs) we propose techniques for testing the channels of CNNs. We design FtGAN, an extension to GAN, that can generate test data with varying the intensity (i.e., sum of the neurons) of a channel of a target CNN. We also proposed a channel selection algorithm to find representative channels for testing. To efficiently inspect the target CNN's inference computations, we define unexpectedness score, which estimates how similar the inference computation of the test data is to that of the training data. We evaluated FtGAN with five public datasets and showed that our techniques successfully identify defective channels in five different CNN models.
Kang Choi, Donghyun Son, Younghoon Kim, Jiwon Seo
2023-03-06T09:58:39Z
http://arxiv.org/abs/2303.03400v1
# Testing the Channels of Convolutional Neural Networks

###### Abstract

Neural networks have complex structures, and thus it is hard to understand their inner workings and ensure correctness. To understand and debug convolutional neural networks (CNNs) we propose techniques for testing the channels of CNNs. We design FtGAN, an extension to GAN, that can generate test data with varying the intensity (i.e., sum of the neurons) of a channel of a target CNN. We also proposed a channel selection algorithm to find representative channels for testing. To efficiently inspect the target CNN's inference computations, we define _unexpectedness_ score, which estimates how similar the inference computation of the test data is to that of the training data. We evaluated FtGAN with five public datasets and showed that our techniques successfully identify defective channels in five different CNN models.

## 1 Introduction

Deep neural networks (DNNs) are used in many application domains. As more DNN models are deployed, it becomes more important to ensure that they function correctly and reliably. However, due to their complexity, it is difficult to analytically verify the correctness of DNNs [13]. To practically test neural networks, test input generation has been studied. Most well-known is adversarial example generation, which finds minimal perturbations to the input to deceive a target neural network [1]. The perturbed inputs, i.e., adversarial examples, may be used to assess the robustness of DNNs against adversarial attacks. While these techniques are effective in finding adversarial examples, they focus on low-level neuron operations and do not examine, for example, the interactions of the feature maps in convolutional neural networks (CNNs).

Testing is a widely studied topic in software engineering. Many techniques have been developed to test the correctness of software systems. For example, test input generation explores the ranges of certain variables such as array indices and finds the inputs that induce buffer overflows [12, 13]. Also, a common technique in software testing is to check for inconsistencies in parts of larger systems [1]; researchers discovered that implicit invariants exist for certain functions or modules, and their violations often result in invalid system states [1, 14].

In this paper, we employ these testing strategies in software engineering for neural networks. In particular, we propose _channel-wise_ testing of CNNs, which is a test generation technique for convolutional neural networks. The channels in CNNs are, to some extent, similar to the functions and modules in software; they both are logical units of larger systems. As with unit testing in software engineering, testing individual channels in a modular manner helps to debug and understand neural networks. We generate test data to separately examine the behavior of individual channels and check for their (in)consistencies. With this consistency information, we rank the test data and report (potentially) defect-inducing inputs and the corresponding channels. With channel-wise testing, we aim to find defects in CNNs (i.e., unintended inference outcomes) that are caused by channels having deviant behavior at high or low activation levels. To this end, we designed FtGAN, an extension to GAN that tests the channels of target CNNs. FtGAN is trained, in an unsupervised manner, to find the latent variables that are correlated with the CNNs' channels.
Using FtGAN, we gradually vary the latent variables in the generated test data, which then affects the correlated channels in the tested CNNs. To identify inconsistent behavior of the channels with generated test data (compared to that with training data), we define the _unexpectedness score_, which compares the inference computations and estimates their inconsistency. This paper proposes channel-wise testing of CNNs. Our contributions are threefold: 1) we designed FtGAN to test selected channels of CNNs (Section 3), 2) we developed a channel selection algorithm to find representative channels for testing (Section 4), and 3) we identify defect-inducing test data with the unexpectedness score, which uses channel correlations to estimate inconsistencies in the inference computation (Section 5). Our evaluation shows that FtGAN helps to find real and synthetic defects in neural networks.

## 2 Related Work and Motivation

We describe existing studies that are closely relevant to our work, that is, adversarial attacks, semantic image transformations, and coverage-guided testing. Then we discuss our preliminary experiments that motivated this study.

**Adversarial Attacks.** Neural networks are known to be susceptible to imperceptible perturbations. Deceiving neural networks by exploiting this property is referred to as adversarial attack [12]. Techniques for adversarial attack have been extensively studied [13, 14, 15]. However, these techniques search only in raw pixel space to find adversarial examples, and they cannot handle certain realistic variations of attributes, such as light conditions [16]. Recently, adversarial attack techniques with distance metrics other than the \(L_{p}\) norm have been studied [13, 14].

**Semantic Image Transformation.** To further explore diverse attacks on neural networks, techniques based on semantic image transformations have been studied [1, 1, 15]. Particularly, studies based on deep generative models have been extensively conducted. For example, Bhattad et al. make use of texture transfer models to attack neural networks. For more general semantic adversarial attacks, attribute-conditioned image editing models have been exploited [1, 15, 16, 17]. Joshi et al. leveraged an attribute-editing GAN to search over the range of attributes to generate adversarial examples; Wu et al. proposed RelGAN, which progressively modifies attributes with its _relative attributes_. While these techniques effectively generate semantic adversarial examples, they require manual annotation of attributes, which is costly.

**Coverage-Guided Neural Network Testing.** In software engineering, test coverage metrics, such as path coverage, measure the fraction of code that is exercised by a test suite; such metrics also assess the quality of test suites. Similar metrics have recently been proposed for neural networks [1, 14, 15, 16]. DeepXplore introduced the notion of _neuron coverage_ that represents the fraction of neurons activated by a set of test inputs. The metric is then used to simulate domain-specific perturbations [15]. Other metrics, such as neuron boundary coverage, are proposed as test coverage metrics for neural networks [16, 17, 18]. TensorFuzz adopted coverage-guided fuzzing to efficiently find test inputs that violate certain properties in application domains [1]. Our work is inspired by these techniques and further studies channel-level testing and coverage metrics.

**Motivating Channel-Wise Testing.** Odena et al. applied coverage-guided testing for neural networks [15]. Their technique, i.e.
TensorFuzz, generates a corpus of test inputs to find the inputs that violate certain domain properties. For an efficient search over the input space, TensorFuzz records the internal states (i.e., activation vectors) of the neural network and creates the corpus consisting of _dissimilar_ test inputs. They show that TensorFuzz can find error-inducing test inputs for fault-injected models and real-world ones. TensorFuzz shows that coverage-guided test generation is helpful for debugging and understanding neural networks. While it employs internal neuron activations for the coverage metric, we considered that higher-level metrics may also be useful. Specifically, we conjectured that CNNs' trained features (i.e., channels of hidden layers) are well-suited for the coverage metric as they are activated in varying degrees of _intensity_ for different inputs, where intensity denotes the sum of the channel's neuron values. Moreover, trained features are a higher-level measure than neuron activations, and thus testing with feature intensities may give a different perspective in understanding neural networks.

Our preliminary experiments showed that TensorFuzz cannot test the diverse channels of CNNs. The details are discussed in Section 7.4, but TensorFuzz covers less than 35% of all features in tested CNNs; the majority of the features are not well tested. When we systematically test them, hidden issues in the tested CNNs are uncovered. Consider the data corruption problem [1] that may cause defects vulnerable to input distribution shift. Specifically, in a face identification task, assume that the training data is corrupted such that faces with a certain attribute (e.g., green hair) are all identified as the same person. Again, the details are in Section 7.2, but FtGAN successfully revealed the problem by identifying the channel that is correlated to the attribute and generating defect-inducing test data. In contrast, TensorFuzz is based on randomized fuzzing, and its noise-augmented test data does not help to find the problem.

## 3 Taming GAN for Testing CNN's Channels

### Introducing FtGAN

GAN-based techniques have been studied to generate realistic test input for neural networks [1, 15]. We also make use of GAN and propose FtGAN for testing CNN's channels. In particular, we use GAN's capability to learn latent variables in the training data and generate images with varying the latent variables [1]. However, instead of learning _any_ latent variables, we train FtGAN to learn those that are highly related to the selected channels of a tested CNN [10, 15]. Specifically, FtGAN is conditioned on an auxiliary input \(I\) that indicates the _intensity_ of the CNN's selected channel, i.e., the summation of the channel's neuron values. By varying \(I\), we can control the generated images so that they activate the selected channel with varying intensity levels. We use the channel intensity as the control input because CNN's channel-wise mean and variance are known to capture the styles and attributes of images [15, 10, 16]; also, the channel mean and variance were previously used to control style features such as textures in generator networks [10, 15]. FtGAN consists of generator \(G\) and discriminator \(D\) as shown in Fig. 1. FtGAN makes use of the target CNN \(T\) and its selected channel \(c\) for its training. The encoder \(G_{enc}\) is optional, and if used, the generated output is reconstructed from the input with the decoder's transformation.
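Since the channel intensity \(T_{c}(x)\) is central to everything that follows, a minimal PyTorch sketch of it may be useful; the hook-based extraction and the function names are illustrative assumptions of this sketch, not the paper's implementation.

```python
import torch

def channel_intensity(feature_map, c):
    # T_c(x): the sum of channel c's neuron values; feature_map has
    # shape (batch, channels, H, W).
    return feature_map[:, c].sum(dim=(1, 2))

def intensity_of(model, layer, x, c):
    # Capture the activation of a chosen layer with a forward hook and
    # return the intensity of channel c for the input batch x.
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    with torch.no_grad():
        model(x)
    handle.remove()
    return channel_intensity(acts["out"], c)
```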
The input \(I\) is a scalar value that controls channel \(c\)'s intensity (\(T_{c}\)), that is, the sum of \(c\)'s neuron values. The discriminator has two parts; one that distinguishes real data from the generated ones and another that infers the value of \(I\). The target CNN \(T\), which is being tested, is used by FtGAN as a reference point. Its selected channel's intensity is directed to \(G\), giving the guidance to find the latent variable related to the channel.

### Architecture of FtGAN

Our goal is to train a generator \(G\) that learns to generate images with varying the intensity of target channel \(c\). To achieve this, we train \(G:(x,I)\to x^{\prime}\) to transform input image \(x\) into output image \(x^{\prime}\) that makes the model \(T\) yield \(T_{c}(x^{\prime})\sim I\) on target channel \(c\). \(T_{c}\) is the function returning the intensity of channel \(c\), and \(I\) is the target intensity calculated as \(I\)=\(T_{c}(x)\cdot(1+r)\) with a distortion rate \(r\). We set \(r\) such that \(1+r\) ranges from 0.33 to 3 in our experiments. We also have an auxiliary classifier in the discriminator to predict the channel intensity \(I\). For training FtGAN we define four loss terms, namely, adversarial loss, channel intensity loss, auxiliary regression loss, and reconstruction loss. We describe these four loss terms in the following.

**Adversarial loss:** For stable training, we adopt WGAN's adversarial loss [1] that minimizes the Wasserstein-1 distance between the real and generated distributions. Let \(D_{img}\) be the discriminator which outputs the probability that its input is real data. Then the adversarial loss is

\[L_{adv}=E_{x}[D_{img}(x)]-E_{x,I}[D_{img}(G(x,I))] \tag{1}\]

We also apply the gradient penalty for the Lipschitz constraint [1].

**Channel intensity loss:** We control the intensity of the channel \(c\) separately from the remaining channels in the same layer \(L\). Therefore, we employ the channel intensity loss \(L_{int}\) for the generator to constrain the generated image \(x^{\prime}\) to produce the desired channel intensity \(I\), formulated as follows (\(L\) is the set of all channels in layer \(L\)); a code sketch of this loss is given below.

\[L_{int}=|T_{c}(x^{\prime})-I|+\frac{1}{|L|-1}\sum_{c^{\prime}\in L,\,c^{\prime}\neq c}|T_{c^{\prime}}(x^{\prime})-T_{c^{\prime}}(x)| \tag{2}\]

**Auxiliary regression loss:** Our goal is to transform an input image \(x\) into a realistic image \(x^{\prime}\), which yields the intended intensity on the target channel. Therefore we add an auxiliary regressor \(D_{aux}\) that shares the convolution layers with \(D_{img}\) and define the auxiliary regression loss for both \(D\) and \(G\). The auxiliary regression loss for real images is defined as

\[L_{aux}^{real}=E_{x}[-\log Q_{aux}(x,T_{c}(x))] \tag{3}\]

where \(Q_{aux}(x,I)\) is modeled with a normal distribution whose mean \(\mu_{aux}(x)\) and variance \(\sigma_{aux}^{2}(x)\) are estimated by the auxiliary regressor \(D_{aux}\) for input \(x\). The log of the normal density, \(\log Q_{aux}(x,I)\), is calculated as

\[-\frac{1}{2}\log\left(2\pi\sigma_{aux}^{2}(x)+\epsilon\right)-\frac{1}{2}\left(\frac{I-\mu_{aux}(x)}{\sigma_{aux}(x)+\epsilon}\right)^{2} \tag{4}\]

where \(\epsilon\) is a small positive value. By minimizing this objective, \(D_{aux}\) learns to estimate the channel intensity \(T_{c}(x)\) of a real image \(x\). The pairs of real images \(x\) and their target channel intensities \((x,T_{c}(x))\) are used to train \(D_{aux}\) using the above loss \(L_{aux}^{real}\).
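Before continuing with the remaining loss terms, here is the promised sketch of the channel intensity loss in (2); it assumes the target CNN's feature maps for \(x\) and \(x^{\prime}\) have already been extracted (e.g., with the hook shown earlier), and its names are illustrative.

```python
import torch

def channel_intensity_loss(T_feats_fake, T_feats_real, c, I_target):
    # L_int, Eq. (2): push channel c of the generated image x' towards
    # the target intensity I while keeping the other channels of the
    # same layer close to their values for the original image x.
    # T_feats_*: target-CNN feature maps of shape (batch, C, H, W).
    inten_fake = T_feats_fake.sum(dim=(2, 3))   # (batch, C) intensities
    inten_real = T_feats_real.sum(dim=(2, 3))
    C = inten_fake.shape[1]
    target_term = (inten_fake[:, c] - I_target).abs()
    others = torch.ones(C, dtype=torch.bool)
    others[c] = False                           # the |L|-1 other channels
    keep_term = (inten_fake[:, others] - inten_real[:, others]).abs().mean(dim=1)
    return (target_term + keep_term).mean()
```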
Furthermore, the loss function for the auxiliary regression with fake image \(x^{\prime}\), which is generated by \(G\) with the intensity \(I=T_{c}(x)\cdot(1+r)\), is formulated as follows:

\[L_{aux}^{fake}=E_{x,I}[-\log Q_{aux}(G(x,I),I)] \tag{5}\]

The above loss function is then used to train \(G\) to generate images with the intended intensity \(I\) on the channel \(c\). For each input \(x\) in the training dataset and randomly selected distortion rate \(r\), we generate \(x^{\prime}\)=\(G(x,T_{c}(x)\cdot(1+r))\) and the pair \((x^{\prime},T_{c}(x)\cdot(1+r))\) is then used for the training.

**Reconstruction Loss:** We use the reconstruction loss [1] to train the generator to produce images that are similar to the original images, given the same channel intensity. This is achieved by the following objective:

\[L_{rec}=E_{x}\big[\|x-G(x,T_{c}(x))\|_{1}\big] \tag{6}\]

By minimizing this loss, the generator can produce images that closely approximate the original images when the same channel intensity is given as input.

**Overall Objective:** The final objective for the generator is

\[L_{enc,dec}=\lambda_{adv}L_{adv}+\lambda_{rec}L_{rec}+\lambda_{int}L_{int}+\lambda_{aux,f}L_{aux}^{fake} \tag{7}\]

and the objective for the discriminator/auxiliary classifier is

\[L_{dis,aux}=-\lambda_{dis}L_{adv}+\lambda_{aux,r}L_{aux}^{real}+\lambda_{gp}GP \tag{8}\]

We set the coefficients in Eqns. 7 and 8 so that the loss terms are of the same order of magnitude [1, 1, 1]. We set \(\lambda_{adv}\)=\(\lambda_{dis}\)=\(1\), \(\lambda_{rec}\)=\(100\), \(\lambda_{aux,*}\)=\(5\), \(\lambda_{gp}\)=\(1\) (for the gradient penalty \(GP\)); the value of \(\lambda_{int}\) varies for the datasets (e.g., 0.5 for CelebA).

### Reducing the Cost of Training FtGAN

To test a CNN's channel, we need to train an FtGAN instance. Although the cost of training FtGAN is not trivial, it can be reduced with pre-training. That is, we pre-train FtGAN to generate realistic images without the target CNN; then we fine-tune FtGAN with the intensity loss from the target CNN. This way, we pre-train FtGAN once and fine-tune it multiple times to test multiple channels. The cost of the fine-tuning is less than a quarter of that of the pre-training.

## 4 Coverage-Guided Channel Selection

FtGAN generates realistic images for testing a CNN's channels. However, recent CNN models have many layers and channels, and thus it is not feasible to test all those channels with FtGAN. Thus, we propose to test a subset of the channels that is _representative_ of all or most of the channels. The key idea is to exploit the correlations between CNN channel intensities [1, 1, 1]; i.e., we select a subset \(S\) of the channels having high correlations (either positive or negative) with the channels that are not in \(S\). Then, testing the channels in \(S\) would have the effect of indirectly testing the other channels. We now formally define the channel selection problem as follows. Let \(corr(c_{i},c_{j})\) be the Pearson correlation between channel \(c_{i}\) and \(c_{j}\). For given test inputs, \(corr(c_{i},c_{j})\) is computed with the pairs of the intensities of \(c_{i}\) and \(c_{j}\). We compute the correlations for all pairs of the convolutional channels. Let \(V\) be the set of all channels in the convolutional layers of the target network.

Figure 1: FtGAN Architecture (\(G\): generator, \(D\): discriminator, \(T\): the target CNN, and \(T_{c}\): its tested channel’s intensity).
With subset \(S{\subseteq}V\), we presume that a channel \(c_{i}{\notin}S\) can be _indirectly_ tested by one of the channels in \(S\) if its correlation with \(c_{i}\) is larger than a given threshold. We denote by \(corr(c_{i},S)\) the maximum correlation between a channel in \(S\) and \(c_{i}\), i.e., \(\max_{\gamma\in S}corr(\gamma,c_{i})\). Let \(\theta\) be the minimum correlation threshold to indirectly test the channels not in \(S\). Then, the problem of finding the minimum subset \(S\) for testing all channels in \(V\) is formulated as

\[\arg\min_{S\subseteq V}|S|\ \text{ s.t. }\ \min_{c_{i}\in V}corr(c_{i},S)\geq\theta. \tag{9}\]

In other words, our channel selection problem finds the smallest set \(S\) such that for all channels \(c_{i}\) in \(V\), there exists at least one channel \(c_{j}\) in \(S\) with \(corr(c_{i},c_{j}){\geq}\theta\).

**Proposition 4.1**: _Minimal channel selection (Eqn. 9) is equivalent to the minimal hitting set problem._

Let \(\delta(c_{i})\) for \(c_{i}{\in}V\) denote the set of channels \(c_{j}{\in}V\) with \(corr(c_{i},c_{j}){\geq}\theta\). That is, \(\delta(c_{i})\) is a set of channels that can be tested instead of \(c_{i}\) as their correlations are higher than \(\theta\). A feasible solution to the channel selection problem is a set that has at least one channel in \(\delta(c_{i})\) for all \(c_{i}{\in}V\). Formulated in this way, this problem is equivalent to the minimal hitting set problem (Ausiello, D'Atri, and Protasi 1980). The minimal hitting set problem is NP-complete and equivalent to the minimal set cover problem. Due to Proposition 4.1, we can obtain a greedy approximate algorithm as shown in Algorithm 1, whose approximation ratio is proven to be \(\ln(n)+1\) where \(n\) is the number of channels.

## 5 Testing Channels with Unexpectedness

Canonical correlation analysis (CCA) has been used to analyze and understand the latent representations, i.e., features in channels, of neural networks (Hardoon, Szedmak, and Shawe-Taylor 2004; Li et al. 2015; Morcos, Raghu, and Bengio 2018; Raghu et al. 2017). The method finds linear transformations that maximize the correlations between multidimensional variables (Hotelling 1992). Recently, Raghu et al. applied CCA to ResNet models and demonstrated that the correlation of the hidden neurons to different labeling classes is distinctly different. We use CCA to analyze the target CNN's inference computation for generated test data and find inconsistent channel behavior. Since FtGAN varies the intensity of tested channels, we apply CCA at a channel level and compute the correlation of channel intensities. Also, because CCA is very expensive to compute (Raghu et al. 2017), we approximate it by computing _pair-wise_ channel correlations and use them as the reference points of the inference computation. That is, we calculate the pair-wise channel correlations using the training data labeled as the same class, and find the top-\(k\) channel pairs of maximum correlation coefficients. Then we use them as the reference point for the class and compare them to the correlations computed with generated test data of the same class. Our experiments (similar to the one by Raghu et al. and described in the supplementary material) show that the comparison reflects the similarity (or unexpectedness) of their inference computations.
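For illustration, the following Python sketch computes the pairwise intensity correlations and the greedy selection of Algorithm 1 (whose pseudocode appears below); the threshold on raw, signed correlations follows Eqn. 9, while absolute values could be used to also exploit anti-correlated channels, as the text suggests. The same correlation matrices, computed once on training data and once on generated test data, feed the unexpectedness comparison described next.

```python
import numpy as np

def pairwise_channel_correlations(intensities):
    # intensities: (N, C) matrix of channel intensities T_c(x) collected
    # over N inputs; returns the (C, C) Pearson correlation matrix.
    return np.corrcoef(intensities, rowvar=False)

def greedy_channel_selection(corr, theta=0.5):
    # Greedy set-cover approximation of the selection problem in Eq. (9):
    # repeatedly pick the channel whose delta-set covers the most
    # not-yet-covered channels.
    C = corr.shape[0]
    delta = [set(np.flatnonzero(corr[i] >= theta)) for i in range(C)]
    covered, S = set(), []
    while len(covered) < C:
        c_star = max(range(C), key=lambda i: len(delta[i] - covered))
        covered |= delta[c_star]
        S.append(c_star)
    return S
```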
We do not claim that the unexpectedness score accurately measures the similarity (or inconsistency) of inference computations; we argue with experiments that the score helps to identify deviant channel behavior. This is similar to many testing techniques in software engineering that rank the test results (potential bugs) by certain evaluation scores and report the higher-ranked ones. More formally, for testing a selected channel in layer \(L\), we define the _unexpectedness_ score as the L1 distance of \(L\)'s top-\(k\) channel correlations with the training data to those with the generated test data of the same class; i.e., the unexpectedness score is \(\sum_{(c_{i},c_{j})\in\text{TopK}}|corr_{X(T)}(c_{i},c_{j})-corr_{X^{\prime}(T)}(c_{i},c_{j})|\), where \(X(T)\) and \(X^{\prime}(T)\) are the training data and generated test data in class \(T\), \(corr_{X(T)}\) is the correlation computed with \(X(T)\), and TopK is the set of channel pairs in \(L\) with top-\(k\) correlations for the training data; we set \(k{=}5\%\) in our evaluation. After we generate test data for selected channels, we measure and rank the unexpectedness of the generated data in each class. Then we report the tested channels, the classes, and the test data of the classes ranked by their unexpectedness, highlighting the test data that changed the inference outcome.

```
Input: \(\forall c_{i}\in V,\ \delta(c_{i})=\{c_{j}\in V\,|\,corr(c_{i},c_{j})\geq\theta\}\);  Output: Channel set \(S\)
1: Initialize \(C\leftarrow\emptyset\) and \(S\leftarrow\emptyset\);
2: while \(C\neq V\) do
3:   \(c^{*}\leftarrow\arg\max_{c_{i}\in V-C}|\delta(c_{i})\setminus C|\);  \(C\leftarrow C\cup\delta(c^{*})\);  \(S\leftarrow S\cup\{c^{*}\}\);
4: end while
```
**Algorithm 1:** Greedy channel selection algorithm.

## 6 Discussion

**Limitation.** Testing with FtGAN is limited by the capability of GAN. That is, FtGAN learns the attributes that are present in the training dataset. Thus we can only test with those attributes in the dataset that are correlated to the selected channels of target CNNs. This limits the types of bugs that our technique can detect. However, testing is generally considered to be opportunistic, and similar limitations exist in many testing tools, especially in those that check deviant runtime behaviors (Engler et al. 2001; Ernst et al. 2007; Haller et al. 2013). It is more important for a testing tool to find real bugs in practice, which we show in our evaluation. Another limitation is that our testing requires human examination of the test results. We minimize this by computing unexpectedness scores and ranking the results by the scores. Thus only a small subset of the test results needs to be examined. This is similar to many software testing tools that rank test results (potential bugs) by certain scores (Engler et al. 2001; Ernst et al. 2007; Haller et al. 2013). In all our experiments, buggy channels are in the top-5 by unexpectedness score, which made human intervention reasonably small.

**Multi-Channel Testing.** Although FtGAN tests a single channel at a time, its generated test data incorporates the changes of other correlated (or anti-correlated) channels, as we show in our supplementary material (Fig. 8). Hence those correlated channels are collectively tested in effect. Moreover, testing multiple (non-correlated) channels is supported with FtGAN by chaining multiple FtGAN instances.
That is, if we want to test two channels \(c_{1}\) and \(c_{2}\), we can direct the output of the FtGAN trained for \(c_{1}\) as the input of another FtGAN trained for \(c_{2}\). We have tested multiple channels in this way for a subset of our experiments, which we describe in Section 7.2. Chaining multiple GANs in a similar manner was previously studied for generating high resolution images [22] or transforming poses and expressions of facial images [23].

## 7 Evaluation

We evaluated whether FtGAN can effectively test CNNs' channels with three sets of experiments: 1) one for making realistic (and error-inducing) attribute variations (Section 7.1), 2) another for finding the channels that are correlated to error-inducing attributes in bug-injected and real-world CNN models (7.2 and 7.3), and 3) the last for ensuring the test coverage of our greedy channel selection (7.4). We evaluated our technique with five datasets in four domains; they are MNIST, SVHN, VGG Face, CelebA, and CARLA [1]. MNIST and SVHN are for digit recognition, VGG Face is for face identification, CelebA is for face attribute recognition, and CARLA is for autonomous driving. Table 1 shows the datasets. Moreover, we tested with five models - LeNet, AlexNet, VGG-16, ResNet-50, and CARLA-CNN (a custom CNN model for autonomous driving). LeNet/AlexNet are trained to classify the categories in the dataset (MNIST/SVHN) or to detect a subset of the attributes in the dataset (CelebA); VGG/ResNet are trained to detect the identities in the VGG Face dataset. We used NVIDIA Titan XP for all the experiments. On a single Titan XP, the fine-tuning of FtGAN for one channel takes ten minutes for SVHN and an hour for CelebA. We mainly evaluated our testing techniques qualitatively. This is similar to the evaluation of software testing techniques that analyze program behavior to infer implicit invariants and report their violations [15, 1, 16, 17, 18, 19, 20, 21].

### Efficacy of FtGAN's Test Data Generation

We first evaluate whether FtGAN generates realistic images that induce the tested channels with varying intensities. Then we examine the unexpectedness scores of the generated data and discuss the validity of the scores. For this evaluation, we mostly used the face datasets and the corresponding CNN models. We ran greedy channel selection to select twenty channels for each CNN instance. Then we trained FtGAN for the selected channels and generated test data using the images in the test sets as the seeds. We set the intensity input \(I\) to be between 0.33 and 3 times that of the seeds for the test data generation. We show some of the generated test data in Fig. 2 (more in the supplementary material). We observed that as we vary the intensity input \(I\), the correlated latent attributes gradually change in all the generated test images. We also observed that some of the latent attributes for the channels are human recognizable (a-d in Fig. 2) and others are not (e-h). The attributes that are human recognizable are, namely, hair color, face mask, face color, and age, respectively for a-d; we call these channels by their correlated attribute names, e.g., age channel. We can see that the test data for these attributes show realistic variations.

**Validity of Unexpectedness Score.** We analyzed the unexpectedness scores of the channels for Fig. 2. Due to the space limit, we discuss the details in the supplementary material, but our analysis shows that the higher scores generally indicate inconsistent inference computations.

\begin{table} \begin{tabular}{|l|c|l|} \hline **Dataset** & **Model** & **Task** \\ \hline \hline MNIST/SVHN & LeNet \& AlexNet & Digit recognition \\ \hline VGG Face & VGG-16 \& ResNet-50 & Face identification \\ \hline CelebA & AlexNet & Face attribute detection \\ \hline CARLA & CARLA-CNN & Autonomous driving \\ \hline \end{tabular} \end{table}

Table 1: Evaluated Datasets and Models.

Figure 2: Test data made by FtGAN. The latent attributes for a–d are human-recognizable and those for e–g are not.
The channels a-c in Fig. 2 have high unexpectedness scores, and we observed several inconsistent behaviors for them. For example, testing the face color channel with the _pale-skin_ class data generated pink-ish face data, which confused the classification of the age attribute; i.e., the generated images are classified as not _young_ even though the seed images are labeled as such. Fig. 2(c, right) shows an example; with the pink-ish skin she is incorrectly classified as not _young_. Also, the hair color channel shows similar behavior and confused the _pale-skin_ classifier. The supplementary material has more discussions and experiments with other (non-facial) datasets.

### Finding Data Corruption Bugs with FtGAN

We evaluate if FtGAN helps to find bugs in bug-injected CNN instances. We trained two CNN models to have defects (of being vulnerable to input attribute distribution shift) on purpose by corrupting the training dataset. For the defect injection, we use Morpho-MNIST, which extends MNIST with morphometric transformations such as "thickening" or "swelling" [10]. We trained two AlexNet models with the combined dataset of MNIST and Morpho-MNIST, one with the thickening and another with the swelling transformation. The images in Morpho-MNIST are given the same incorrect label (i.e., 0) so that the two AlexNets have the defects of incorrectly classifying the thickened or swollen digits. The accuracy of the models is 99% for the normal digits. For the thickened or swollen digits, the models are only 2% and 1% accurate, respectively.

For the two AlexNet instances, namely AlexNet-TH(ick) and AlexNet-SW(ell), 1) we applied the greedy channel selection and trained FtGAN with the twenty selected channels, 2) generated test images with the channel intensity from 0.33 to 3 times of the original, and 3) measured the unexpectedness scores for the channels; we ranked the test data for the channels by the score and examined the top-5 channels' test data in detail. For both AlexNet-TH and AlexNet-SW, we noticed that for one particular channel, namely the thick channel and the swollen channel, respectively, its test data result in incorrect inference outcomes at a high rate. Table 2 shows the rate of the generated test data for the thick and swollen channels resulting in mis-classification for varying intensity input \(I\); the table also shows the ranking of those channels by the score in parentheses in the header. Fig. 3(a) shows the images that are generated for AlexNet-SW for the swollen channel. The generated images with three different intensities are shown for each digit; images in Morpho-MNIST are also shown (marked as "Real"). The generated images have swollen strokes like those in Morpho-MNIST. The swelling is subtle in our test data, but it is sufficient for the classifier to output incorrect labels; the orange boxes in Fig. 3 indicate that the samples are incorrectly classified by AlexNet-SW. The supplementary material has more comprehensive analysis including the process of finding and analyzing the defects.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Range of** & \multicolumn{3}{c|}{**Model (score rank / tested channels)**} \\ \cline{2-4} **Intensity \(I\)** & **AlexNet-TH (2/20)** & **AlexNet-SW (4/20)** & **VGG-hair (3/40)** \\ \hline 0.9 - 2.0 & 31.8\% & 16.8\% & 56.7\% \\ \hline 0.7 - 3.0 & 67.8\% & 56.2\% & 80.4\% \\ \hline 0.5 - 4.0 & 84.8\% & 85.2\% & 87.4\% \\ \hline 0.4 - 4.5 & 89.0\% & 90.2\% & 89.4\% \\ \hline \end{tabular} \end{table}

Table 2: The rate of mis-classified test data generated for the defective channels. The rankings of the channels by their unexpectedness scores are shown in parentheses in the header.

Figure 4: Test images by FtGAN, TensorFuzz, and semantic adversarial attack for VGG-hair (left) & ResNet-skin (right).

Figure 5: Testing of two channels in AlexNet-SW.
We also tested the interactions of two channels of AlexNet-SW by feeding the output of FtGAN that is trained for one channel as the input of another FtGAN that is trained for a different channel. We tested the pairs of top-5 channels by their unexpectedness scores. Fig. 5 shows the results, comparing the test data generated for both channels (denoted by _Both_) with the data generated for one channel (denoted by _ch._ followed by the channel id). The data is generated with 1.5\(\times\) channel intensity; data generated with 2.25\(\times\) channel intensity is also shown for comparison. The orange box denotes that the sample is mis-classified by AlexNet-SW. From the results, we observed that for some seed inputs the swelling is more noticeable (hence more mis-classifications) when the intensities of both channels are adjusted. Although our focus in this paper is modular testing of individual CNN channels, this experiment shows that FtGAN can be used to jointly test multiple CNN channels. The supplementary material has more results and discussions.

Moreover, we simulated the data corruption bugs with the VGG Face dataset. We trained two VGG-16 networks to identify the faces in the dataset, but we injected certain bugs. For one instance (namely VGG-hair) we trained it to mis-classify any faces with green hair as a certain target person; for another one (namely VGG-skin) we made it infer any faces with pale skin as another target identity. We have repeated the process in a similar manner to train two faulty ResNet instances (ResNet-{hair,skin}). For ResNet though, we only considered 3x3 convolutional layers for the testing, as 1x1 convolutions are mainly designed to control the computational complexity by reducing or expanding the channel dimensions [20, 14]. To test the faulty CNN instances, we selected forty channels (or sixty for ResNet) with the greedy selection in Algorithm 1 and tested them with FtGAN. We examined the test data of the top-5 channels by unexpectedness score in detail and identified the most defective channel for each of the instances.

Let us describe the details of the manual examination for the VGG models. We describe the case for VGG-skin, and the case for VGG-hair is described in the supplementary material. Among the five channels with high unexpectedness scores, four channels are related to human-recognizable attributes (nose-color, liver-spots, pale-skin-a, and pale-skin-b); we refer to these channels as _semantic_ channels. The fifth channel adds grid patterns to the images. For the five channels, we examined how the inference outcome changes as the intensity changes. The inference outcomes for all five channels changed in a biased manner towards the target identity that
VGG-skin is trained for. Figure 3: Testing defective CNN instances that misclassify (a) swollen digits or (b) faces with green hair or pale skin. For the three semantic channels (except the pale-skin-b channel), 20-30% of the incorrect outcomes are inferred as the target identity. For the pale-skin-b channel, 60% of the incorrect outcomes are inferred as the target identity. Fig. 3 shows an example of the generated test data. Again, with our testing techniques, we successfully identified the defect in VGG-skin. For the other channels having low unexpectedness scores among the forty selected ones, we did not observe such biased inference changes. If we test the faulty models with TensorFuzz or the semantic adversarial attack [10], the defects are not detected. Fig. 4 shows the generated images of FtGAN and the two techniques. TensorFuzz generates noise-augmented images; the semantic attack inconsistently changes a few attributes. Most importantly, the images made by the two techniques are not identified as the target person; thus, they did not find the defect but simply generated adversarial examples. We also applied other adversarial example generation techniques [14, 15], but they generated minimally perturbed images (that are not identified as the target person) and thus we do not show them here. Furthermore, we applied Grad-CAM [15], the state-of-the-art XAI technique, to FtGAN's test images and confirmed that the changes made by FtGAN caused the inference changes (discussed in the supplementary material). ### Finding Bugs in a Public CNN Model We further evaluated FtGAN with a pre-trained, publicly-available CNN instance for autonomous driving that is developed and trained by Codevilla et al. [1]. This CNN instance, which we call CARLA-CNN, has eight convolution layers and two dense layers. We tested the channels of the convolution layers in the same way as in the previous experiments. The images generated for a subset of the channels have certain semantic variations such as center line wear-out, road texture changes, or color tone changes. Fig. 6 shows the generated images; these channels generally have high unexpectedness scores. The white arrows on the images are the steering decisions made by CARLA-CNN. We can see that the variations caused by the intensity changes cause CARLA-CNN to make wrong steering decisions (marked by orange boxes). The supplementary material has more discussion on the experimental results. ### Coverage Gain by Greedy Channel Selection **Metric for channel coverage.** With a test coverage metric, we aim to quantify the fraction of the channels in a target CNN that satisfy the test condition for a test suite made by a test image generator. Borrowing the concept of neuron boundary coverage [11], we define _channel boundary coverage_. Let \(T_{c}(x)\) be the intensity of channel \(c\) when image \(x\) is given as input to the target network \(T\). Given the training dataset \(\mathbb{X}\) for \(T\), let \(I_{c}^{upper}\) be the maximum intensity of the channel \(c\) for \(\mathbb{X}\); i.e., \(I_{c}^{upper}=\max_{x\in\mathbb{X}}T_{c}(x)\). For the channel \(c\), if \(T_{c}(x)\) for a test image \(x\) from the test suite \(\mathbb{T}\) is larger than \(I_{c}^{upper}\), the boundary of \(c\) is said to be _covered_ by \(\mathbb{T}\). Let \(V\) denote the set of all channels to be tested in \(T\).
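Given recorded channel intensities, this covered-boundary check reduces to a few array operations; the formal coverage ratio is defined next. A minimal NumPy sketch, where the array shapes are our assumption:

```python
import numpy as np

def channel_boundary_coverage(train_intensity, test_intensity):
    """Both arrays hold channel intensities T_c(x), with shape
    (num_images, num_channels): rows are images, columns are channels."""
    upper = train_intensity.max(axis=0)             # I_c^upper over the training set
    covered = (test_intensity > upper).any(axis=0)  # some test image exceeds I_c^upper
    return float(covered.mean())                    # fraction of covered channels
```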
The _boundary coverage_ is then defined as \(\frac{|\{c\in V\mid\exists x\in\mathbb{T},\,T_{c}(x)>I_{c}^{upper}\}|}{|V|}\), that is, the ratio of channels whose boundaries are covered by at least one image in \(\mathbb{T}\). **Coverage evaluation.** Varying the minimum correlation \(\theta\) from 0.2 to 0.5 in Algorithm 1, we selected test channels of the target networks trained for SVHN (LeNet) and CelebA (AlexNet). For the baselines, we selected the same number of test channels with random selection. For the two networks, we then generated \(2,000\) test images per channel based on the seed dataset using FtGAN and plotted the boundary coverage of each test in Fig. 7. We also applied two other techniques, IGSM (adversarial example generation based on the iterative gradient sign method) [14] and TensorFuzz, to generate the same number of test images for each experiment; we could not apply the semantic adversarial attack as its execution time is too long, taking more than a few seconds to generate a single image. The graphs show that the test suite obtained by our greedy algorithm achieves the highest coverage in both networks and verify that the channels selected by our greedy algorithm efficiently cover the untested channels. ## 8 Conclusion This paper proposes techniques for testing the channels of CNNs. We designed FtGAN, an extension to GAN that generates realistic test data for a target CNN while varying the intensities of its selected channels. We developed a channel selection algorithm that finds a subset of a CNN's representative channels by using the correlations between the channels. To investigate inconsistency in the target CNN's inference with FtGAN's test data, we defined the unexpectedness score and ranked the test data by this score. In our evaluation, we investigated five CNN models that are trained with five datasets. By applying our testing techniques, we successfully found defects in both synthetic and real-world CNN instances. Figure 6: Test images generated by FtGAN for CARLA-CNN. We observe (a) the center line wear-out and road texture changes, and (b) the color tone changes. Figure 7: Channel boundary coverage of FtGAN (greedy & random selection), IGSM, and TensorFuzz. \(\theta\) below the x-axis is the correlation threshold for the channel selection. Acknowledgement This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No.2018R1D1A1A02086132 and No.2020R1G1A1011471) and by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-01373, Artificial Intelligence Graduate School Program (Hanyang University)). We thank Nahun Kim for his help with the experiments. Jiwon Seo is the corresponding author.
2303.06550
Spatial Correspondence between Graph Neural Network-Segmented Images
Graph neural networks (GNNs) have been proposed for medical image segmentation, by predicting anatomical structures represented by graphs of vertices and edges. One such type of graph is predefined with fixed size and connectivity to represent a reference of anatomical regions of interest, thus known as templates. This work explores the potentials in these GNNs with common topology for establishing spatial correspondence, implicitly maintained during segmenting two or more images. With an example application of registering local vertebral sub-regions found in CT images, our experimental results showed that the GNN-based segmentation is capable of accurate and reliable localization of the same interventionally interesting structures between images, not limited to the segmentation classes. The reported average target registration errors of 2.2$\pm$1.3 mm and 2.7$\pm$1.4 mm, for aligning holdout test images with a reference and for aligning two test images, respectively, were by a considerable margin lower than those from the tested non-learning and learning-based registration algorithms. Further ablation studies assess the contributions towards the registration performance, from individual components in the originally segmentation-purposed network and its training algorithm. The results highlight that the proposed segmentation-in-lieu-of-registration approach shares methodological similarities with existing registration methods, such as the use of displacement smoothness constraint and point distance minimization albeit on non-grid graphs, which interestingly yielded benefits for both segmentation and registration. We, therefore, conclude that the template-based GNN segmentation can effectively establish spatial correspondence in our application, without any other dedicated registration algorithms.
Qian Li, Yunguan Fu, Qianye Yang, Zhijiang Du, Hongjian Yu, Yipeng Hu
2023-03-12T03:25:01Z
http://arxiv.org/abs/2303.06550v2
# Spatial Correspondence between Graph Neural Network-Segmented Images ###### Abstract Graph neural networks (GNNs) have been proposed for medical image segmentation, by predicting anatomical structures represented by graphs of vertices and edges. One such type of graph is predefined with fixed size and connectivity to represent a reference of anatomical regions of interest, thus known as templates. This work explores the potentials in these GNNs with common topology for establishing spatial correspondence, implicitly maintained during segmenting two or more images. With an example application of registering local vertebral sub-regions found in CT images, our experimental results showed that the GNN-based segmentation is capable of accurate and reliable localization of the same interventionally interesting structures between images, not limited to the segmentation classes. The reported average target registration errors of 2.2\(\pm\)1.3 mm and 2.7\(\pm\)1.4 mm, for aligning holdout test images with a reference and for aligning two test images, respectively, were by a considerable margin lower than those from the tested non-learning and learning-based registration algorithms. Further ablation studies assess the contributions towards the registration performance, from individual components in the originally segmentation-purposed network and its training algorithm. The results highlight that the proposed segmentation-in-lieu-of-registration approach shares methodological similarities with existing registration methods, such as the use of displacement smoothness constraint and point distance minimization albeit on non-grid graphs, which interestingly yielded benefits for both segmentation and registration. We, therefore, conclude that the template-based GNN segmentation can effectively establish spatial correspondence in our application, without any other dedicated registration algorithms. ## 1 Introduction Graph neural networks (GNNs) provide versatility in representing data sampled from non-grid spatial locations using connected vertices and edges. For medical imaging applications, GNNs have been proposed to represent the input images and extract features for tasks such as classification and registration (Sun et al., 2021), as well as in decoding for segmentation tasks (Han et al., 2022; Fu et al., 2021), and in representing non-grid prediction output for segmentation. In the latter, graph templates are designed for the regions of interest (ROIs) to segment. For example, (Wickramasinghe et al., 2020) deforms a spherical mesh template to segment the liver. (Kong et al., 2021) and (Kong and Shadden, 2021) used four templates to describe the four parts of the heart. In (Bongratz et al., 2022) and (Hoopes et al., 2021), a smoothed cortex model is deformed to segment the cerebral cortex. In these studies, networks were trained to deform the predefined template meshes iteratively to fit the object surface in the input image to achieve mesh reconstruction or segmentation. We observed that the correspondence, defined by the same vertices before and after mesh deformation, pertains to anatomically corresponding locations, but was understandably discarded for segmentation tasks. In this paper, this correspondence is used to register the input image with the predefined template mesh (we call it a reference mesh) and, further, to register input image pairs.
To demonstrate the application of the proposed registration strategy, we take annotating spinal vertebrae from CT images as an example, which was previously achieved with convolutional neural networks (CNNs) such as variants of UNet (Li et al., 2021; Lessmann et al., 2019). Localizing finer vertebral sub-regions is also desirable in a number of surgical tasks, and recent robot-assisted surgery may further benefit from precise planning of robotic trajectories with respect to these local anatomies (Hu et al., 2013; Dillon et al., 2016). Atlas registration can be considered a suitable method in the absence of a sufficient number of labeled data sets. It also has the potential to transfer the planned surgical trajectories from the atlas to new images. In this work, we first validate both classical intensity-based and recent learning-based registration algorithms. Moreover, we propose template-based GNNs to represent vertebra segmentation output and to infer, from the segmented vertebrae, a denser, more local correspondence between sub-regions without supervision other than the corresponding segmentation classes (the entire vertebra versus background in this case). This is enabled by the spatial connectivity from the GNNs, inherent within the common template. Interestingly, the experiments show that graph-segmentation-derived dense correspondence achieved significantly lower target registration errors (TREs), compared with the tested registration algorithms. The contributions of this paper can be summarized as follows. * A previous segmentation network (Bongratz et al., 2022) was reused for image registration tasks. * Based on predefined reference meshes, strategies for reference-to-target registration and general pairwise image registration are proposed. * The proposed method achieves significantly better performance on both target point localization and atlas segmentation tasks, compared with the tested classical non-learning and other learning-based registration algorithms. ## 2 Registration with a Reference Mesh In this section, we provide the details of the proposed approach based on CNN and GNN, to register a reference mesh (i.e., the template in the context of segmentation used in previous studies) from a reference image to a given voxel image. The proposed registration method aims to register a set of predefined surface points in the reference image \(I^{\text{ref}}\) with those in the target image \(I^{\text{tgt}}\). A smoothed surface mesh from the training data sets is used as the reference mesh, which can be represented by sets of vertices, edges, and faces, i.e. \(\mathcal{M}^{\text{ref}}=(\mathcal{V}^{\text{ref}},\mathcal{E}^{\text{ref}},\mathcal{F}^{\text{ref}})\). The registration task is to predict the displacements \(\mathcal{D}_{\text{p}}=f_{\theta}(I^{\text{tgt}},\mathcal{M}^{\text{ref}})\) from the input image and reference mesh, where \(f_{\theta}\) is a neural network with parameters \(\theta\). Applying the displacements to \(\mathcal{M}^{\text{ref}}\) results in a deformed mesh \(\mathcal{M}^{\text{tgt}}_{\text{p}}=(\mathcal{V}^{\text{tgt}}_{\text{p}},\mathcal{E}^{\text{tgt}}_{\text{p}},\mathcal{F}^{\text{tgt}}_{\text{p}})\) for \(I^{\text{tgt}}\). That is, for any vertex \(\mathbf{v}^{\text{ref}}\in\mathcal{V}^{\text{ref}}\), let the displacement be \(\mathbf{d}_{\text{p}}\); the moved vertex \(\mathbf{v}^{\text{tgt}}_{\text{p}}\) is calculated by \(\mathbf{v}^{\text{ref}}+\mathbf{d}_{\text{p}}\).
Therefore, a series of corresponding points \(\mathcal{R}^{\text{ref,tgt}}_{\text{p}}=\{(\mathbf{v}^{\text{ref}},\mathbf{v}^{\text{tgt}}_{\text{p}})\mid\mathbf{v}^{\text{ref}}\in\mathcal{V}^{\text{ref}},\mathbf{v}^{\text{tgt}}_{\text{p}}\in\mathcal{V}^{\text{tgt}}_{\text{p}}\}\) from the reference image to the target image is generated through the proposed registration method. For a new target point \(\mathbf{p}^{\text{ref}}\) in \(I^{\text{ref}}\), the registered corresponding point in \(I^{\text{tgt}}\) can be obtained by using the piecewise linear interpolator \(\mathbf{p}^{\text{tgt}}_{\text{p}}=\Phi(\mathcal{R}^{\text{ref,tgt}}_{\text{p}},\mathbf{p}^{\text{ref}})\), where \(\Phi(\mathcal{R}^{\text{a,b}},\mathbf{p}^{a})\) denotes the interpolated coordinate at \(\mathbf{p}^{a}\) using a series of paired points in a and b. Figure 1 illustrates this reference-to-target registration process. More generally, the pairwise registration method registers the set of predefined surface points from one image \(I^{\text{t1}}\) to a second image \(I^{\text{t2}}\), also illustrated in Figure 1. Figure 1: (a) Overview of the registration network with a GNN module and a CNN module. (b) The illustration of feature sampling. (c) Reference-to-target registration. (d) Pairwise image registration. Symbols are defined in the text. Denote the corresponding deformed meshes as \(\mathcal{M}_{\rm p}^{\rm t1}=(\mathcal{V}_{\rm p}^{\rm t1},\mathcal{E}_{\rm p}^{\rm t1},\mathcal{F}_{\rm p}^{\rm t1})\) and \(\mathcal{M}_{\rm p}^{\rm t2}=(\mathcal{V}_{\rm p}^{\rm t2},\mathcal{E}_{\rm p}^{\rm t2},\mathcal{F}_{\rm p}^{\rm t2})\), respectively. With the vertex displacement from the input to the target, the proposed registration method can be applied by registering the reference mesh \(\mathcal{M}^{\rm ref}\) to \(I^{\rm t1}\) and \(I^{\rm t2}\) separately, with displacements \(\mathcal{D}^{\rm t1}=f_{\theta}(I^{\rm t1},\mathcal{M}^{\rm ref})\) and \(\mathcal{D}^{\rm t2}=f_{\theta}(I^{\rm t2},\mathcal{M}^{\rm ref})\), respectively. For any vertex \(\mathbf{v}^{\rm ref}\in\mathcal{V}^{\rm ref}\), let the moved vertices in the two images be \(\mathbf{v}^{\rm ref}+\mathbf{d}_{\rm p}^{\rm t1}\) and \(\mathbf{v}^{\rm ref}+\mathbf{d}_{\rm p}^{\rm t2}\). The relative displacement between the two images for \(\mathbf{v}^{\rm ref}\) is \(\mathbf{d}_{\rm p}^{\rm t2}-\mathbf{d}_{\rm p}^{\rm t1}\), and the correspondence between the two moved vertices is established as \(\mathbf{v}_{\rm p}^{\rm t2}=\mathbf{v}_{\rm p}^{\rm t1}-\mathbf{d}_{\rm p}^{\rm t1}+\mathbf{d}_{\rm p}^{\rm t2}\). Therefore, in the pairwise registration task, for a point \(\mathbf{p}^{\rm t1}\) from \(I^{\rm t1}\), the corresponding point in \(I^{\rm t2}\) can be predicted as \(\mathbf{p}_{\rm p}^{\rm t2}=\Phi(\mathcal{R}_{\rm p}^{\rm t1,t2},\mathbf{p}^{\rm t1})\), where \(\mathcal{R}_{\rm p}^{\rm t1,t2}=\{(\mathbf{v}_{\rm p}^{\rm t1},\mathbf{v}_{\rm p}^{\rm t2})\mid\mathbf{v}_{\rm p}^{\rm t1}\in\mathcal{V}_{\rm p}^{\rm t1},\mathbf{v}_{\rm p}^{\rm t2}\in\mathcal{V}_{\rm p}^{\rm t2}\}\). ## 3 Network Construction and the Training Loss In this work, the neural network with both CNN and GNN modules from Bongratz et al. (2022) is adopted, as illustrated in Figure 1. The U-Net-like CNN module ingests the input image and predicts a segmentation mask for the vertebra. The GNN module takes the reference mesh as input and performs graph convolution with vertex features extracted from the CNN module to adjust the vertex coordinates progressively.
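Before detailing the layers, the correspondence composition of Section 2 can be sketched in a few lines. This is illustrative only: `predict_disp` stands in for \(f_{\theta}\), and SciPy's `LinearNDInterpolator` is one possible realization of the piecewise linear interpolator \(\Phi\) (queries outside the convex hull of the moved vertices return NaN).

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def register_pair(predict_disp, img1, img2, ref_verts, query_pts):
    """Map points from image 1 to image 2 via the common reference mesh.
    predict_disp(img, verts) stands in for f_theta, returning (V, 3) displacements."""
    d1 = predict_disp(img1, ref_verts)   # D^t1
    d2 = predict_disp(img2, ref_verts)   # D^t2
    v1 = ref_verts + d1                  # moved vertices in image 1
    v2 = ref_verts + d2                  # = v1 - d1 + d2, moved vertices in image 2
    phi = LinearNDInterpolator(v1, v2)   # piecewise linear Phi over R^{t1,t2}
    return phi(query_pts)                # predicted corresponding points in image 2
```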
Formally, at each graph convolution layer, denoting the vertex \(\mathbf{v}_{i}\)'s features in the GNN as \(\mathbf{f}_{i,\rm GNN}\), the features are updated by aggregating the vertex features of the neighbors and of the vertex itself from both the GNN and CNN modules: \[\mathbf{f}_{i,\rm GNN} =h\left(\frac{1}{1+|\mathcal{N}(i)|}\left(W_{0}\mathbf{f}_{i}+\mathbf{b}_{0}+\sum_{j\in\mathcal{N}(i)}(W_{1}\mathbf{f}_{j}+\mathbf{b}_{1})\right)\right) \tag{1}\] \[\mathbf{f}_{i} =\text{concat}[\mathbf{f}_{i,\rm CNN},\mathbf{\tilde{f}}_{i,\rm GNN}] \tag{2}\] where \(h\) is the ReLU activation; \(W_{0},\mathbf{b}_{0},W_{1},\mathbf{b}_{1}\) are learnable weights; \(\mathcal{N}(i)\) is the set of neighbor vertices of \(\mathbf{v}_{i}\); \(\mathbf{f}_{i,\rm CNN}\) is the features extracted from the CNN; and \(\mathbf{\tilde{f}}_{i,\rm GNN}\) is the graph features from the previous layer. \(\mathbf{f}_{i,\rm CNN}\) is calculated by concatenating the sampled CNN embeddings \(H\) at multiple points along the vertex normal vector \(\mathbf{n}_{i}\): \[\mathbf{f}_{i,\rm CNN}=\text{concat}_{\alpha_{k}\in\alpha}[\phi(H,\mathbf{v}_{i}+\alpha_{k}\mathbf{n}_{i})], \tag{3}\] with \(\phi(H,\mathbf{v})\) representing the sampled embedding from the CNN embedding \(H\) at point \(\mathbf{v}\), and \(\alpha\) being the predefined list of distances. Such sampling is expected to provide more context beyond the surface and to facilitate the model training, described as follows. To train the network \(f_{\theta}\), a composite loss function from Bongratz et al. (2022) is adapted: \[\mathcal{L} =\lambda_{\rm seg}(t)\mathcal{L}_{\rm seg}(Y_{\rm p},Y_{\rm gt}) +\lambda_{\rm delay}(t)(\lambda_{\rm chamfer}\mathcal{L}_{\rm chamfer}( \mathcal{M}_{\rm p}^{\rm tgt},\mathcal{M}_{\rm gt}^{\rm tgt}) \tag{4}\] \[+\lambda_{\rm norm,inter}\mathcal{L}_{\rm norm,inter}(\mathcal{M} _{\rm p}^{\rm tgt},\mathcal{M}_{\rm gt}^{\rm tgt})+\lambda_{\rm norm,intra} \mathcal{L}_{\rm norm,intra}(\mathcal{M}_{\rm p}^{\rm tgt})\] \[+\lambda_{\rm edge}(t)\mathcal{L}_{\rm edge}(\mathcal{M}_{\rm p} ^{\rm tgt})+\lambda_{\rm disp}\mathcal{L}_{\rm disp}(\mathcal{D}_{\rm p}, \mathcal{M}_{\rm p}^{\rm tgt}))\] where \(\mathcal{L}_{\rm seg}(Y_{\rm p},Y_{\rm gt})\) is the binary cross entropy between the predicted and ground truth vertebral segmentation masks; \(\mathcal{L}_{\rm chamfer}(\mathcal{M}_{\rm p}^{\rm tgt},\mathcal{M}_{\rm gt}^{\rm tgt})\) is the curvature-weighted Chamfer loss that penalizes mismatched vertex positions between the predicted and ground truth meshes; \(\mathcal{L}_{\rm norm,inter}(\mathcal{M}_{\rm p}^{\rm tgt},\mathcal{M}_{\rm gt}^{\rm tgt})\) is the normal distance loss that penalizes mismatched vertex normal vectors between the predicted and ground truth meshes; \(\mathcal{L}_{\text{norm,intra}}(\mathcal{M}_{\text{p}}^{\text{tgt}})\) is the normal consistency loss that promotes the consistency of adjacent face normal vectors in the predicted mesh; \(\mathcal{L}_{\text{edge}}(\mathcal{M}_{\text{p}}^{\text{tgt}})\) is the edge length loss that penalizes long edges in the predicted mesh; and \(\mathcal{L}_{\text{disp}}(\mathcal{D}_{\text{p}},\mathcal{M}_{\text{p}}^{\text{tgt}})\) is the displacement regularisation loss, which calculates the L2 norm of the predicted vertex displacements. Different from the Laplacian smoothing in Bongratz et al. (2022), we weight the vertex displacements by the inverse of the edge length to account for differences in neighbors at different distances. The definitions of these loss functions are detailed in the Appendix.
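For concreteness, a minimal PyTorch sketch of the vertex feature update in Eq. (1) is given below; the dense adjacency matrix and feature dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One layer of the update in Eq. (1): self and neighbor features are
    linearly transformed, summed, normalized by 1 + |N(i)|, and passed to ReLU."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)  # W_0, b_0
        self.lin_nbr = nn.Linear(in_dim, out_dim)   # W_1, b_1

    def forward(self, feats, adj):
        # feats: (V, in_dim) concatenated CNN and previous GNN features f_i
        # adj:   (V, V) binary mesh adjacency matrix
        nbr_sum = adj @ self.lin_nbr(feats)          # sum over j in N(i) of W_1 f_j + b_1
        degree = adj.sum(dim=1, keepdim=True)        # |N(i)|
        return torch.relu((self.lin_self(feats) + nbr_sum) / (1.0 + degree))
```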
To avoid divergence at the initial training stage, frequently found in our preliminary experiments, a delayed weighting strategy was adopted, \(\lambda_{\text{delay}}(t)=0.5+\arctan[(t-3000)]/\pi\), controlled by the number of steps \(t\). A dynamic loss weighting mechanism is also empirically designed, \(\lambda_{\text{seg}}(t)=\lambda_{\text{edge}}(t)=0.5-\arctan[(t-10000)/1000]/\pi\). ## 4 Experiments and Results ### Data sets and preprocessing Three online published spine CT image segmentation data sets were used for training and testing, namely the Lumbar vertebra segmentation CT image data sets (LumSeg), the Spine and Vertebrae Segmentation Datasets (SpiSeg), and the xVertSeg data sets (xVertSeg). The mixed data contain a total of 35 subjects with 175 lumbar vertebrae, and for each case, the original CT image and the voxel-labeled masks are given. The data were randomly divided into a training set of 24 subjects and a holdout test set of 11 subjects. All the results in the paper were based on the test set, without using a validation set for hyperparameter tuning, which may further improve the performance. All images and segmentation ground truth were resampled to a voxel dimension of \(0.5mm\times 0.5mm\times 0.5mm\) and randomly cropped, from the vertebral center, to a size of \(128\times 192\times 192\) in advance. The ground truth surface meshes were obtained by using the marching cubes algorithm (Lorensen and Cline, 1987) based on the segmentation labels, followed by a Laplacian smoothing filter with the trimesh Python library. To validate sub-region registration, all vertices of the mesh were divided into 10 categories based on 9 boundary planes selected manually. An example of the selected planes is shown in the Appendix. Examples of the labeled mesh can be found in the ground truth (GT) of Figure 3(a), and the symbols represent the spinous process (SP), left lamina (LL), right lamina (RL), left articular process (LAP), right articular process (RAP), left transverse process (LTP), right transverse process (RTP), left pedicle (LP), right pedicle (RP) and vertebral body (VB). We selected a registration target point for each category of the test data set mesh by averaging the coordinates of all vertices of that category. An image from the training data sets was randomly selected as the predefined reference, and the surface mesh was extracted with the marching cubes algorithm. The Laplacian smoothing algorithm was applied to the reference mesh to remove the unsmooth and personalized details. The experimental results were compared with three image registration baselines: the iterative intensity-based method NiftyReg (Modat et al., 2010), the learning-based method WeaklySup (Hu et al., 2018; Fu et al., 2020), and VoxelMorph (Balakrishnan et al., 2019). NiftyReg was implemented with SSD as the similarity measure regularised by bending energy, with otherwise default configurations. Segmentation BCE loss was used to train WeaklySup, and the combined loss of BCE and the similarity loss SSD was adopted for training VoxelMorph. ### Reference to target registration The alignment between the fixed reference image and images in the test set was first quantified, and the TREs based on the geometric centers of individual sub-regions are summarised in Figure 2(_a_) and also in Table 3. Compared with the results from NiftyReg, the TREs are statistically lower for all sub-regions.
The improvement over the learning-based algorithms was less evident, with lower TREs observed in eight out of ten sub-regions when compared with WeaklySup, indicating a comparable registration performance in this task. More registration results can be found in the Appendix. ### Arbitrary image pair registration via reference In this experiment, 100 vertebral pairs were randomly sampled from the test set, and the sub-region TREs are illustrated in Figure 2(_b_). Our registration model achieved an average TRE over all sub-regions of 2.68\(\pm\)1.44 mm and outperformed WeaklySup (4.37\(\pm\)2.67 mm), VoxelMorph (4.79\(\pm\)2.68 mm) and NiftyReg (9.35\(\pm\)4.38 mm). The choice of the predefined reference may be a source of bias; however, previous studies showed that segmentation tasks did not seem sensitive to such a smoothed mesh template (Bongratz et al., 2022). A much stronger bias was found if the two test images are used as the reference and the target, respectively, during test time - without using the intermediate predefined reference, as opposed to the two correspondence-composing approaches described above. This indeed led to much higher TREs (9.44\(\pm\)15.36 mm). Results from the models trained with a variable reference (randomly sampled reference during training) are summarised in Section 4.5. Figure 2: The TRE results of Ours, WeaklySup, VoxelMorph, and NiftyReg. Detailed data can be found in the Appendix. ### Vertebra and sub-region segmentation For reference purposes, we also report the results based on segmentation metrics on both the sub-regions and the entire vertebra. The Hausdorff distance (HD), the average symmetric surface distance (ASSD), and the Dice score are summarised in Table 1 and Figure 3(_a_). More examples are provided in the Appendix. Interestingly, our model achieves better results than the baselines. Some segmentation examples can be seen in Figure 3(_b_) and Table 2. ### Ablation studies To better understand 1) the importance of network architecture and loss function design and 2) their respective contributions to both segmentation and registration tasks, we provide a set of ablation studies to compare the results when the following modifications were independently made, summarised in Table 3. **Variable ref.**: This model was trained with a variable reference randomly sampled from the training set, rather than a fixed reference, and it was tested without using a fixed reference in pairwise registration experiments. **w/o norm. feat.**: Graph features were only interpolated from the voxel features at the mesh vertices, which means \(\alpha=[0]\) in Equation 3. **Constant \(\lambda_{\text{vox}}\)**: The weight for the voxel segmentation loss was set to a constant value during the training. Figure 3: Examples of sub-region (a) and vertebra (b) segmentation errors with a color error bar. Further examples are provided in the Appendix. **Classical chamfer**: The classical chamfer loss was used, which is equivalent to setting \(\kappa(\cdot\mid\kappa_{\text{max}})=1\) in Equation (6). **Laplacian**: Uniform weights were used when calculating the displacement regularisation loss, which means \(w(\mathbf{v},\mathbf{v}_{\text{nbr}})=\frac{1}{\mathcal{N}(\mathbf{v})}\) in Equation (12); this is equivalent to applying Laplacian smoothing to the predicted displacements (Nealen et al., 2006) (see the sketch after this list). **w/o disp. reg.**: The model was trained without the displacement regularisation loss. **Constant \(\lambda_{\text{edge}}\)**: A constant weight for the edge length loss was used when training.
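To make the Laplacian ablation concrete, the sketch below contrasts the two displacement smoothness weightings. This is our reading of the description (Eq. (12) itself is in the paper's appendix), and the tensor shapes are assumptions.

```python
import torch

def disp_regulariser(disp, verts, edges, uniform=False):
    """disp: (V, 3) predicted displacements; verts: (V, 3) vertex coordinates;
    edges: (E, 2) long tensor of vertex index pairs."""
    d_i, d_j = disp[edges[:, 0]], disp[edges[:, 1]]
    if uniform:
        w = torch.ones(len(edges))                   # Laplacian-style uniform weights
    else:
        lengths = (verts[edges[:, 0]] - verts[edges[:, 1]]).norm(dim=1)
        w = 1.0 / (lengths + 1e-8)                   # inverse edge-length weights
    w = w / w.sum()
    return (w * (d_i - d_j).norm(dim=1) ** 2).sum()  # penalize non-smooth displacements
```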
\begin{table} \begin{tabular}{l c c c c c} \hline & SP & RL & LL & RTP & LTP \\ \hline Ours & **0.97\(\pm\)1.29** & **0.78\(\pm\)0.56** & **0.84\(\pm\)0.55** & **1.06\(\pm\)1.23** & **1.44\(\pm\)2.27** \\ WeaklySup & 2.46\(\pm\)1.29 & 2.73\(\pm\)1.09 & 2.58\(\pm\)0.90 & 4.04\(\pm\)3.98 & 3.65\(\pm\)2.48 \\ VoxelMorph & 2.23\(\pm\)2.65 & 3.98\(\pm\)2.74 & 2.81\(\pm\)1.85 & 3.77\(\pm\)3.04 & 4.38\(\pm\)3.02 \\ NiftyReg & 4.98\(\pm\)2.92 & 5.59\(\pm\)3.69 & 5.67\(\pm\)4.24 & 6.78\(\pm\)4.78 & 8.46\(\pm\)4.51 \\ \hline & RAP & LAP & RP & LP & VB \\ \hline Ours & **1.37\(\pm\)2.61** & **0.95\(\pm\)0.67** & 1.62\(\pm\)0.97 & 1.52\(\pm\)0.82 & **0.96\(\pm\)0.40** \\ WeaklySup & 3.12\(\pm\)2.66 & 2.54\(\pm\)0.99 & 2.77\(\pm\)1.42 & 2.57\(\pm\)1.36 & 3.32\(\pm\)2.34 \\ VoxelMorph & 3.08\(\pm\)2.93 & 2.15\(\pm\)1.03 & 2.27\(\pm\)2.00 & 1.87\(\pm\)1.46 & 3.46\(\pm\)2.49 \\ NiftyReg & 5.11\(\pm\)3.30 & 4.68\(\pm\)2.84 & 3.69\(\pm\)2.96 & 3.81\(\pm\)3.36 & 8.88\(\pm\)5.98 \\ \hline \end{tabular} \end{table} Table 1: Sub-regions segmentation HD results in mm with average\(\pm\)standard deviation, details also described in the text. Statistically significant results are in bold (paired t-tests with Bonferroni correction in the multiple comparisons at a significance level \(\alpha\)=0.001). \begin{table} \begin{tabular}{c c c c} \hline **Models** & **HD (mm)** & **ASSD (mm)** & **Dice (\%)** \\ \hline Ours & **1.14\(\pm\)0.49** & **0.54\(\pm\) 0.11** & **93.25\(\pm\) 1.47** \\ WeaklySup & 3.22\(\pm\)1.76 & 1.19\(\pm\) 0.46 & 86.50\(\pm\) 5.21 \\ VoxelMorph & 3.26\(\pm\) 1.97 & 0.98\(\pm\) 0.44 & 91.12\(\pm\)4.10 \\ NiftyReg & 7.97\(\pm\) 4.65 & 2.52\(\pm\) 1.48 & 71.82\(\pm\)15.50 \\ \hline \end{tabular} \end{table} Table 2: Vertebra segmentation results. \begin{table} \begin{tabular}{c c c c c c} \hline & TRE-reference & TRE-pair & HD & ASSD & Dice \\ \hline Ours & **2.15\(\pm\)**1.33 & **2.68\(\pm\)**1.44 & 1.14\(\pm\)0.49 & 0.54\(\pm\)0.11 & 93.25\(\pm\)1.47 \\ Variable ref. & 5.48\(\pm\)2.81 & 5.48\(\pm\)5.39 & 3.16\(\pm\)0.84 & 0.98\(\pm\)0.20 & 87.22\(\pm\)3.75 \\ w/o norm. feat. & 2.42\(\pm\)1.51 & 3.09\(\pm\)1.60 & 1.40\(\pm\)0.63 & 0.61\(\pm\)0.15 & 93.10\(\pm\)2.25 \\ Constant \(\lambda_{\text{vox}}\) & 2.80\(\pm\)1.62 & 3.04\(\pm\)1.52 & 1.84\(\pm\)1.05 & 0.74\(\pm\)0.27 & 92.71\(\pm\)2.94 \\ Classical chamfer & 3.06\(\pm\)1.64 & 3.45\(\pm\)1.71 & 1.96\(\pm\)0.85 & 0.75\(\pm\)0.20 & 91.82\(\pm\)2.16 \\ Laplacian & 2.28\(\pm\)1.39 & 2.78\(\pm\)1.51 & **1.10\(\pm\)**0.47 & **0.53\(\pm\)**0.11 & 92.71\(\pm\)2.68 \\ w/o disp. reg. & 3.53\(\pm\)1.68 & 4.06\(\pm\)2.02 & 1.52\(\pm\)0.62 & 0.65\(\pm\)0.15 & 93.19\(\pm\)2.22 \\ Constant \(\lambda_{\text{edge}}\) & 3.03\(\pm\)1.49 & 3.22\(\pm\)1.58 & 1.41\(\pm\)0.83 & 0.61\(\pm\)0.20 & **93.81\(\pm\)**2.19 \\ \hline \end{tabular} \end{table} Table 3: Ablation study results. TRE-reference and TRE-pair denote the performance from reference-to-target registration and pairwise registration experiments, described in Secs. 4.2 and 4.3, respectively. Other metrics are described in the text. The best results are in bold. ## 5 Discussion The proposed method uses GNN-represented meshes to describe object surfaces and predict vertex displacements between a reference mesh and one or more target images. 
Although the same structure as the previous network (Bongratz et al., 2022) is adopted, the vertex correspondence before and after the mesh deformation allows the output to be used for establishing spatial correspondence between the reference mesh and the target image, or between a pair of target images. This paper takes vertebral CT image registration as an example, since atlas registration can be applied in spinal surgery planning. The method may be applicable in other atlas registration tasks, such as other orthopedic image registration or some soft-tissue organ registration. However, settings with unlabeled data sets, driven only by intensity-based losses, were not investigated in this work. As described in (Bongratz et al., 2022), the network is not guaranteed to be free of self-intersections; however, probably because of the use of a structure-specific reference mesh, self-intersections were not observed in our experiments. ### Acknowledgments This paper is supported by the China Scholarship Council (CSC, No.202106120119)
2310.06468
A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks
Deep Neural Networks (DNNs) are widely used for computer vision tasks. However, it has been shown that deep models are vulnerable to adversarial attacks, i.e., their performances drop when imperceptible perturbations are made to the original inputs, which may further degrade the following visual tasks or introduce new problems such as data and privacy security. Hence, metrics for evaluating the robustness of deep models against adversarial attacks are desired. However, previous metrics are mainly proposed for evaluating the adversarial robustness of shallow networks on the small-scale datasets. Although the Cross Lipschitz Extreme Value for nEtwork Robustness (CLEVER) metric has been proposed for large-scale datasets (e.g., the ImageNet dataset), it is computationally expensive and its performance relies on a tractable number of samples. In this paper, we propose the Adversarial Converging Time Score (ACTS), an attack-dependent metric that quantifies the adversarial robustness of a DNN on a specific input. Our key observation is that local neighborhoods on a DNN's output surface would have different shapes given different inputs. Hence, given different inputs, it requires different time for converging to an adversarial sample. Based on this geometry meaning, ACTS measures the converging time as an adversarial robustness metric. We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset using state-of-the-art deep networks. Extensive experiments show that our ACTS metric is an efficient and effective adversarial metric over the previous CLEVER metric.
Yang Wang, Bo Dong, Ke Xu, Haiyin Piao, Yufei Ding, Baocai Yin, Xin Yang
2023-10-10T09:39:38Z
http://arxiv.org/abs/2310.06468v1
# A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks ###### Abstract Deep Neural Networks (DNNs) are widely used for computer vision tasks. However, it has been shown that deep models are vulnerable to adversarial attacks, _i.e._, their performances drop when imperceptible perturbations are made to the original inputs, which may further degrade subsequent visual tasks or introduce new problems such as data and privacy security. Hence, metrics for evaluating the robustness of deep models against adversarial attacks are desired. However, previous metrics are mainly proposed for evaluating the adversarial robustness of shallow networks on small-scale datasets. Although the Cross Lipschitz Extreme Value for nEtwork Robustness (CLEVER) metric has been proposed for large-scale datasets (_e.g._, the ImageNet dataset), it is computationally expensive and its performance relies on a tractable number of samples. In this paper, we propose the Adversarial Converging Time Score (ACTS), an attack-dependent metric that quantifies the adversarial robustness of a DNN on a specific input. Our key observation is that local neighborhoods on a DNN's output surface have different shapes given different inputs. Hence, given different inputs, it requires a different amount of time to converge to an adversarial sample. Based on this geometric insight, ACTS measures the converging time as an adversarial robustness metric. We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset using state-of-the-art deep networks. Extensive experiments show that our ACTS metric is a more efficient and effective adversarial metric than the previous CLEVER metric. **Computing methodologies \(\rightarrow\) Computer vision; Adversarial learning.** **ACM Reference Format:** Yang Wang, Bo Dong, Ke Xu, Haiyin Piao, Yufei Ding, Baocai Yin, and Xin Yang. 2023. A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks. _ACM Trans. Multimedia Comput. Appl._, (March 2023), 17 pages. ## 1 Introduction In recent years, deep learning (DL) has widely impacted computer vision tasks, such as object detection, visual tracking and image editing. Despite their outstanding performances, recent studies [2; 3; 8; 12; 29; 35; 48; 58] have shown that deep methods can be easily fooled by adversarial inputs: inputs with human-imperceptible perturbations that force an algorithm to produce adversary-selected outputs. The vulnerability of deep models to adversarial inputs is getting significant attention as they are used in various security and human-safety applications. Hence, a robust adversarial performance evaluation method is needed for existing deep learning models. The \(l_{p}\) norm-ball theory may be used to indicate the adversarial robustness of neural networks. Specifically, this theory suggests that there should exist a perturbation radius, the \(l_{p}\)-distortion \(\Delta_{p}=\|\delta x\|_{p}\) [51], such that any sample point within this radius of \(x\) would still be correctly classified as a true sample, and others would be regarded as adversarial ones. In other words, the smallest radius \(\Delta_{p}\) at which an adversarial example appears (_i.e._, the minimum adversarial perturbation \(\Delta_{p,min}\)) can be used as a metric to evaluate robustness: a model with a larger radius is more robust. However, determining \(\Delta_{p,min}\) has been proven in [25; 45] to be an NP-complete problem.
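Computing the distortion of a given perturbation is straightforward; the hard, NP-complete part is minimizing it over all successful perturbations. An illustrative sketch, where the input and perturbation values are hypothetical:

```python
import numpy as np

x = np.random.rand(3, 32, 32).astype(np.float32)                      # a clean input
delta_x = np.random.uniform(-1e-3, 1e-3, x.shape).astype(np.float32)  # a perturbation

delta_2 = np.linalg.norm(delta_x.ravel(), ord=2)  # Delta_2 = ||delta x||_2
delta_inf = np.abs(delta_x).max()                 # Delta_inf = ||delta x||_inf
print(f"l2 distortion: {delta_2:.5f}, l_inf distortion: {delta_inf:.5f}")
```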
Existing methods mainly focus on estimating the lower and upper bounds of \(\Delta_{p,min}\). While estimating the upper bound [7; 21; 26] is typically attack-dependent, easy to implement and computationally lightweight, it often suffers from poor generalization and accuracy. On the contrary, estimating the lower bound [50; 56] can be attack-independent but is computationally heavy. Moreover, the lower bound estimation often provides few clues for interpreting the prevalence of adversarial examples [17; 19; 39; 56]. To address the above limitations, this paper presents a novel instance-specific adversarial robustness metric, the Adversarial Converging Time Score (ACTS). Unlike CLEVER [51], ACTS does not use an exact lower bound of the minimum adversarial perturbation as the robustness metric. Instead, ACTS estimates the desired robustness based on the \(\Delta_{p,min}\) in the direction guided by an adversarial attack. ACTS is resilient, which means that if an attack method can deliver a \(\Delta_{p,min}\) attack, then the robustness estimated by ACTS reflects this fact. The insight behind the proposed ACTS is the geometrical characteristics of a DNN-based classifier's output manifold. Specifically, given an \(M\)-dimensional input, each output element can be regarded as a point on an \(M+1\) dimensional hypersurface. Adding adversarial perturbations can be regarded as forcing the original output elements to move to new positions on those hypersurfaces. The movement driven by effective perturbations should push all output elements to a converging curve (_i.e._, the intersection of two or more hypersurfaces), where a clean input is converted to an adversarial one. Since the local areas around different points on the hypersurfaces have different curvatures, different clean samples require different amounts of time to converge to adversarial examples. The proposed ACTS measures this converging time and uses it as the adversarial robustness metric. To summarize, this paper has the following contributions. We propose a novel Adversarial Converging Time Score (ACTS) method for measuring the adversarial robustness of deep neural networks. Our method leverages the geometric characteristics of a DNN's output manifolds, so it is effective, efficient and easy to understand. We provide a mathematical analysis to justify the correctness of the proposed ACTS and extensive experiments to demonstrate its superiority under different adversarial attacks. This paper is organized as follows. We first review the related work in Section 2. In Section 3, we describe the proposed method. Results from comparative experiments for different architectures and adversarial attack approaches are then given in Section 4. We conclude and envision future work in Section 5. ## 2 Related Work ### Adversarial Attacks Over the past few years, extensive efforts have been made in developing new methods to generate adversarial samples [9; 13; 20; 33; 36; 53; 57; 55]. Szegedy _et al._ [48] proposed the L-BFGS algorithm to craft adversarial samples and showed the transferability property of these samples. Goodfellow _et al._ [21] proposed the Fast Gradient Sign Method (FGSM), a fast approach for generating adversarial samples by adding a perturbation proportional to the sign of the cost function's gradient. Rather than adding a perturbation over the entire image, Papernot _et al._ [40] proposed the Jacobian Saliency Map Approach (JSMA), which utilizes adversarial saliency maps to perturb the most sensitive input components.
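As a reference point for the iterative variants discussed next, the FGSM update just described is a single gradient-sign step; a minimal PyTorch sketch, where `model`, `x`, `y`, and `eps` are illustrative stand-ins:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step attack: x_adv = x + eps * sign of the input gradient of the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)        # the cost function J(x, y)
    grad = torch.autograd.grad(loss, x)[0]     # gradient w.r.t. the input
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```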
Kurakin _et al._ [26] extended the FGSM algorithm to the Basic Iterative Method (BIM), which recurrently adds smaller adversarial noises. Madry _et al._ [34] proposed the Projected Gradient Descent (PGD) attack by extending BIM with a random start point. Carlini _et al._ [7] proposed an efficient method (_i.e._, the CW attack) to compute good approximations while keeping the computational cost of perturbing examples low. It further defined three similar targeted attacks based on different distortion measures (\(L_{0}\), \(L_{2}\), and \(L_{\infty}\)). It is to be noted that all the above-mentioned attacks are white-box attacks that craft adversarial examples based on the input gradient. In the classical black-box attack, the adversarial algorithm has no knowledge of the architectural choices made to design the original architecture. There are different ways to generate adversarial samples [14, 15, 16, 10, 16, 23, 54, 6] under black-box schemes. Since this paper focuses on white-box attacks, for more detailed information, readers may refer to [1]. ### Adversarial Defenses This line of work focuses on developing robust deep models to defend against adversarial attacks [18, 28, 49]. Goodfellow _et al._ [21] proposed the first adversarial defence method that uses adversarial training, in which the model is re-trained with both adversarial images and the original clean dataset. A series of works [24, 34, 57] follow this adversarial training scheme but investigate different adversarial attacks to generate different adversarial data. Papernot _et al._ [38] extended defensive distillation [41] (which is one of the mechanisms proposed to mitigate adversarial examples) to address its limitation. They revisited the defensive distillation approach and used soft labels to train the distilled model. The resultant model was robust to attacks. Liang _et al._ [31] proposed a method where the perturbations to the input images are regarded as a kind of noise, and noise reduction techniques are used to reduce the adversarial effect. In their method, classical image processing operations such as scalar quantization and smoothing spatial filters were used to reduce the effect of perturbations. Bhagoji _et al._ [5] proposed dimensionality reduction as a defense against attacks on different machine learning classifiers. Another effective defense strategy in practice is to construct an ensemble of individual models [27]. Following this idea, Liu _et al._ [32] proposed the random self-ensemble method to defend against attacks by averaging the predictions over random noises injected into the model. Pang _et al._ [37] proposed to promote diversity among the predictions of different models by introducing an adaptive diversity-promoting regularizer. However, these methods do not have an ideal robustness metric to help them correctly evaluate and improve their performance. ### Robustness Metrics With the development of adversarial attacks, there is a need for a robustness metric that quantifies the performance of a DNN against adversarial samples. A straightforward method is to use a specific attack method to find adversarial examples and to use the distortions of the adversarial examples (_i.e._, an upper bound of \(\Delta_{p,min}\)) as the model robustness metric. For example, Bastani _et al._ [4] proposed a linear programming formulation to find adversarial examples and directly used the \(l_{p}\)-distortion as the robustness metric.
Moosavi-Dezfooli _et al._ [36] proposed to compute a _minimal_ perturbation for a given image in an iterative manner, in order to find the _minimal_ adversarial samples across the boundary. They then define the expectation of the _minimal_ perturbations over the data distribution as the robustness metric. Other methods focus on estimating the lower bound of \(\Delta_{p,min}\) and use it as the evaluation metric. Weng _et al._ [50] exploited the ReLU property to bound the activation function (or the local Lipschitz constant) and provided two efficient algorithms (Fast-Lin and Fast-Lip) for computing a certified lower bound. Zhang _et al._ [56] proposed a general framework, CROWN, for computing a certified lower bound of the minimum adversarial distortion and showed that the Fast-Lin algorithm is a special case under the CROWN framework. Recently, a robustness metric called CLEVER [51] was developed, which first estimates the local Lipschitz constant using extreme value theory and then computes an attack-agnostic robustness score based on the first-order Lipschitz continuity condition. It can be scaled to deep networks and large datasets. However, the lower bound estimation of CLEVER is often incorrect and time-consuming; therefore, it is hard for CLEVER to serve as a robust and effective adversarial robustness metric. ## 3 Methodology ### Adversarial Converging Time Score **Adversarial Attacks in Image Classification** Given an \(M\)-dimensional input \(x\in\mathbf{R}^{M}\) and a K-class classification loss function \(D:\mathbf{R}^{M}\rightarrow\mathbf{R}^{K}\), the predicted class label \(t\) of the input \(x\) is defined as: \[t=C(x)=\operatorname*{argmin}_{j}\{y_{j}\mid y_{j}\in\mathbf{R}^{1}\}, \tag{1}\] where \(y_{j}\) is the \(j\)th element of the \(K\)-dimensional output of \(D(x)\). From a geometrical point of view, \(y_{j}\) can be regarded as a point on an \(M+1\) dimensional hypersurface \(m_{j}\) (see Fig. 1 (a)). DNN-based classifiers are typically non-linear systems, which is true for all state-of-the-art DNN models; in this case, the hypersurfaces \(m\) defined by \(D\) are also non-linear. Thus, local areas around different points on a hypersurface \(m_{j}\) have different curvatures, which results in different inputs having different sensitivities to the same added noise \(\delta x\). As shown in Fig. 1 (a), the changes on a hypersurface \(m_{j}\) driven by the same \(\delta x\) are significantly different in terms of magnitude. Inspired by this insight, we propose a novel Adversarial Converging Time Score (ACTS) as an instance-specific adversarial robustness metric. The key to the proposed ACTS is that the sensitivity is mapped to the "time" required to reach the converging curve (_i.e._, the decision boundary) where a clean sample is converted to an adversarial sample. We first introduce the proposed ACTS in detail. Then, we provide a toy example to validate the proposed approach. **Adversarial Converging Time Score (ACTS)** To easily convey the intuitive idea of the proposed ACTS, we use a 1D input domain and 2D hypersurfaces (_i.e._, lines). Fig. 1 (b) shows our idea intuitively. Figure 1: (a) An example of a 3D hypersurface, (b) intuition behind our ACTS. As we can see, based on Eq. 1, the original input \(x\) is classified as the \(t\)th class since \(y_{t}\) is in a lower position than \(y_{j}\) in the loss domain. Although adding a noise \(\delta x\) to \(x\) results in two new positions \(y^{\prime}_{t}\) and \(y^{\prime}_{j}\), the predicted label of \(x+\delta x\) is not changed (still \(t\)).
If \(x+\delta x\) passes the converging point, the predicted label of \(x+\delta x\) changes to \(j\). From this point of view, the robustness of an input can be reflected by the magnitude of the added \(\delta x\) needed to reach the converging point. For a DNN-based classifier, the collection of converging points forms the decision boundary. However, such a decision boundary is extremely hard to estimate, especially in a high-dimensional space. Instead, we can look at the converging point from the perspective of the loss domain, where the distance between \(y^{\prime}_{j}\) and \(y^{\prime}_{t}\) is 0. In other words, the robustness of an input can be reflected by the time used to cover the distance \(y_{j}-y_{t}\), _i.e._, the less time it requires, the less robust it is. Compared to decision boundary estimation, estimating the distance \(y_{j}-y_{t}\) is much easier. Hence, we propose the ACTS to estimate such time, which takes the following form: \[ACTS\coloneqq\min_{j}\left(f\left(\frac{y_{j}-y_{t}}{s_{t}-s_{j}}\right)\right)\quad j\in 1\dots K,\;j\neq t, \tag{2}\] \[f(x)=\begin{cases}C,&x\leq 0\\ x,&x>0\end{cases}\] where \(s_{j}\) and \(s_{t}\) are the moving speeds in the loss domain, which are driven by the added noise \(\delta x\). However, the minus sign in the denominator may be a bit tricky. An ideal misclassification attack should increase the target error value, resulting in a positive \(s_{t}\), and it should also decrease the error value of the potential misclassified class, which gives a negative \(s_{j}\) (as shown in Fig. 1 (b)). Hence, the value of \(s_{t}-s_{j}\) should always be positive. However, \(s_{t}-s_{j}\) could be a negative value in the following situations: (a) \(s_{t}\) decreases and \(s_{j}\) increases; (b) both \(s_{t}\) and \(s_{j}\) decrease, but \(s_{t}\) decreases faster; (c) both \(s_{t}\) and \(s_{j}\) increase, but \(s_{j}\) increases faster. If any of the above cases happens to an input, it means it is impossible to deliver a successful attack, and hence the ACTS of that specific input is the maximum score \(C\). The \(f(x)\) used in Eq. (2) is for this purpose. Since ACTS represents the time to cover the distance \(y_{j}-y_{t}\) at a speed \(s_{t}-s_{j}\), an input with a smaller ACTS is more vulnerable to an adversarial attack, and vice versa. The key to the proposed ACTS is to estimate the moving speed. However, a local neighborhood on an output hypersurface is non-linear, so it is very challenging to estimate the moving speed directly. To this end, we propose a novel \(DJM\)-based scheme to estimate the required moving speed, which takes the non-linear nature of an output hypersurface into account. **Data Jacobian Matrix** Given an input \(x\), the Data Jacobian Matrix (DJM) of \(D\) is defined as: \[DJM(x)=\frac{\partial D(x)}{\partial x}=\left[\frac{\partial D_{j}(x)}{\partial x_{i}}\right]_{j\in 1\dots K,i\in 1\dots M} \tag{3}\] On a hypersurface \(m_{j}\), the \(DJM_{j}(x)\) (i.e., the \(j\)th row of \(DJM(x)\)) defines the best linear approximation of \(D\) for points close to \(x\) [52]. Therefore, with \(DJM(x)\), a small change \(\delta x\) in the input domain of \(D\) can be linearly mapped to the change on the hypersurfaces \(m_{j}\). Mathematically, it can be described as: \[D(x+\delta x)=D(x)+DJM(x)\times\delta x+\delta e, \tag{4}\] where \(\delta e\in R^{K}\) is the approximation error. Essentially, \(DJM(x)\) is very similar to the gradient backpropagated through a DNN during a training process.
The only difference is that \(DJM(x)\) is differentiated with respect to the input \(x\) rather than the network parameters. **One-step attack** Based on Eq. (4), with an input \(x\) and an added noise \(\delta x\), the original point \(y_{j}\) (a.k.a. \(D_{j}(x)\)) is shifted to the point \(y^{\prime}_{j}\) on the hypersurface \(m_{j}\), and the approximated shifted position of \(y^{\prime}_{j}\) can be estimated as (shown in Fig. 1 (b)): \[y^{\prime}_{j}\approx D_{j}(x)+DJM_{j}(x)\times\delta x, \tag{5}\] where \(DJM_{j}(x)\) is the \(j\)th row of \(DJM(x)\). For a one-step attack (_e.g._, FGSM), \(\delta x\) can be regarded as a vector \(\vec{d}\). The direction of \(\vec{d}\) is fixed, and only the length of \(\vec{d}\) varies for delivering a successful attack. Therefore, the moving speed \(s_{j}\) from point \(y_{j}\) to \(y^{\prime}_{j}\) on the surface \(m_{j}\), driven by the shift \(\delta x\) in the input domain, can be estimated as: \[s_{j}=\frac{y^{\prime}_{j}-y_{j}}{\|\delta x\|}\approx\frac{DJM_{j}(x)\times\delta x}{\|\delta x\|}\quad j\in 1\dots K. \tag{6}\] It is worth mentioning that the \(DJM\) is a linear approximation for a small \(\delta x\); the approximation accuracy decreases as \(\delta x\) increases. **Multi-step attack** In a multi-step attack (_e.g._, BIM), each step changes \(\vec{d}\) (_i.e._, \(\delta x\)) in terms of both direction and length. Compared to one-step attacks, the different directions reveal more curvatures of a local neighborhood, which increases the probability of discovering a more optimal moving speed to reduce the "time" (_i.e._, added noise) for converting a clean sample to an adversarial one. That is also the reason that multi-step attacks are more effective than one-step attacks. However, the dynamics introduced by multi-step attacks also make it troublesome to estimate the desired moving speed. To deal with this, we propose an average moving speed from \(y_{j}\) to \(y^{\prime}_{j}\) based on all explored directions as follows: \[s_{j}\approx\frac{1}{N}\sum_{q=1}^{N}\frac{DJM_{j}(x)\times\delta x_{q}}{\|\delta x_{q}\|}\quad j\in 1\dots K, \tag{7}\] where \(N\) is the total number of steps used in the multi-step attack, and \(\delta x_{q}\) is the added noise in the \(q\)th step. Even though the estimated average speed has limited accuracy, our experiments show its effectiveness. ### Toy Example We design a toy experiment to validate the proposed ACTS, where a simple two-layer feed-forward network was trained to approximate an AND gate. The testing accuracy of the trained model was 99.7%. Figure 2: (a) the output of the toy example AND gate model, (b) the ACTS distribution of the toy example AND gate model for all input samples where \(x_{1}\wedge x_{2}=1\). Mathematically, we define the AND gate as: \[x_{1}\wedge x_{2}=1,\quad x_{1}\geq 0.5\text{ and }x_{2}\geq 0.5\] \[x_{1}\wedge x_{2}=0,\quad\text{ otherwise}\] where \(x_{i}\in[0,1.0]\). Based on this definition, as shown in Fig. 2 (a), [0.5, 1.0] are the decision boundaries on both the \(x_{1}\) and \(x_{2}\) axes, where lower ACTSs are expected. We use the FGSM method, with \(\epsilon=0.1\), to generate adversarial samples only for the clean sample pairs with \(x_{1}\wedge x_{2}=1\). For the remaining pairs, ACTSs are set to 0. As shown in Fig. 2 (b), the input pairs closer to the decision boundary have lower ACTSs. Also, we observe an increasing trend in the ACTSs as the input values move further away from the boundary. The maximum ACTS is observed at the point \((x_{1},x_{2})=(1.0,1.0)\).
These observations illustrate that the proposed ACTS is able to reflect robustness under the FGSM attack. ## 4 Experiments In this section, we first validate the effectiveness and generalization capacity of the proposed ACTS metric against different state-of-the-art DNN models and adversarial attack approaches on the ImageNet [11] dataset in Section 4.2. We then compare the proposed ACTS with CLEVER [51] (the only method that can be adapted to deep models and the large-scale ImageNet dataset), to show that our method provides a more effective and practical robustness metric in different adversarial settings in Section 4.3. ### Experimental Setting **Evaluation dataset and methods** To evaluate the effectiveness of the proposed method on large-scale datasets, we choose the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 dataset, which has 1.2 million training and 50,000 validation images. We evaluate our method on three representative state-of-the-art deep networks with pre-trained models provided by PyTorch [42], _i.e._, InceptionV3 [47], ResNet50 [22] and VGG16 [44], as these deep networks have distinct architectures. To evaluate the robustness of our method against different attacks, we consider three different state-of-the-art white-box attack approaches, _i.e._, FGSM [21], BIM [26], and PGD [34]. **Implementation details** We have implemented our ACTS using the PyTorch framework, and all attack methods using the adversarial robustness PyTorch library Torchattacks [43]. A GPU server with an Intel E5-2650 v4 2.20GHz CPU (with 32GB RAM) and one NVIDIA Tesla V100 GPU (with 24GB memory) is used in our experiments. For preprocessing, we normalize the data using the mean and standard deviation. The images are loaded in the range of \([0,1]\) and then normalized using \(mean=[0.485,0.456,0.406]\) and \(std=[0.229,0.224,0.225]\) [42]. To control the noise levels so as not to introduce any noticeable perceptual differences, and to show the consistent performance of the proposed ACTS, we add noise of three different levels, \(\epsilon=\{0.00039\ (0.1/255),\ 0.00078\ (0.2/255),\ 0.00117\ (0.3/255)\}\), to each of FGSM [21], BIM [26] and PGD [34]. We use N1, N2 and N3 to represent these three noise levels, respectively. We use three steps and set the step size to \(\epsilon/2\) for both BIM [26] and PGD [34]. We use the untargeted attack setting in all attacks. For each image, we evaluate its top-10 classes (_i.e._, the classes with the top-10 maximum probabilities excluding the true class, which are usually the easiest targets to attack) [51] in the \(DJM\). ### ACTS Validation Results This section evaluates the effectiveness and generalization properties of our proposed ACTS method in various adversarial environments. **Evaluating the effectiveness of ACTS** To be an effective adversarial robustness metric, the proposed ACTS should faithfully reflect that samples with lower ACTS scores are more prone to be attacked successfully than those with higher scores. To validate this property of the ACTS, we design the following experiments. First, we apply the three DNN models to the ImageNet validation dataset and select the correctly classified images. Secondly, we estimate the ACTS scores for all the selected images and apply the three chosen attack methods to them.
Third, in order to show the consistent performance of the proposed ACTS, we increase the noise level to N1, N2 and N3 and record the ACTS scores of the images that are successfully attacked. Fig. 3 shows the histograms for the three chosen DNN models under the different attacks. The blue color indicates the ACTS scores of the images that are correctly classified on the ImageNet validation dataset under the initial noise \(\epsilon\) = 0.0002, and the other three colors indicate the ones that are attacked successfully at noise levels N1, N2 and N3.
Figure 3: Noise effectiveness charts for different models under different attacks. The area under the blue color denotes ACTS scores for the correctly classified samples on the ImageNet validation dataset. Green, yellow and red colors denote the ACTS scores of the samples that were successfully attacked; each color denotes the noise level added to the dataset for the corresponding attack.
For all the models and attacks, the green, yellow and red regions are always on the very left side of the respective charts. This shows that inputs with lower ACTS scores are easier to attack successfully. We can also see that with increased noise levels, images with relatively lower ACTS scores are attacked successfully first (from green to red). In addition, Fig. 3 and Table 1 show that these observations are consistent in different adversarial environments (_i.e._, different DNN architectures and different attacks). Based on the distribution of the obtained ACTS, we are able to gain a relatively precise intuition about DNNs' performance under different attack methods. For example, within each row of Fig. 3, the green, yellow and red ACTS histograms cover a much wider range, which indicates that BIM and PGD are more powerful attack methods than FGSM. This observation is confirmed by the corresponding adversarial accuracy rates shown in Table 1. In addition to the qualitative results in Fig. 3, we also present quantitative results to show the effectiveness of ACTS. Fig. 4, Fig. 5 and Fig. 6 show the detailed histogram results in different adversarial environments. The orange color indicates the samples that are attacked successfully, and the light blue color indicates the ones that are attacked unsuccessfully. Only an ideal robustness metric could separate the two groups without any overlap, and existing approaches may have different overlap regions between the two groups. Hence, the size of the overlap region can be leveraged as an indicator of the effectiveness of a robustness metric. For each histogram, we calculate the overlap percentage as \(S_{o}/S_{a}\), where \(S_{o}\) is the size (_i.e._, count) of the overlap area, and \(S_{a}\) is the total area of the histogram. Hence, for Overlap%, the lower the value, the better the evaluation result. All results are shown in Table 2. As we can see, almost all Overlap% values are below 10%.
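Since Overlap% is the main quantitative indicator used here, a minimal sketch of how it could be computed from the two score populations is given below. The bin count and the reading of \(S_{a}\) as the combined histogram mass are assumptions on our part; the text only specifies \(S_{o}/S_{a}\).

```python
import numpy as np

def overlap_percent(scores_success, scores_fail, bins=50):
    """Overlap% = S_o / S_a: shared mass between the ACTS-score histograms of
    successfully and unsuccessfully attacked samples (lower is better)."""
    lo = min(scores_success.min(), scores_fail.min())
    hi = max(scores_success.max(), scores_fail.max())
    h_s, _ = np.histogram(scores_success, bins=bins, range=(lo, hi))
    h_f, _ = np.histogram(scores_fail, bins=bins, range=(lo, hi))
    s_o = np.minimum(h_s, h_f).sum()       # count falling in the overlap area
    s_a = h_s.sum() + h_f.sum() - s_o      # total area of the combined histogram
    return 100.0 * s_o / s_a

# Example with synthetic, well-separated score populations:
rng = np.random.default_rng(0)
print(overlap_percent(rng.normal(1.0, 0.4, 5000), rng.normal(2.5, 0.6, 5000)))
```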
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Attack} & \multirow{2}{*}{Model} & \multirow{2}{*}{Clean} & \multicolumn{3}{c}{Adversarial Accuracy} \\ & & & N1 & N2 & N3 \\ \hline \multirow{3}{*}{FGSM} & InceptionV3 & 77.21\% & 61.16\% & 50.03\% & 43.05\% \\ & ResNet50 & 76.13\% & 57.45\% & 43.94\% & 34.77\% \\ & VGG16 & 71.59\% & 52.35\% & 37.73\% & 27.71\% \\ \hline \multirow{3}{*}{BIM} & InceptionV3 & 77.21\% & 55.10\% & 43.05\% & 36.68\% \\ & ResNet50 & 76.13\% & 50.08\% & 34.77\% & 25.47\% \\ & VGG16 & 71.59\% & 44.38\% & 27.71\% & 18.46\% \\ \hline \multirow{3}{*}{PGD} & InceptionV3 & 77.21\% & 60.20\% & 45.86\% & 36.31\% \\ & ResNet50 & 76.13\% & 56.13\% & 39.15\% & 26.56\% \\ & VGG16 & 71.59\% & 51.77\% & 34.48\% & 22.28\% \\ \hline \hline \end{tabular} \end{table} Table 1: Clean and adversarial accuracy in different adversarial environments.
\begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Attack} & \multirow{2}{*}{Model} & Overlap\% & Overlap\% & Overlap\% \\ & & N1 & N2 & N3 \\ \hline \multirow{3}{*}{FGSM} & InceptionV3 & 1.46\% & 3.54\% & 4.71\% \\ & ResNet50 & 2.95\% & 6.42\% & 9.14\% \\ & VGG16 & 2.14\% & 4.29\% & 5.71\% \\ \hline \multirow{3}{*}{BIM} & InceptionV3 & 2.53\% & 4.71\% & 6.26\% \\ & ResNet50 & 4.89\% & 9.13\% & 10.89\% \\ & VGG16 & 3.02\% & 5.71\% & 6.47\% \\ \hline \multirow{3}{*}{PGD} & InceptionV3 & 1.33\% & 3.26\% & 4.85\% \\ & ResNet50 & 1.72\% & 4.70\% & 6.62\% \\ & VGG16 & 1.87\% & 3.73\% & 4.89\% \\ \hline \hline \end{tabular} \end{table} Table 2: ACTS Overlap\% values in different adversarial environments.
Figure 4: ACTS score histograms of **InceptionV3** in different experimental configurations. In each histogram, the orange color indicates the samples that are attacked successfully, and the light blue color indicates the ones that are attacked unsuccessfully.
Figure 5: ACTS score histograms of **ResNet50** in different experimental configurations. In each histogram, the orange color indicates the samples that are attacked successfully, and the light blue color indicates the ones that are attacked unsuccessfully.
In terms of DNN architecture, ACTS shows better performance on InceptionV3 and VGG16. We conjecture that the reason is that the local areas of the output hypersurfaces of InceptionV3 and VGG16 around the output points of the tested images are flatter (_i.e._, have smaller curvature) than those of ResNet50. In this case, the \(DJM\) provides a more accurate linear approximation. It is worth mentioning that the overlap area grows as \(\epsilon\) increases in the different adversarial environments. This is consistent with the limitation of the DJM: the linear approximation accuracy decreases as \(\delta x\) increases. During this analysis, we found an interesting phenomenon that we call the _attack flip_: an image with a successful attack at a lower noise level may fail at a higher noise level. The result is shown in Table 3. The _attack flip_ is a good explanation for why there are very small ACTS scores in the overlap at a higher noise level: some small ACTS scores are counted in the orange histogram at a lower noise level and then counted in the blue histogram at a higher noise level, leaving small ACTS scores in the overlap at the higher noise level. Besides, another reason lies in a limitation of the DJM itself.
The linear approximation accuracy of the DJM decreases as \(\delta x\) increases, which introduces estimation error. The _attack flip_ also suggests that the lower bound may not always be meaningful. **Evaluating the Generalization of ACTS** In Fig. 3, each row of histograms represents the results of the same model under different attacks, and each column represents the results of different models under the same attack method. From the results, we can see that ACTS has good generalization ability across different attack methods and models. **Correlations to CLEVER** We are interested in whether our ACTS aligns with CLEVER. To this end, we compute the average score of all the tested images as CLEVER's reported robustness number. The higher the CLEVER score, the more robust the model. We also calculate the average ACTS score of all the tested images to represent the robustness of the network. From the results shown in Table 4, we obtain the same robustness ranking as CLEVER. This also demonstrates that models with higher ACTS scores are more robust. The results we obtained are largely consistent with the results in Table 3(b) of CLEVER (the Top-2 Target column) [51]. Besides, we conclude that the VGG16 model, which has the highest scores, is more robust than the other models on the test image set. This conclusion can also be found in [46]. It is worth mentioning that the score distribution may change dramatically on different test image sets. **Determining k** To investigate the impact of the top-\(k\) classes in the \(DJM\), for each image we evaluate its top-\(k\) class ACTS scores in Table 5. From the results, we can see that as \(k\) increases, the Overlap% values change only slightly. Considering the balance between computational cost and ACTS performance, it is reasonable to set \(k\) to 10. ### Comparing With the State-of-the-art CLEVER We compare our method with the state-of-the-art method CLEVER in this section. The CLEVER score is designed to estimate the lower bound on the minimal distortion required to craft an adversarial sample, and it uses \(L_{2}\) and \(L_{\infty}\) norms for validation. We follow the setting in [51] to compute CLEVER \(L_{2}\) and \(L_{\infty}\) norm scores for 1,000 images out of all 5,000 ImageNet validation images used, as CLEVER is more computationally expensive. The same set of randomly selected 1,000 images from the ImageNet validation set is also used for our method. Instead of sampling a high-dimensional ball, our method only requires normal backpropagations, which is significantly faster than CLEVER. Our experimental results in Table 6 confirm this. For each image, we calculate its CLEVER and ACTS scores on an NVIDIA Tesla V100 graphics card; the average computation speed of our method is three orders of magnitude faster than the CLEVER method across the different models. We also use the Overlap% indicator to compare the effectiveness of the different robustness metrics, inspired by the ROC curve, which visualizes all possible classification thresholds to quantify the performance of a classifier. Since ACTS and CLEVER only care about whether the distribution of image scores is consistent with the successful/unsuccessful attack results in different adversarial environments, we can use the Overlap% indicator as a "mis-classification rate". In Table 7, we report the \(L_{2}\) CLEVER, \(L_{\infty}\) CLEVER and ACTS Overlap% values, respectively.
From the results, we can see that the \(L_{2}\) CLEVER and \(L_{\infty}\) CLEVER Overlap% values mostly fall in the range 10%-20%, while the ACTS Overlap% values mostly fall in the range 0%-10%. On average, CLEVER scores have more than twice as large Overlap% values for all testing configurations. Even though \(L_{\infty}\) CLEVER scores give slightly lower Overlap% values than those based on \(L_{2}\) CLEVER scores, ACTS still outperforms both by a significant margin in all testing configurations. These results indicate that ACTS is a more effective metric than CLEVER in different adversarial environments.
\begin{table} \begin{tabular}{c c c} \hline \hline Model & CLEVER & ACTS \\ \hline VGG16 & 0.370 & 4.459 \\ \hline InceptionV3 & 0.215 & 3.047 \\ \hline ResNet50 & 0.126 & 2.558 \\ \hline \hline \end{tabular} \end{table} Table 4: Using the CLEVER and ACTS scores to measure the robustness ranking of different models.
\begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{3}{*}{Model} & \multirow{3}{*}{Metric} & Average & ACTS \\ & & Computation & speed-up \\ & & Time (second) & \\ \hline \multirow{2}{*}{InceptionV3} & CLEVER & 331.42 & \multirow{2}{*}{6628/2549} \\ & ACTS & 0.05/0.13 & \\ \hline \multirow{2}{*}{ResNet50} & CLEVER & 196.25 & \multirow{2}{*}{4906/2181} \\ & ACTS & 0.04/0.09 & \\ \hline \multirow{2}{*}{VGG16} & CLEVER & 286.85 & \multirow{2}{*}{5737/2207} \\ & ACTS & 0.05/0.13 & \\ \hline \hline \end{tabular} \end{table} Table 6: The average computation time of CLEVER and ACTS on different models for a single image in ImageNet. Blue and red fonts in the third column represent the ACTS average computation time under one-step and multi-step attacks, respectively. The fourth column is the corresponding speed-up factor.
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Attack} & \multirow{2}{*}{Model} & \multirow{2}{*}{Metric} & Overlap\% & Overlap\% & Overlap\% \\ & & & N1 & N2 & N3 \\ \hline \multirow{8}{*}{FGSM} & \multirow{3}{*}{InceptionV3} & ACTS-10 & 1.56\% & 2.08\% & 5.2\% \\ & & ACTS-20 & 1.56\% & 2.08\% & 5.2\% \\ & & ACTS-50 & 1.56\% & 2.08\% & 5.2\% \\ & \multirow{2}{*}{ResNet50} & ACTS-10 & 2.39\% & 7.98\% & 9.84\% \\ & & ACTS-20 & 2.39\% & 7.98\% & 9.71\% \\ & \multirow{3}{*}{VGG16} & ACTS-10 & 1.3\% & 3.46\% & 6.63\% \\ & & ACTS-20 & 1.3\% & 3.46\% & 6.77\% \\ & & ACTS-50 & 1.3\% & 3.46\% & 6.92\% \\ \hline \multirow{8}{*}{BIM} & \multirow{3}{*}{InceptionV3} & ACTS-10 & 1.43\% & 5.2\% & 6.37\% \\ & & ACTS-20 & 1.43\% & 5.2\% & 6.37\% \\ & & ACTS-50 & 1.43\% & 5.2\% & 6.37\% \\ & \multirow{2}{*}{ResNet50} & ACTS-10 & 4.39\% & 9.84\% & 10.9\% \\ & & ACTS-20 & 4.26\% & 9.71\% & 11.17\% \\ & \multirow{3}{*}{VGG16} & ACTS-10 & 2.74\% & 3.63\% & 5.91\% \\ & & ACTS-20 & 2.74\% & 3.63\% & 5.91\% \\ & & ACTS-50 & 2.74\% & 3.63\% & 5.91\% \\ \hline \multirow{9}{*}{PGD} & \multirow{3}{*}{InceptionV3} & ACTS-10 & 0.91\% & 3.25\% & 4.81\% \\ & & ACTS-20 & 0.91\% & 3.25\% & 4.42\% \\ & & ACTS-50 & 0.91\% & 2.99\% & 4.68\% \\ & \multirow{3}{*}{ResNet50} & ACTS-10 & 0.93\% & 5.32\% & 6.12\% \\ & & ACTS-20 & 0.93\% & 5.45\% & 6.52\% \\ & & ACTS-50 & 0.93\% & 5.19\% & 6.38\% \\ & \multirow{3}{*}{VGG16} & ACTS-10 & 1.3\% & 3.03\% & 4.61\% \\ & & ACTS-20 & 1.44\% & 3.17\% & 4.61\% \\ & & ACTS-50 & 1.44\% & 3.17\% & 4.76\% \\ \hline \hline \end{tabular} \end{table} Table 5: Top-\(k\) class ACTS Overlap\% values in different adversarial environments.
## 5 Conclusion and Future Work
In this work, we have proposed the Adversarial Converging Time Score (ACTS) as an instance-specific adversarial robustness metric.
ACTS is inspired by geometrical insight into the output hypersurfaces of a DNN classifier. We perform a comprehensive set of experiments to substantiate the effectiveness and generalization of our proposed metric. Compared to CLEVER, we show that ACTS provides a faster and more effective adversarial robustness prediction for different attacks across various DNN models. More importantly, ACTS approaches the adversarial robustness problem from a geometrical point of view. We believe it provides a meaningful angle on, and insight into, the adversarial robustness problem, which will help future work in the same vein. In the future, we will focus on improving DNNs' adversarial performance by leveraging the proposed ACTS. Another interesting direction is to extend ACTS to work under black-box attack methods.
## Acknowledgments
This work was supported in part by the National Key Research and Development Program of China (2022ZD0210500), the National Natural Science Foundation of China under Grants 61972067/U21A20491/U1908214, and the Distinguished Young Scholars Funding of Dalian (No. 2022RJ01).
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Attack} & \multirow{2}{*}{Model} & \multirow{2}{*}{Metric} & \multicolumn{1}{c}{Overlap\%} & Overlap\% & Overlap\% \\ & & & N1 & N2 & N3 \\ \hline \multirow{9}{*}{FGSM} & \multirow{3}{*}{InceptionV3} & \(L_{2}\) CLEVER & 14.34\% & 15.36\% & 17.80\% \\ & & \(L_{\infty}\) CLEVER & 13.7\% & 15.88\% & 17.67\% \\ & & ACTS & **1.56\%** & **2.08\%** & **5.2\%** \\ & \multirow{3}{*}{ResNet50} & \(L_{2}\) CLEVER & 17.11\% & 18.6\% & 19.14\% \\ & & \(L_{\infty}\) CLEVER & 13.61\% & 15.5\% & 17.25\% \\ & & ACTS & **2.39\%** & **7.98\%** & **9.84\%** \\ & \multirow{3}{*}{VGG16} & \(L_{2}\) CLEVER & 10.43\% & 11.59\% & 13.48\% \\ & & \(L_{\infty}\) CLEVER & 8.99\% & 10.43\% & 12.75\% \\ & & ACTS & **1.3\%** & **3.46\%** & **6.63\%** \\ \hline \multirow{9}{*}{BIM} & \multirow{3}{*}{InceptionV3} & \(L_{2}\) CLEVER & 11.91\% & 12.16\% & 10.88\% \\ & & \(L_{\infty}\) CLEVER & 11.91\% & 11.65\% & 10.5\% \\ & & ACTS & **1.43\%** & **5.2\%** & **6.37\%** \\ & \multirow{3}{*}{ResNet50} & \(L_{2}\) CLEVER & 16.44\% & 16.31\% & 14.42\% \\ & & \(L_{\infty}\) CLEVER & 13.34\% & 15.23\% & 13.34\% \\ & & ACTS & **4.39\%** & **9.84\%** & **10.9\%** \\ & \multirow{3}{*}{VGG16} & \(L_{2}\) CLEVER & 10.72\% & 11.45\% & 14.2\% \\ & & \(L_{\infty}\) CLEVER & 9.13\% & 10.58\% & 13.19\% \\ & & ACTS & **2.74\%** & **3.63\%** & **5.91\%** \\ \hline \multirow{9}{*}{PGD} & \multirow{3}{*}{InceptionV3} & \(L_{2}\) CLEVER & 12.04\% & 11.4\% & 11.91\% \\ & & \(L_{\infty}\) CLEVER & 11.01\% & 11.27\% & 11.78\% \\ & & ACTS & **0.91\%** & **3.25\%** & **4.81\%** \\ & \multirow{3}{*}{ResNet50} & \(L_{2}\) CLEVER & 14.15\% & 15.77\% & 14.82\% \\ & & \(L_{\infty}\) CLEVER & 12.13\% & 12.94\% & 12.8\% \\ & & ACTS & **0.93\%** & **5.32\%** & **6.12\%** \\ & \multirow{3}{*}{VGG16} & \(L_{2}\) CLEVER & 7.54\% & 10.14\% & 13.77\% \\ & & \(L_{\infty}\) CLEVER & 7.68\% & 8.7\% & 12.17\% \\ & & ACTS & **1.3\%** & **3.03\%** & **4.61\%** \\ \hline \hline \end{tabular} \end{table} Table 7: Comparing ACTS with CLEVER Overlap\% values in different adversarial environments.
2303.04347
Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks
Spiking Neural Networks (SNNs) have gained great attraction due to their distinctive properties of low power consumption and fast inference on neuromorphic hardware. As the most effective method to get deep SNNs, ANN-SNN conversion has achieved comparable performance as ANNs on large-scale datasets. Despite this, it requires long time-steps to match the firing rates of SNNs to the activation of ANNs. As a result, the converted SNN suffers severe performance degradation problems with short time-steps, which hamper the practical application of SNNs. In this paper, we theoretically analyze ANN-SNN conversion error and derive the estimated activation function of SNNs. Then we propose the quantization clip-floor-shift activation function to replace the ReLU activation function in source ANNs, which can better approximate the activation function of SNNs. We prove that the expected conversion error between SNNs and ANNs is zero, enabling us to achieve high-accuracy and ultra-low-latency SNNs. We evaluate our method on CIFAR-10/100 and ImageNet datasets, and show that it outperforms the state-of-the-art ANN-SNN and directly trained SNNs in both accuracy and time-steps. To the best of our knowledge, this is the first time to explore high-performance ANN-SNN conversion with ultra-low latency (4 time-steps). Code is available at https://github.com/putshua/SNN\_conversion\_QCFS
Tong Bu, Wei Fang, Jianhao Ding, PengLin Dai, Zhaofei Yu, Tiejun Huang
2023-03-08T03:04:53Z
http://arxiv.org/abs/2303.04347v1
# Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks ###### Abstract Spiking Neural Networks (SNNs) have gained great attraction due to their distinctive properties of low power consumption and fast inference on neuromorphic hardware. As the most effective method to get deep SNNs, ANN-SNN conversion has achieved comparable performance as ANNs on large-scale datasets. Despite this, it requires long time-steps to match the firing rates of SNNs to the activation of ANNs. As a result, the converted SNN suffers severe performance degradation problems with short time-steps, which hamper the practical application of SNNs. In this paper, we theoretically analyze ANN-SNN conversion error and derive the estimated activation function of SNNs. Then we propose the quantization clip-floor-shift activation function to replace the ReLU activation function in source ANNs, which can better approximate the activation function of SNNs. We prove that the expected conversion error between SNNs and ANNs is zero, enabling us to achieve high-accuracy and ultra-low-latency SNNs. We evaluate our method on CIFAR-10/100 and ImageNet datasets, and show that it outperforms the state-of-the-art ANN-SNN and directly trained SNNs in both accuracy and time-steps. To the best of our knowledge, this is the first time to explore high-performance ANN-SNN conversion with ultra-low latency (4 time-steps). Code is available at [https://github.com/putshua/SNN_conversion_QCFS](https://github.com/putshua/SNN_conversion_QCFS) ## 1 Introduction Spiking neural networks (SNNs) are biologically plausible neural networks based on the dynamic characteristic of biological neurons (McCulloch and Pitts, 1943; Izhikevich, 2003). As the third generation of artificial neural networks (Maass, 1997), SNNs have attracted great attention due to their distinctive properties over deep analog neural networks (ANNs) (Roy et al., 2019). Each neuron transmits discrete spikes to convey information when exceeding a threshold. For most SNNs, the spiking neurons will accumulate the current of the last layer as the output within \(T\) inference time steps. The binarized activation has rendered dedicated hardware of neuromorphic computing (Pei et al., 2019; DeBole et al., 2019; Davies et al., 2018). This kind of hardware has excellent advantages in temporal resolution and energy budget. Existing work has shown the potential of tremendous energy saving with considerably fast inference (Stockl and Maass, 2021). In addition to efficiency advantages, the learning algorithm of SNNs has been improved by leaps and bounds in recent years. The performance of SNNs trained by backpropagation through time and ANN-SNN conversion techniques has gradually been comparable to ANNs on large-scale datasets (Fang et al., 2021; Rueckauer et al., 2017). Both techniques benefit from the setting of SNN inference time. Setting longer time-steps in backpropagation can make the gradient of surrogate functions more reliable (Wu et al., 2018; Neftci et al., 2019; Zenke and Vogels, 2021). However, the price is enormous resource consumption during training. Existing platforms such as TensorFlow and PyTorch based on CUDA have limited optimization for SNN training. In contrast, ANN-SNN conversion usually depends on a longer inference time to get comparable accuracy as the original ANN (Sengupta et al., 2019) because it is based on the equivalence of ReLU activation and integrate-and-fire model's firing rate (Cao et al., 2015). 
Although a longer inference time can further reduce the conversion error, it also hampers the practical application of SNNs on neuromorphic chips. The dilemma of ANN-SNN conversion is that there exists a remaining potential term in the conversion theory, which is hard to eliminate in a few time-steps (Rueckauer et al., 2016). Although many methods have been proposed to improve the conversion accuracy, such as weight normalization (Diehl et al., 2015), threshold rescaling (Sengupta et al., 2019), soft-reset (Han and Roy, 2020) and threshold shift (Deng and Gu, 2020), the tens to hundreds of time-steps required in the baseline works are still unbearable. To obtain high-performance SNNs with ultra-low latency (e.g., 4 time-steps), we list the critical errors in ANN-SNN conversion and provide solutions for each error. Our main contributions are summarized as follows: * We go deeper into the errors in ANN-SNN conversion and ascribe them to clipping error, quantization error, and unevenness error. We find that the unevenness error, which is caused by changes in the timing of arriving spikes and has been neglected in previous works, can induce more or fewer spikes than expected. * We propose the quantization clip-floor-shift activation function to replace the ReLU activation function in source ANNs, which better approximates the activation function of SNNs. We prove that the expected conversion error between SNNs and ANNs is zero, indicating that we can achieve high-performance converted SNNs at ultra-low time-steps. * We evaluate our method on CIFAR-10, CIFAR-100, and ImageNet datasets. Compared with both ANN-SNN conversion and backpropagation training methods, the proposed method exceeds state-of-the-art accuracy with fewer time-steps. For example, we reach a top-1 accuracy of 91.18% on CIFAR-10 with an unprecedented 2 time-steps.
## 2 Preliminaries
In this section, we first briefly review the neuron models for SNNs and ANNs. Then we introduce the basic framework for ANN-SNN conversion. **Neuron model for ANNs.** For ANNs, the computations of analog neurons can be simplified as the combination of a linear transformation and a non-linear mapping: \[\mathbf{a}^{l}=h(\mathbf{W}^{l}\mathbf{a}^{l-1}),\ \ \ l=1,2,...,M \tag{1}\] where the vector \(\mathbf{a}^{l}\) denotes the output of all neurons in the \(l\)-th layer, \(\mathbf{W}^{l}\) denotes the weight matrix between layer \(l\) and layer \(l-1\), and \(h(\cdot)\) is the ReLU activation function. **Neuron model for SNNs.** Similar to previous works (Cao et al., 2015; Diehl et al., 2015; Han et al., 2020), we consider the Integrate-and-Fire (IF) model for SNNs. If the IF neurons in the \(l\)-th layer receive the input \(\mathbf{x}^{l-1}(t)\) from the last layer, the temporal potential of the IF neurons can be defined as: \[\mathbf{m}^{l}(t)=\mathbf{v}^{l}(t-1)+\mathbf{W}^{l}\mathbf{x}^{l-1}(t), \tag{2}\] where \(\mathbf{m}^{l}(t)\) and \(\mathbf{v}^{l}(t)\) represent the membrane potential before and after the trigger of a spike at time-step \(t\), and \(\mathbf{W}^{l}\) denotes the weight in the \(l\)-th layer. As soon as any element \(m^{l}_{i}(t)\) of \(\mathbf{m}^{l}(t)\) exceeds the firing threshold \(\theta^{l}\), the neuron will elicit a spike and update the membrane potential \(v^{l}_{i}(t)\). To avoid information loss, we use the "reset-by-subtraction" mechanism (Rueckauer et al., 2017; Han et al., 2020) instead of the "reset-to-zero" mechanism, which means the membrane potential \(v^{l}_{i}(t)\) is subtracted by the threshold value \(\theta^{l}\) if the neuron fires.
Based on the threshold-triggered firing mechanism and the "reset-by-subtraction" of the membrane potential after firing discussed above, we can write the update rule of the membrane potential as: \[\mathbf{s}^{l}(t)=H(\mathbf{m}^{l}(t)-\mathbf{\theta}^{l}), \tag{3}\] \[\mathbf{v}^{l}(t)=\mathbf{m}^{l}(t)-\mathbf{s}^{l}(t)\mathbf{\theta}^{l}. \tag{4}\] Here \(\mathbf{s}^{l}(t)\) refers to the output spikes of all neurons in layer \(l\) at time \(t\), the elements of which equal 1 if there is a spike and 0 otherwise. \(H(\cdot)\) is the Heaviside step function. \(\mathbf{\theta}^{l}\) is the vector of the firing threshold \(\theta^{l}\). Similar to Deng and Gu (2020), we suppose that a postsynaptic neuron in the \(l\)-th layer receives an unweighted postsynaptic potential \(\theta^{l}\) if the presynaptic neuron in the \((l-1)\)-th layer fires a spike, that is: \[\mathbf{x}^{l}(t)=\mathbf{s}^{l}(t)\theta^{l}. \tag{5}\] **ANN-SNN conversion.** The key idea of ANN-SNN conversion is to map the activation value of an analog neuron in the ANN to the firing rate (or average postsynaptic potential) of a spiking neuron in the SNN. Specifically, we can get the potential update equation by combining Equation 2 - Equation 4: \[\mathbf{v}^{l}(t)-\mathbf{v}^{l}(t-1)=\mathbf{W}^{l}\mathbf{x}^{l-1}(t)-\mathbf{s}^{l}(t)\theta^{l}. \tag{6}\] Equation 6 describes the basic function of spiking neurons used in ANN-SNN conversion. By summing Equation 6 from time \(1\) to \(T\) and dividing by \(T\) on both sides, we have: \[\frac{\mathbf{v}^{l}(T)-\mathbf{v}^{l}(0)}{T}=\frac{\mathbf{W}^{l}\sum_{i=1}^{T}\mathbf{x}^{l-1}(i)}{T}-\frac{\sum_{i=1}^{T}\mathbf{s}^{l}(i)\theta^{l}}{T}. \tag{7}\] If we use \(\mathbf{\phi}^{l-1}(T)=\frac{\sum_{i=1}^{T}\mathbf{x}^{l-1}(i)}{T}\) to denote the average postsynaptic potential during the period from 0 to \(T\) and substitute Equation 5 into Equation 7, then we get: \[\mathbf{\phi}^{l}(T)=\mathbf{W}^{l}\mathbf{\phi}^{l-1}(T)-\frac{\mathbf{v}^{l}(T)-\mathbf{v}^{l}(0)}{T}. \tag{8}\] Equation 8 describes the relationship between the average postsynaptic potentials of neurons in adjacent layers. Note that \(\mathbf{\phi}^{l}(T)\geqslant 0\). If we set the initial potential \(\mathbf{v}^{l}(0)\) to zero and neglect the remaining term \(\frac{\mathbf{v}^{l}(T)}{T}\) when the number of simulation time-steps \(T\) is long enough, the converted SNN has nearly the same activation function as the source ANN (Equation 1). However, a high \(T\) causes long inference latency, which hampers the practical application of SNNs. Therefore, this paper aims to implement high-performance ANN-SNN conversion with extremely low latency.
## 3 Conversion error analysis
In this section, we analyze the conversion error between the source ANN and the converted SNN in each layer in detail. In the following, we assume that both the ANN and the SNN receive the same input from layer \(l-1\), that is, \(\mathbf{a}^{l-1}=\mathbf{\phi}^{l-1}(T)\), and then analyze the error in layer \(l\). For simplicity, we use \(\mathbf{z}^{l}=\mathbf{W}^{l}\mathbf{\phi}^{l-1}(T)=\mathbf{W}^{l}\mathbf{a}^{l-1}\) to denote the weighted input from layer \(l-1\) for both the ANN and the SNN. The absolute conversion error is exactly the output of the converted SNN minus the output of the ANN: \[\mathbf{Err}^{l}=\mathbf{\phi}^{l}(T)-\mathbf{a}^{l}=\mathbf{z}^{l}-\frac{\mathbf{v}^{l}(T)-\mathbf{v}^{l}(0)}{T}-h(\mathbf{z}^{l}), \tag{9}\] where \(h(\mathbf{z}^{l})=\text{ReLU}(\mathbf{z}^{l})\).
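To make the error term in Equation 9 concrete, the following is a minimal sketch that simulates one IF layer via Equations 2-5 and compares its average postsynaptic potential \(\mathbf{\phi}^{l}(T)\) with the ANN's ReLU output. Driving the layer with Bernoulli spike trains whose rates match the ANN inputs is an assumption for illustration, as are the layer sizes.

```python
import numpy as np

def simulate_if_layer(W, x_rates, theta, T, v0=0.0, rng=None):
    """Simulate Eqs. (2)-(5): IF neurons with reset-by-subtraction, driven for
    T steps by Bernoulli spike trains whose rates match the ANN inputs."""
    rng = rng or np.random.default_rng(0)
    v = np.full(W.shape[0], v0)              # membrane potential v^l(0)
    out_spikes = np.zeros(W.shape[0])
    for _ in range(T):
        s_in = (rng.random(x_rates.shape) < x_rates).astype(float)  # input spikes
        m = v + W @ (s_in * theta)           # Eq. (2), unweighted PSP = theta
        s = (m >= theta).astype(float)       # Eq. (3), threshold-triggered firing
        v = m - s * theta                    # Eq. (4), reset by subtraction
        out_spikes += s
    return out_spikes / T * theta            # phi^l(T), via Eq. (5)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))
a_prev = rng.random(6)                       # ANN activations, rates in [0, 1]
ann_out = np.maximum(W @ a_prev, 0.0)        # ReLU, Eq. (1)
snn_out = simulate_if_layer(W, a_prev, theta=1.0, T=32, rng=rng)
print("Err^l (Eq. 9):", snn_out - ann_out)
```

For large \(T\) the residual \((\mathbf{v}^{l}(T)-\mathbf{v}^{l}(0))/T\) shrinks and the printed error approaches zero wherever \(\mathbf{z}^{l}>0\), which is exactly the long-latency regime the paper seeks to avoid.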
It can be found from Equation 9 that the conversion error is nonzero if \(\mathbf{v}^{l}(T)-\mathbf{v}^{l}(0)\neq 0\) and \(\mathbf{z}^{l}>0\). In fact, the conversion error is caused by three factors. **Clipping error.** The output \(\mathbf{\phi}^{l}(T)\) of SNNs is in the range \([0,\theta^{l}]\), as \(\mathbf{\phi}^{l}(T)=\frac{\sum_{i=1}^{T}\mathbf{x}^{l}(i)}{T}=\frac{\sum_{i=1}^{T}\mathbf{s}^{l}(i)}{T}\theta^{l}\) (see Equation 5). However, the output \(\mathbf{a}^{l}\) of ANNs is in a much larger range \([0,a_{max}^{l}]\), where \(a_{max}^{l}\) denotes the maximum value of \(\mathbf{a}^{l}\). As illustrated in Figure 1(a), \(\mathbf{a}^{l}\) can be mapped to \(\mathbf{\phi}^{l}(T)\) by the following equation: \[\mathbf{\phi}^{l}(T)=\text{clip}\left(\frac{\theta^{l}}{T}\left\lfloor\frac{\mathbf{a}^{l}T}{\lambda^{l}}\right\rfloor,0,\theta^{l}\right). \tag{10}\] Here the clip function sets the upper bound \(\theta^{l}\) and the lower bound \(0\), \(\lfloor\cdot\rfloor\) denotes the floor function, and \(\lambda^{l}\) represents the actual maximum value of the output \(\mathbf{a}^{l}\) that is mapped to the maximum value \(\theta^{l}\) of \(\mathbf{\phi}^{l}(T)\). Considering that nearly 99.9% of the activations of \(\mathbf{a}^{l}\) in an ANN are in the range \([0,\frac{a_{max}}{3}]\), Rueckauer et al. (2016) suggested choosing \(\lambda^{l}\) according to the 99.9% activation percentile. The activations between \(\lambda^{l}\) and \(a_{max}^{l}\) in the ANN are mapped to the same value \(\theta^{l}\) in the SNN, which causes a conversion error called the clipping error.
\begin{table} \begin{tabular}{l l l l} \hline \hline **Symbol** & **Definition** & **Symbol** & **Definition** \\ \hline \(l\) & Layer index & \(\mathbf{x}^{l}(t)\) & Unweighted PSP\({}^{*}\) \\ \(i\) & Neuron index & \(\mathbf{s}^{l}(t)\) & Output spikes \\ \(\mathbf{W}^{l}\) & Weight & \(\mathbf{\phi}^{l}(T)\) & Average unweighted PSP before time \(T\) \\ \(\mathbf{a}^{l}\) & ANN activation values & \(\mathbf{z}^{l}\) & Weighted input from \(l-1\) layer \\ \(t\) & Time-steps & \(h(\cdot)\) & ReLU function \\ \(T\) & Total time-steps & \(H(\cdot)\) & Heaviside step function \\ \(\mathbf{\theta}^{l}\) & Threshold & \(L\) & Quantization steps for ANN \\ \(\lambda^{l}\) & Trainable threshold in ANN & \(\mathbf{Err}^{l}\) & Conversion error \\ \(\mathbf{m}^{l}(t)\) & Potential before firing & \(\widehat{\mathbf{Err}}^{l}\) & Estimated conversion error \\ \(\mathbf{v}^{l}(t)\) & Potential after firing & \(\mathbf{\varphi}\) & Shift of quantization clip-floor function \\ \hline \hline \end{tabular} * Postsynaptic potential \end{table} Table 1: Summary of notations in this paper.
**Quantization error (flooring error).** The output spikes \(\mathbf{s}^{l}(t)\) are discrete events, thus \(\mathbf{\phi}^{l}(T)\) is discrete with quantization resolution \(\frac{\theta^{l}}{T}\) (see Equation 10). When mapping \(\mathbf{a}^{l}\) to \(\mathbf{\phi}^{l}(T)\), there exists an unavoidable quantization error. For example, as illustrated in Figure 1(a), the activations of the ANN in the range \([\frac{\lambda^{l}}{T},\frac{2\lambda^{l}}{T})\) are mapped to the same value \(\frac{\theta^{l}}{T}\) of the SNN. **Unevenness error.** The unevenness error is caused by the unevenness of the input spikes. If the timing of the arriving spikes changes, the output firing rates may change, which causes a conversion error. There are two situations: more spikes than expected or fewer spikes than expected.
To see this, in the source ANN, we suppose that two analog neurons in layer \(l-1\) are connected to an analog neuron in layer \(l\) with weights 2 and -2, and the output vector \(\mathbf{a}^{l-1}\) of neurons in layer \(l-1\) is \([0.6,0.4]\). Besides, in the converted SNN, we suppose that the two spiking neurons in layer \(l-1\) fire 3 spikes and 2 spikes in 5 time-steps (T=5), respectively, and the threshold \(\theta^{l-1}=1\). Thus, \(\mathbf{\phi}^{l-1}(T)=\frac{\sum_{i=1}^{T}\mathbf{s}^{l-1}(i)}{T}\theta^{l-1}=[0.6,0.4]\). Even though \(\mathbf{\phi}^{l-1}(T)=\mathbf{a}^{l-1}\) and the weights are the same for the ANN and SNN, \(\mathbf{\phi}^{l}(T)\) can be different from \(\mathbf{a}^{l}\) if the timing of the arriving spikes changes. According to Equation 1, the ANN output \(\mathbf{a}^{l}=\mathbf{W}^{l}\mathbf{a}^{l-1}=[2,-2][0.6,0.4]^{T}=0.4\). As for the SNN, supposing that the threshold \(\theta^{l}=1\), there are three possible output firing rates, which are illustrated in Figure 1 (b)-(d). If the two presynaptic neurons fire at \(t=1,3,5\) and \(t=2,4\) (red bars) respectively with weights 2 and -2, the postsynaptic neuron will fire two spikes at \(t=1,3\) (red bars), and \(\mathbf{\phi}^{l}(T)=\frac{\sum_{i=1}^{T}\mathbf{s}^{l}(i)}{T}\theta^{l}=0.4=\mathbf{a}^{l}\). However, if the presynaptic neurons fire at \(t=1,2,3\) and \(t=4,5\), respectively, the postsynaptic neuron will fire four spikes at \(t=1,2,3,4\), and \(\mathbf{\phi}^{l}(T)=0.8>\mathbf{a}^{l}\). If the presynaptic neurons fire at \(t=3,4,5\) and \(t=1,2\), respectively, the postsynaptic neuron will fire only one spike at \(t=5\), and \(\mathbf{\phi}^{l}(T)=0.2<\mathbf{a}^{l}\). Note that the clipping error and quantization error have been proposed in Li et al. (2021). There are interdependences among the above three kinds of errors. Specifically, the unevenness error degenerates to the quantization error if \(\mathbf{v}^{l}(T)\) is in the range \([0,\theta^{l}]\). Assuming that the potential \(\mathbf{v}^{l}(T)\) falls into \([0,\theta^{l}]\) enables us to estimate the activation function of SNNs while ignoring the effect of the unevenness error. Therefore, an estimation of the output value \(\mathbf{\phi}^{l}(T)\) of a converted SNN can be formulated with the combination of the clip function and the floor function, that is: \[\mathbf{\phi}^{l}(T)\approx\theta^{l}\operatorname{clip}\left(\frac{1}{T}\left\lfloor\frac{\mathbf{z}^{l}T+\mathbf{v}^{l}(0)}{\theta^{l}}\right\rfloor,0,1\right). \tag{11}\] The detailed derivation is in the Appendix. With the help of this estimation of the SNN output, the estimated conversion error \(\widehat{\mathbf{Err}}^{l}\) can be derived from Equation 9: \[\widehat{\mathbf{Err}}^{l}=\theta^{l}\operatorname{clip}\left(\frac{1}{T}\left\lfloor\frac{\mathbf{z}^{l}T+\mathbf{v}^{l}(0)}{\theta^{l}}\right\rfloor,0,1\right)-h(\mathbf{z}^{l})\approx\mathbf{Err}^{l}. \tag{12}\]
Figure 1: Conversion error between source ANN and converted SNN. \(s_{1}^{l-1}\) and \(s_{2}^{l-1}\) denote the output spikes of two neurons in layer \(l-1\), and \(s_{1}^{l}\) denotes the output spikes of a neuron in layer \(l\).
## 4 Optimal ANN-SNN conversion
### quantization clip-floor activation function
According to the conversion error of Equation 12, it is natural to think that if the commonly used ReLU activation function \(h(\mathbf{z}^{l})\) is substituted by a clip-floor function with a given quantization step \(L\) (similar to Equation 11), the conversion error at time-steps \(T=L\) will be eliminated.
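To make the shape of this activation concrete, the following is a minimal sketch of the estimated SNN output of Equation 11, which is exactly such a clip-floor staircase; the sample values are illustrative.

```python
import numpy as np

def estimated_snn_activation(z, theta, T, v0=0.0):
    """Estimated phi^l(T) of Eq. (11): theta * clip(floor((z*T + v0)/theta)/T, 0, 1),
    a clipped staircase with T levels that ignores the unevenness error."""
    return theta * np.clip(np.floor((z * T + v0) / theta) / T, 0.0, 1.0)

z = np.linspace(-0.5, 1.5, 9)                        # weighted inputs z^l
print(estimated_snn_activation(z, theta=1.0, T=4))   # plateaus at 0, 1/4, ..., 1
print(np.maximum(z, 0.0))                            # the ANN's ReLU, for contrast
```

Replacing ReLU in the source ANN with a staircase of this form, with \(L\) steps, is precisely what Equation 13 below does.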
Thus, the performance degradation problem at low latency would be solved. As shown in Equation 13, we propose the quantization clip-floor activation function to train ANNs: \[\mathbf{a}^{l}=\tilde{h}(\mathbf{z}^{l})=\lambda^{l}\operatorname{clip}\left(\frac{1}{L}\left\lfloor\frac{\mathbf{z}^{l}L}{\lambda^{l}}\right\rfloor,0,1\right), \tag{13}\] where the hyperparameter \(L\) denotes the number of quantization steps of the ANN, and the trainable \(\lambda^{l}\) determines the maximum value of \(\mathbf{a}^{l}\) in the ANN that is mapped to the maximum of \(\mathbf{\phi}^{l}(T)\) in the SNN. Note that \(\mathbf{z}^{l}=\mathbf{W}^{l}\mathbf{\phi}^{l-1}(T)=\mathbf{W}^{l}\mathbf{a}^{l-1}\). With this new activation function, we can prove that the estimated conversion error between SNNs and ANNs is zero, and we have the following theorem. **Theorem 1**.: _An ANN with activation function (13) is converted to an SNN with the same weights. If \(T=L\), \(\theta^{l}=\lambda^{l}\), and \(\mathbf{v}^{l}(0)=\mathbf{0}\), then:_ \[\widehat{\mathbf{Err}}^{l}=\mathbf{\phi}^{l}(T)-\mathbf{a}^{l}=\mathbf{0}. \tag{14}\] Proof.: According to Equation 12 and the conditions \(T=L\), \(\theta^{l}=\lambda^{l}\), \(\mathbf{v}^{l}(0)=\mathbf{0}\), we have \(\widehat{\mathbf{Err}}^{l}=\mathbf{\phi}^{l}(T)-\mathbf{a}^{l}=\theta^{l}\operatorname{clip}\left(\frac{1}{T}\left\lfloor\frac{\mathbf{z}^{l}T+\mathbf{v}^{l}(0)}{\theta^{l}}\right\rfloor,0,1\right)-\lambda^{l}\operatorname{clip}\left(\frac{1}{L}\left\lfloor\frac{\mathbf{z}^{l}L}{\lambda^{l}}\right\rfloor,0,1\right)=0\). Theorem 1 implies that if the number of time-steps \(T\) of the converted SNN is the same as the number of quantization steps \(L\) of the source ANN, the conversion error is zero. An example is illustrated in Figure 2(a), where \(T=L=4\) and \(\theta^{l}=\lambda^{l}\). The red curve presents the estimated output \(\mathbf{\phi}^{l}(T)\) of the converted SNN with respect to different inputs \(\mathbf{z}^{l}\), while the green curve represents the output \(\mathbf{a}^{l}\) of the source ANN with respect to different inputs \(\mathbf{z}^{l}\). As the two curves are the same, the estimated conversion error \(\widehat{\mathbf{Err}}^{l}\) is zero. Nevertheless, in practical applications, we care about the performance of SNNs at different time-steps. There is no guarantee that the conversion error is zero when \(T\) is not equal to \(L\). As illustrated in Figure 2(b), where \(L=4\) and \(T=8\), we can find that the conversion error is greater than zero for some \(\mathbf{z}^{l}\). This error propagates layer by layer and eventually degrades the accuracy of the converted SNN. One way to solve this problem is to train multiple source ANNs with different quantization steps and then convert them to SNNs with different time-steps, but this comes at a considerable cost. In the next section, we propose the quantization clip-floor activation function with a shift term to solve this problem. Such an approach can achieve high accuracy for different time-steps without extra computational cost. ### quantization clip-floor-shift activation function We propose the quantization clip-floor-shift activation function to train ANNs. \[\mathbf{a}^{l}=\widehat{h}(\mathbf{z}^{l})=\lambda^{l}\operatorname{clip}\left(\frac{1}{L}\left\lfloor\frac{\mathbf{z}^{l}L}{\lambda^{l}}+\mathbf{\varphi}\right\rfloor,0,1\right).
\tag{15}\]
Figure 2: Comparison of the SNN output \(\mathbf{\phi}^{l}(T)\) and the ANN output \(\mathbf{a}^{l}\) with the same input \(\mathbf{z}^{l}\).
Compared with Equation 13, there is a hyperparameter vector \(\mathbf{\varphi}\) that controls the shift of the activation function. When \(L\neq T\), we cannot guarantee that the conversion error is 0. However, we can estimate the expectation of the conversion error. Similar to Deng & Gu (2020), we assume that \(z_{i}^{l}\) is uniformly distributed within the intervals \([(t-1)\lambda^{l}/T,t\lambda^{l}/T]\) and \([(\ell-1)\lambda^{l}/L,\ell\lambda^{l}/L]\) for \(t=1,2,...,T\) and \(\ell=1,2,...,L\). Then we have the following theorem. **Theorem 2**.: _An ANN with activation function (15) is converted to an SNN with the same weights. If \(\theta^{l}=\lambda^{l}\) and \(\mathbf{v}^{l}(0)=\theta^{l}\mathbf{\varphi}\), then for arbitrary \(T\) and \(L\), the expectation of the conversion error reaches \(\mathbf{0}\) when the shift term \(\mathbf{\varphi}\) in the source ANN is \(\frac{1}{2}\):_ \[\forall\ T,L\quad\mathbb{E}_{z}\left(\widehat{\mathbf{Err}}^{l}\right)\Big{|}_{\mathbf{\varphi}=\frac{1}{2}}=\mathbf{0}. \tag{16}\] The proof is in the Appendix. Theorem 2 indicates that the shift term \(\frac{1}{2}\) optimizes the expectation of the conversion error. By comparing Figure 2(b) and Figure 2(c), we can find that when the shift term \(\mathbf{\varphi}=\mathbf{0.5}\) is added, the mean conversion error reaches zero, even though \(L\neq T\). These results indicate that we can achieve high-performance converted SNNs at ultra-low time-steps. \(L\) is the only undetermined hyperparameter of the quantization clip-floor-shift activation. When \(T=L\), the conversion error reaches zero, so it is natural to think that the parameter \(L\) should be set as small as possible to get better performance at low time-steps. However, too coarse a quantization of the activation function will decrease the model capacity and lead to accuracy loss when the number of time-steps is relatively large. Choosing a proper \(L\) is a trade-off between the accuracy at low latency and the best accuracy of SNNs. We further analyze the effect of the quantization step \(L\) in the experiment section. ### algorithm for training quantization clip-floor-shift activation function Training an ANN with the quantization clip-floor-shift activation instead of ReLU is also a tough problem. To directly train the ANN, we use the straight-through estimator (Bengio et al., 2013) for the derivative of the floor function, that is, \(\frac{\mathrm{d}\lfloor x\rfloor}{\mathrm{d}x}=1\). The overall derivation rule is given in Equation 17: \[\frac{\partial\widehat{h}_{i}(\mathbf{z}^{l})}{\partial z_{i}^{l}}=\begin{cases}1,&-\frac{\lambda^{l}}{2L}<z_{i}^{l}<\lambda^{l}-\frac{\lambda^{l}}{2L}\\ 0,&\text{otherwise}\end{cases},\qquad\frac{\partial\widehat{h}_{i}(\mathbf{z}^{l})}{\partial\lambda^{l}}=\begin{cases}\frac{\widehat{h}_{i}(\mathbf{z}^{l})-z_{i}^{l}}{\lambda^{l}},&-\frac{\lambda^{l}}{2L}\leqslant z_{i}^{l}<\lambda^{l}-\frac{\lambda^{l}}{2L}\\ 0,&z_{i}^{l}<-\frac{\lambda^{l}}{2L}\\ 1,&z_{i}^{l}\geqslant\lambda^{l}-\frac{\lambda^{l}}{2L}\end{cases} \tag{17}\] Here \(z_{i}^{l}\) is the \(i\)-th element of \(\mathbf{z}^{l}\). Then we can train the ANN with the quantization clip-floor-shift activation using the stochastic gradient descent algorithm (Bottou, 2012).
## 5 Related Work
The study of ANN-SNN conversion was first launched by Cao et al. (2015). Then Diehl et al. (2015) converted a three-layer CNN to an SNN using data-based and model-based normalization.
To obtain high-performance SNNs for complex datasets and deeper networks, Rueckauer et al. (2016) and Sengupta et al. (2019) proposed more accurate scaling methods to normalize weights and scale thresholds respectively, which were later proved to be equivalent (Ding et al., 2021). Nevertheless, the converted deep SNN requires hundreds of time-steps to get accurate results, due to the conversion errors analyzed in Sec. 3. To address the potential information loss, Rueckauer et al. (2016) and Han et al. (2020) suggested using "reset-by-subtraction" neurons rather than "reset-to-zero" neurons. Recently, many methods have been proposed to eliminate the conversion error. Rueckauer et al. (2016) recommended the 99.9th percentile of activations as scale factors, and Ho & Chang (2020) added a trainable clipping layer. Besides, Han et al. (2020) rescaled the SNN thresholds to avoid the improper activation of spiking neurons. Massa et al. (2020) and Singh et al. (2021) evaluated the performance of converted SNNs on the Loihi Neuromorphic Processor. Our work shares similarities with Deng & Gu (2020); Li et al. (2021), which also shed light on the conversion error. Deng & Gu (2020) minimized the layer-wise error by introducing extra bias in addition to the converted SNN biases. Li et al. (2021) further proposed calibration for weights and biases using quantized fine-tuning. They got good results with 16 and 32 time-steps, without trials at more extreme time-steps. In comparison, our work aims to fit the ANN to the SNN with techniques that eliminate the aforementioned conversion errors. The end-to-end training of quantization layers is implemented to get better overall performance. Our shift correction can lead to a single SNN that performs well at both ultra-low and large time-steps. Maintaining SNN performance within extremely few time-steps is difficult even for supervised learning methods like backpropagation through time (BPTT). BPTT usually requires fewer time-steps because of thorough training, yet at the cost of heavy GPU computation (Wu et al., 2018, 2019; Lee et al., 2019; Neftci et al., 2019; Lee et al., 2020; Zenke & Vogels, 2021). The timing-based backpropagation methods (Bohte et al., 2002; Tavanaei et al., 2019; Kim et al., 2020) could train SNNs over a very short temporal window, e.g., over 5-10 time-steps. However, they are usually limited to simple datasets like MNIST (Kheradpisheh & Masquelier, 2020) and CIFAR10 (Zhang & Li, 2020). Rathi et al. (2019) shortened the simulation steps by initializing the SNN with a conversion method and then tuning the SNN with STDB. In this paper, the proposed method achieves high-performance SNNs with ultra-low latency (4 time-steps).
## 6 Experiments
In this section, we validate the effectiveness of our method and compare it with other state-of-the-art approaches for image classification tasks on CIFAR-10 (LeCun et al., 1998), CIFAR-100 (Krizhevsky et al., 2009), and ImageNet datasets (Deng et al., 2009). Similar to previous works, we utilize VGG-16 (Simonyan & Zisserman, 2014), ResNet-18 (He et al., 2016), and ResNet-20 network structures for source ANNs. We compare our method with the state-of-the-art ANN-SNN conversion methods, including Hybrid-Conversion (HC) from Rathi et al. (2019), RMP from Han et al. (2020), TSC from Han & Roy (2020), RNL from Ding et al. (2021), ReLUThreshold-Shift (RTS) from Deng & Gu (2020), and SNN Conversion with Advanced Pipeline (SNNC-AP) from Li et al. (2021).
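Before turning to the comparisons, the following is a minimal sketch of how the quantization clip-floor-shift activation of Equation 15 (with \(\mathbf{\varphi}=1/2\)) can be trained with the straight-through floor of Equation 17. The module name and initial \(\lambda^{l}\) value are illustrative assumptions, not the released QCFS code.

```python
import torch
import torch.nn as nn

class FloorSTE(torch.autograd.Function):
    """Floor with a straight-through gradient, d(floor(x))/dx = 1 (Eq. 17)."""
    @staticmethod
    def forward(ctx, x):
        return torch.floor(x)
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out

class QCFS(nn.Module):
    """Quantization clip-floor-shift activation of Eq. (15) with shift 1/2."""
    def __init__(self, L, lam_init=8.0):
        super().__init__()
        self.L = L
        self.lam = nn.Parameter(torch.tensor(lam_init))  # trainable lambda^l
    def forward(self, z):
        q = FloorSTE.apply(z * self.L / self.lam + 0.5) / self.L
        return self.lam * torch.clamp(q, 0.0, 1.0)

act = QCFS(L=4)
z = torch.randn(5, requires_grad=True)
act(z).sum().backward()   # gradients flow through the clip and the STE floor
print(z.grad, act.lam.grad)
```

Routing only the floor through the straight-through estimator lets autograd reproduce the \(\partial\widehat{h}/\partial\lambda^{l}\) cases of Equation 17: in the middle region the derivative is \((\widehat{h}_{i}-z_{i}^{l})/\lambda^{l}\), and it is 0 and 1 below and above the clip bounds, respectively.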
Comparisons with directly trained SNN methods are also included to demonstrate the advantage of low-latency inference, including HybridConversion-STDB (HC-STDB) from Rathi et al. (2019), STBP from Wu et al. (2018), DirectTraining (DT) from Wu et al. (2019), and TSSL from Zhang & Li (2020). The details of the proposed ANN-SNN algorithm and training configurations are provided in the Appendix. ### Test accuracy of ANN with quantization clip-floor-shift activation We first compare the performance of ANNs with quantization clip-floor activation (green curve), ANNs with quantization clip-floor-shift activation (blue curve), and original ANNs with ReLU activation (black dotted line). Figure 3(a)-(d) report the results for VGG-16 on CIFAR-10, ResNet-20 on CIFAR-10, VGG-16 on CIFAR-100 and ResNet-20 on CIFAR-100. The performance of ANNs with quantization clip-floor-shift activation is better than that of ANNs with quantization clip-floor activation. These two ANNs achieve the same performance as the original ANNs with ReLU activation when \(L>4\). These results demonstrate that our quantization clip-floor-shift activation function hardly affects the performance of ANNs.
Figure 3: Comparison of ANN accuracy.
### Comparison with the state-of-the-art Table 2 compares our method with the state-of-the-art ANN-SNN conversion methods on CIFAR-10. As for low-latency inference (T \(\leq 64\)), our model outperforms all the other methods with the same time-step setting. For T \(=32\), the accuracy of our method is slightly better than that of the ANN (95.54% vs. 95.52%), whereas the RMP, RTS, RNL, and SNNC-AP methods have accuracy losses of 33.3%, 19.48%, 7.42%, and 2.01%. Moreover, we achieve an accuracy of 93.96% using only 4 time-steps, which is 8 times faster than SNNC-AP, which takes 32 time-steps. For ResNet-20, we achieve an accuracy of 83.75% with 4 time-steps. Notably, our ultra-low-latency performance is comparable with other state-of-the-art supervised training methods, as shown in Table S3 of the Appendix. We further test the performance of our method on a large-scale dataset. Table 3 reports the results on ImageNet; our method also outperforms the others in terms of both high accuracy and ultra-low latency. For ResNet-34, the accuracy of the proposed method is 4.83% higher than SNNC-AP and 69.28% higher than RTS when \(T=32\). When the number of time-steps is 16, we still achieve an accuracy of 59.35%. For VGG-16, the accuracy of the proposed method is 4.83% higher than SNNC-AP and 68.356% higher than RTS when \(T=32\). When the number of time-steps is 16, we still achieve an accuracy of 50.97%. These results demonstrate that our method outperforms the previous conversion methods. More experimental results on CIFAR-100 are in Table S4 of the Appendix. ### Comparison of quantization clip-floor and quantization clip-floor-shift Here we further compare the performance of SNNs converted from ANNs with quantization clip-floor activation and from ANNs with quantization clip-floor-shift activation. In Sec. 4, we prove that the expectation of the conversion error reaches 0 with quantization clip-floor-shift activation, no matter whether \(T\) and \(L\) are the same or not. To verify this, we set \(L\) to 4 and train ANNs with quantization clip-floor activation and quantization clip-floor-shift activation, respectively. Figure 4 shows how the accuracy of the converted SNNs changes with respect to the time-steps \(T\).
The accuracy of the SNN converted from the ANN with quantization clip-floor activation (green curve) first increases and then decreases rapidly as the number of time-steps increases, because we cannot guarantee that the conversion error is zero when \(T\) is not equal to \(L\). The best performance is still lower than that of the source ANN (green dotted line). In contrast, the accuracy of the SNN converted from the ANN with quantization clip-floor-shift activation (blue curve) increases as \(T\) increases. It reaches the same accuracy as the source ANN (blue dotted line) when the number of time-steps is larger than 16.
\begin{table} \begin{tabular}{l l l l l l l l l l} \hline \hline Architecture & Method & ANN & T=2 & T=4 & T=8 & T=16 & T=32 & T=64 & T\(\geq\)512 \\ \hline \multirow{6}{*}{VGG-16} & RMP & 93.63\% & - & - & - & - & 60.30\% & 90.35\% & 93.63\% \\ \cline{2-10} & TSC & 93.63\% & - & - & - & - & - & 92.79\% & 93.63\% \\ \cline{2-10} & RTS & 95.72\% & - & - & - & - & 76.24\% & 90.64\% & 95.73\% \\ \cline{2-10} & RNL & 92.82\% & - & - & - & 57.90\% & 85.40\% & 91.15\% & 92.95\% \\ \cline{2-10} & SNNC-AP & 95.72\% & - & - & - & - & 93.71\% & 95.14\% & 95.79\% \\ \cline{2-10} & **Ours** & 95.52\% & 91.18\% & 93.96\% & 94.95\% & 95.40\% & 95.54\% & 95.55\% & 95.59\% \\ \hline \multirow{3}{*}{ResNet-20} & RMP & 91.47\% & - & - & - & - & - & - & 91.36\% \\ \cline{2-10} & TSC & 91.47\% & - & - & - & - & - & 69.38\% & 91.42\% \\ \cline{2-10} & **Ours** & 91.77\% & 73.20\% & 83.75\% & 89.55\% & 91.62\% & 92.24\% & 92.35\% & 92.41\% \\ \hline \multirow{3}{*}{ResNet-18} & RTS\({}^{1}\) & 95.46\% & - & - & - & - & 84.06\% & 92.48\% & 94.42\% \\ \cline{2-10} & SNNC-AP\({}^{1}\) & 95.46\% & - & - & - & - & 94.78\% & 95.30\% & 95.45\% \\ \cline{2-10} & **Ours** & 96.04\% & 75.44\% & 90.43\% & 94.82\% & 95.92\% & 96.08\% & 96.06\% & 96.06\% \\ \hline \hline \end{tabular} \({}^{1}\) RTS and SNNC-AP use an altered ResNet-18, while ours uses the standard ResNet-18. \end{table} Table 2: Comparison between the proposed method and previous works on the CIFAR-10 dataset.
Figure 4: Comparison of quantization clip-floor activation with/without the shift term on the CIFAR-10/100 datasets.
### Effect of quantization steps L In our method, the quantization step \(L\) is a hyperparameter that affects the accuracy of the converted SNN. To analyze the effect of \(L\) and better determine its optimal value, we train VGG-16/ResNet-20 networks with quantization clip-floor-shift activation using different quantization steps \(L\), including 2, 4, 8, 16 and 32, and then convert them to SNNs. The experimental results are shown in Table S2 and Figure 5, where the black dotted line denotes the ANN accuracy and the colored curves represent the accuracy of the converted SNN. In order to balance the trade-off between low latency and high accuracy, we evaluate the performance of the converted SNN mainly in two aspects. First, we focus on the SNN accuracy at ultra-low latency (within 4 time-steps). Second, we consider the best accuracy of the SNN. It is clear that the SNN accuracy at ultra-low latency decreases as \(L\) increases. However, an overly small \(L\) will decrease the model capacity and further lead to accuracy loss. When \(L=2\), there is a clear gap between the best accuracy of the SNN and that of the source ANN. The best accuracy of the SNN approaches that of the source ANN when \(L>4\). In conclusion, the setting of the parameter \(L\) mainly depends on whether the aim is low latency or best accuracy.
The recommended quantization step \(L\) is 4 or 8, which leads to high-performance converted SNNs at both small and very large time-steps.
## 7 Discussion and conclusion
In this paper, we present an ANN-SNN conversion method that enables high-accuracy and ultra-low-latency deep SNNs. We propose the quantization clip-floor-shift activation to replace the ReLU activation, which hardly affects the performance of ANNs and is closer to the activation of SNNs. Furthermore, we prove that the expected conversion error is zero, no matter whether the number of time-steps of the SNN and the number of quantization steps of the ANN are the same or not. We achieve state-of-the-art accuracy with fewer time-steps on CIFAR-10, CIFAR-100, and ImageNet datasets. Our results can benefit implementations on neuromorphic hardware and pave the way for the large-scale application of SNNs. Different from the work of Deng and Gu (2020), which adds a bias to the converted SNN to shift the theoretical ANN-SNN curve and minimize the quantization error, we add the shift term to the quantization clip-floor activation function and use this quantization clip-floor-shift function to train the source ANN. We show that the shift term can overcome the performance degradation problem when the time-steps and the quantization steps are not matched. Due to the unevenness error, there still exists a gap between ANN accuracy and SNN accuracy, even when \(L=T\). Moreover, it is hard to achieve high-performance ANN-SNN conversion when the number of time-steps \(T=1\). All these problems deserve further research. One advantage of conversion-based methods is that they can reduce the overall computing cost while maintaining performance comparable to the source ANN. Combining the conversion-based methods with model compression may help significantly reduce the neuron activity and thus reduce energy consumption without suffering from accuracy loss (Kundu et al., 2021; Rathi and Roy, 2021), which is a promising direction.
Figure 5: Influence of different quantization steps.
\begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline Architecture & Method & ANN & T=16 & T=32 & T=64 & T=128 & T=256 & T\(\geq\)1024 \\ \hline \multirow{5}{*}{ResNet-34} & RMP & 70.64\% & - & - & - & - & - & 65.47\% \\ \cline{2-9} & TSC & 70.64\% & - & - & - & - & 61.48\% & 65.10\% \\ \cline{2-9} & RTS & 75.66\% & - & 0.09\% & 0.12\% & 3.19\% & 47.11\% & 75.08\% \\ \cline{2-9} & SNNC-AP & 75.66\% & - & 64.54\% & 71.12\% & 73.45\% & 74.61\% & 75.45\% \\ \cline{2-9} & **Ours** & 74.32\% & 59.35\% & 69.37\% & 72.35\% & 73.15\% & 73.37\% & 73.39\% \\ \hline \multirow{5}{*}{VGG-16} & RMP & 73.49\% & - & - & - & - & 48.32\% & 73.09\% \\ \cline{2-9} & TSC & 73.49\% & - & - & - & - & 69.71\% & 73.46\% \\ \cline{2-9} & RTS & 75.36\% & - & 0.114\% & 0.118\% & 0.122\% & 1.81\% & 73.88\% \\ \cline{2-9} & SNNC-AP & 75.36\% & - & 63.64\% & 70.69\% & 73.32\% & 74.23\% & 75.32\% \\ \cline{2-9} & **Ours** & 74.29\% & 50.97\% & 68.47\% & 72.85\% & 73.97\% & 74.22\% & 74.32\% \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison between the proposed method and previous works on the ImageNet dataset.
## Acknowledgement
This work was supported by the National Natural Science Foundation of China under contracts No.62176003 and No.62088102.
2307.08934
Multi-stage Neural Networks: Function Approximator of Machine Precision
Deep learning techniques are increasingly applied to scientific problems, where the precision of networks is crucial. Despite being deemed universal function approximators, neural networks, in practice, struggle to reduce prediction errors below $O(10^{-5})$ even with large network size and extended training iterations. To address this issue, we developed multi-stage neural networks, which divide the training process into different stages, with each stage using a new network that is optimized to fit the residue from the previous stage. Across successive stages, the residue magnitudes decrease substantially and follow an inverse power-law relationship with the residue frequencies. The multi-stage neural networks effectively mitigate the spectral biases associated with regular neural networks, enabling them to capture the high-frequency features of target functions. We demonstrate that the prediction error from multi-stage training for both regression problems and physics-informed neural networks can nearly reach the machine precision $O(10^{-16})$ of double floating point within a finite number of iterations. Such levels of accuracy are rarely attainable using single neural networks alone.
Yongji Wang, Ching-Yao Lai
2023-07-18T02:47:32Z
http://arxiv.org/abs/2307.08934v1
# Multi-stage Neural Networks: Function Approximator of Machine Precision

###### Abstract

Deep learning techniques are increasingly applied to scientific problems, where the precision of networks is crucial. Despite being deemed universal function approximators, neural networks in practice struggle to reduce prediction errors below \(O(10^{-5})\), even with large network sizes and extended training iterations. To address this issue, we developed multi-stage neural networks, which divide the training process into different stages, with each stage using a new network that is optimized to fit the residue from the previous stage. Across successive stages, the residue magnitudes decrease substantially and follow an inverse power-law relationship with the residue frequencies. The multi-stage neural networks effectively mitigate the spectral biases associated with regular neural networks, enabling them to capture the high-frequency features of target functions. We demonstrate that the prediction error from multi-stage training, for both regression problems and physics-informed neural networks, can nearly reach the machine precision \(O(10^{-16})\) of double-precision floating point within a finite number of iterations. Such levels of accuracy are rarely attainable using single neural networks alone.

**keywords**: scientific machine learning; neural networks; physics-informed neural networks; multi-stage training

## 1 Introduction

Deep learning techniques [1] have been well developed in the fields of computer vision [2; 3] and natural language processing [4; 5; 6]. More recently, neural networks have been increasingly applied to the mathematical and physical sciences [7; 8; 9], where the demand for precision is high. In particular, physics-informed neural networks (PINNs) [10; 11] have emerged as a new class of numerical solver for partial differential equations, where computing high-precision solutions is an intrinsic requirement of the method.

Neural networks have been proven to be universal function approximators [12; 13]. In practice, however, neural network training often falls into local minima [14; 15], causing the training loss to plateau after a certain number of iterations \(n_{iters}\). This issue causes the failure modes of PINNs [15]. Advanced methods focusing on different aspects, such as activation function selection [16; 17], network configuration [18; 19; 20], optimization techniques [21; 22], trainable weights [23], and loss functions [24; 25], have been developed to effectively enhance the convergence rate of the loss function for various problems. However, few of these methods manage to reduce the training error below \(O(10^{-5})\). In contrast, classical numerical methods (e.g., finite difference) can systematically enhance a solution's accuracy by simply reducing the grid size [26]. This is a major shortcoming of neural networks for solving many problems within the mathematical and physical sciences.

In this work, we propose multi-stage neural networks, which effectively address this limitation. Our method involves dividing the network training into multiple stages, where each stage incorporates a separate neural network. The settings of the network in a given stage are optimized based on the residue from the preceding stage. By executing the training stage by stage, we significantly enhance the convergence rate, ensuring that it remains consistently high throughout the iterations.
As a result, the combined neural networks from different stages can approximate the target function with remarkable accuracy, with the error approaching the machine precision \(O(10^{-16})\) of double-precision floating-point numbers.

We begin with the introduction of the multi-stage neural network for regression problems in Section 2. By exploring the limitations of classical neural network training, we highlight the benefits of multi-stage training in overcoming these constraints. We then propose and substantiate the optimal settings for each training stage. In Section 3, we extend the method to physics-informed neural networks (PINNs) for solving differential equations. Unlike regression problems, the optimal settings for PINNs in each stage are implicitly tied to the equation residues of previous stages. Both a theoretical investigation and practical algorithmic solutions are presented to address this challenge. Additional techniques that can expedite the multi-stage training for PINNs are also discussed. In Section 4, we generalize the multi-stage training scheme to solve combined-forward-and-inverse problems, which are of great importance in the mathematical and physical sciences. Lastly, we provide further discussions on the challenges and potential development of the MSNN method in Section 5 and conclude the paper in Section 6.

## 2 Multi-stage training scheme for regression problems

We first illustrate the multi-stage idea with regression problems, which involve predicting a continuous output variable \(u\) as a function of the input variable \(x\). We train a neural network that represents \(u(x)\) to fit \(N_{d}\) data points, denoted \((x_{i},u_{i})\). The loss function for a regression problem is typically the mean squared error (MSE), defined as \[\mathcal{L}=\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}[u(x_{i})-u_{i}]^{2}\,. \tag{2.1}\] In this study, we consider all training data to be noise-free.

### Limitation of regular neural network training

To illustrate the limits of a neural network's function approximation capacity, we consider a target function \[u_{g}(x)=\sin(2x+1)+0.2e^{1.3x}\,, \tag{2.2}\] and create training data by sampling 300 noise-free data points from it, uniformly distributed within the domain \(x\in[-1,1]\). To fit the training data, we create a fully-connected neural network made of three hidden layers with 20 units in each layer and use the hyperbolic tangent as the activation function for each unit. Using the Adam optimizer [27], figure 1\((a)\) shows that the trained neural network \(u_{0}(x)\) captures the target function \(u_{g}(x)\) well. During the iterations, the training loss \(\mathcal{L}\), the mean squared error (MSE) between the data and the network, decreases significantly at the early stage (figure 1\(b\)). However, after 5000 iterations, it reaches a plateau around \(O(10^{-7})\) with a very small convergence rate. The error function \(e_{1}(x)\) between \(u_{g}\) and \(u_{0}\) across the training domain is thus trapped around \(10^{-4}\) (inset of figure 1\(a\)). Further experiments, elucidated in Appendix A, affirm that this plateau value of the error remains consistent even with larger networks and additional data, and is not optimizer-specific.

Neural networks are known for their spectral biases [28], also referred to as the frequency principle [29]. Utilizing the tool of neural tangent kernels [30], prior studies [31; 32] demonstrated that a standard multi-layer neural network struggles to learn the high frequencies of target functions, in both theory and practice.
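To make this baseline concrete, the following is a minimal sketch (in PyTorch; the learning rate and iteration count are illustrative choices, not taken from the paper) of the single-stage fit described above: a three-hidden-layer tanh network trained with Adam on the MSE loss (2.1) against 300 samples of (2.2).

```python
# Minimal single-stage baseline (illustrative hyper-parameters).
import torch

torch.manual_seed(0)
x = torch.linspace(-1.0, 1.0, 300).unsqueeze(1)        # 300 uniform points in [-1, 1]
u_g = torch.sin(2 * x + 1) + 0.2 * torch.exp(1.3 * x)  # target function (2.2)

net = torch.nn.Sequential(                             # 3 hidden layers x 20 tanh units
    torch.nn.Linear(1, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(20000):
    opt.zero_grad()
    loss = torch.mean((net(x) - u_g) ** 2)             # MSE loss (2.1)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.3e}")                 # typically plateaus near O(1e-7)
```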
The plateau of the training loss corresponds to a mismatch between the trained network \(u_{0}(x)\) and the target function \(u_{g}(x)\) at high frequencies. Figure 1\((b)\) demonstrates that the error function \(e_{1}(x)=u_{g}(x)-u_{0}(x)\) within the domain is indeed a high-frequency function.

### Key settings of multi-stage training scheme

Since a single neural network struggles with learning the high frequencies of the target function, an intuitive approach is to train a second neural network to capture the error function \(e_{1}(x)\), or the residue, between the training data and the first trained network [33]. The original training data from (2.2) is denoted by \((x^{(i)},u_{g}^{(i)})\). The training data for the second neural network would then be \((x^{(i)},e_{1}(x^{(i)}))\), where \(e_{1}(x^{(i)})\) denotes the error of the network at \(x^{(i)}\). At this point, extra care should be taken when setting up the second neural network, particularly concerning two key aspects.

Figure 1: **Comparison of single-stage with multi-stage training**. \((a)\) Fitting of a neural network \(u_{0}(x)\) with tanh activation function to the data from (2.2). \((b)\) Fitting of the second-stage neural network to the error \(e_{1}(x)\) between the data from (2.2) and the first-stage trained network \(u_{0}(x)\) shown in \((a)\). \((c)\) The error \(e_{2}(x)\) between the data and the sum of the two-stage networks, which reaches the machine precision of a single float (32-bit). \((d)\) Comparison of the loss convergence between single-stage training (pink and red) and two-stage training (black). For single-stage training, the convergence rate of the loss suddenly drops (for Adam) after the loss reaches \(O(10^{-6})\), or training terminates (for L-BFGS). For two-stage training, even with fewer weights and biases, the convergence rate is significantly faster than for single-stage training.

#### 2.2.1 Magnitude of the second neural network

Considering that the original training data has a magnitude of \(O(1)\), the training data for the second neural network, which is the residue \(e_{1}(x)\), would be much smaller than 1 (figure 1\(b\)). We observe that a neural network employing regular weight initialization methods, such as Xavier [34], often struggles to capture training data whose magnitude is significantly larger or smaller than 1 (see Appendix B). A straightforward solution to this issue is to normalize the training data by its root mean square value \(\epsilon_{1}\), defined as \[\epsilon_{1}=\sqrt{\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}[e_{1}(x^{(i)})]^{2}}=\sqrt{\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}[u_{g}^{(i)}-u_{0}(x^{(i)})]^{2}}\,. \tag{2.3}\] The normalized training data for the second neural network then becomes \((x^{(i)},e_{1}(x^{(i)})/\epsilon_{1})\). Denoting the second trained network as \(u_{1}(x)\), the combined network for the original data becomes \[u_{c}^{(1)}(x)=u_{0}(x)+\epsilon_{1}u_{1}(x)\,. \tag{2.4}\] Subsequently, we can continue training a third or even further neural networks to reach higher accuracy. The training data for the \((n+1)\)-th neural network \(u_{n}\) is the residue \(e_{n}\) between the original training data \(u_{g}\) and the output of the previously combined \(n\) neural networks, \(u_{c}^{(n-1)}(x^{(i)})\), normalized by its own magnitude (root mean square value) \(\epsilon_{n}\), namely \((x^{(i)},e_{n}(x^{(i)})/\epsilon_{n})\).
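As a sketch of how a single extra stage could be wired up (reusing `net`, `x`, and `u_g` from the previous snippet; the second-stage architecture here is illustrative), the residue is normalized by its RMS value (2.3) before being fitted, and the combined model follows (2.4):

```python
import torch

with torch.no_grad():
    e1 = u_g - net(x)                        # residue of the first stage
    eps1 = torch.sqrt(torch.mean(e1 ** 2))   # RMS magnitude eps_1, eq. (2.3)
target1 = (e1 / eps1).detach()               # normalized residue, back to O(1)

net1 = torch.nn.Sequential(                  # second-stage network u_1
    torch.nn.Linear(1, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)
opt1 = torch.optim.Adam(net1.parameters(), lr=1e-3)

for it in range(20000):
    opt1.zero_grad()
    loss1 = torch.mean((net1(x) - target1) ** 2)
    loss1.backward()
    opt1.step()

def u_combined(xq):
    """Combined two-stage model, eq. (2.4)."""
    return net(xq) + eps1 * net1(xq)
```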
The final model that combines all \((n+1)\) neural networks then reads \[u_{c}^{(n)}(x)=\sum_{j=0}^{n}\epsilon_{j}u_{j}(x)\,, \tag{2.5}\] where \(\epsilon_{j}\) stands for the magnitude of the \(j\)-th neural network. When the original training data \(u_{g}\) is normalized, \(\epsilon_{0}\) is set to 1.

#### 2.2.2 Frequency of the second neural network

Even with normalization, the second neural network, if initialized with regular weights, could still struggle to fit the high-frequency data due to the inherent spectral biases of neural networks. To illustrate this, we consider a target function \[u(x)=\left(1-\frac{x^{2}}{2}\right)\cos\left[m\left(x+0.5x^{3}\right)\right], \tag{2.6}\] with \(m\) a free parameter related to the frequency of the function. Figure 2 shows the function (2.6) for \(m=3\), \(15\), and \(30\). For each \(m\), we generate \(300\) sample points \((x_{i},u_{i})\) that satisfy (2.6) as our training data, with \(x_{i}\) uniformly distributed in the domain \([-1,1]\). Figure 2 shows that the neural network, using regular weight initialization, fits the data well for \(m=3\), partially misses the data for \(m=15\), and completely fails to fit the data for \(m=30\).

Figure 2: **Spectral biases of neural networks**. \((a)\) Fitting of neural networks with tanh activation function to the data from (2.6) for different \(m\). Under regular settings, neural networks have difficulty fitting high-frequency functions. \((b)\) Frequency domain of the function (2.6) for \(m=30\), with the dominant frequency \(f_{d}=5.5\). \((c)\) Derivative \(du/dx\) of the function (2.6) for \(m=30\), which scales as \(O(2\pi f_{d})\). \((d)\) Schematic diagram of a single-hidden-layer neural network. \((e)\) Comparison between single-neuron outputs for different weights \(w^{(0)}\) within the tanh activation function and the function (2.6) for \(m=30\). To capture high-frequency functions, the weight within the activation function needs to increase from \(O(1)\) to \(O(2\pi f_{d})\).

To understand the challenge in fitting high-frequency data, consider a shallow neural network with a single input, a single output, and one hidden layer that uses the hyperbolic tangent as its activation function: \[u(x)=\sum_{i=1}^{N}w_{i}^{(1)}\,\tanh\left(w_{i}^{(0)}x+b_{i}^{(0)}\right)+b_{0}\,, \tag{2.7}\] where \(w_{i}^{(0)}\) denote the weights between the input and hidden layers, and \(w_{i}^{(1)}\) are the ones between the hidden and output layers. \(b_{i}^{(0)}\) is the bias for the hidden units and \(b_{0}\) is the bias for the output unit. The magnitude of the output function is determined by \(w_{i}^{(1)}\), while \(w_{i}^{(0)}\), within the activation function, influence the local gradient of the function (figure 2\(e\)).

Common practice involves initializing the weights of the network to follow a Gaussian distribution with zero mean and a specified variance. For example, Xavier initialization uses a specified variance \(V_{ar}=\sqrt{2/(N_{l-1}+N_{l})}\), where \(N_{l-1}\) and \(N_{l}\) are the number of units in the preceding and succeeding layers, respectively.

Figure 3: **Neural network settings for high-frequency functions**. \((a)\) Fitting of a neural network to the data from (2.6) with \(m=30\), by either changing the activation function of the first hidden layer to \(\sin(x)\) or multiplying the weights \(w_{i}^{(0)}\) before the first hidden layer by a scale factor \(\kappa\). Neither captures the data. \((b)\) Comparison of training loss for the neural networks with different settings. \((c)\) A neural network using the \(\sin(x)\) activation function and a modified scale factor \(\hat{\kappa}=35\) fits the high-frequency data (2.6) with \(m=30\) well, reaching the same accuracy as \((d)\) a Fourier feature network [31].
This initialization ensures that the variance of the sum of all unit outputs in each hidden layer remains \(O(1)\), which prevents gradient vanishing or explosion during training. However, a side effect of this approach is that the neural network becomes a slowly varying function of the normalized inputs. For a high-frequency function with normalized input and output and a dominant frequency \(f_{d}\), the magnitude of its gradient scales as \(O(2\pi f_{d})\) (figure 2\(c\)). To capture these large gradients, considering a one-hidden-layer network with a single input and output (2.7), the weights \(w_{i}^{(0)}\) within the activation function need to increase from their initialized value of \(O(\sqrt{1/V_{ar}})\) to \(O(2\pi f_{d})\) during training. This large shift in weight values, particularly for large \(f_{d}\), leads to slower convergence during training or an inaccurate approximation of the data. To address this issue, we multiply the weights within the activation function by a large scale factor \(\kappa\) [35] to expedite the convergence of the weights towards their optimal, high values when fitting high-frequency data. We apply the scale factor \(\kappa\) only to the weights between the input and the first hidden layer, rather than to all weights, to prevent gradient explosion [36] during training.

Besides large gradients, high-frequency functions also have a large number of inflection points. The hyperbolic tangent, being a monotonic function, struggles to capture this feature. Periodic functions, such as the sine or cosine, are more suitable choices for the activation function in this case [16]. In our approach, we use the sine function solely for the first hidden layer while retaining the hyperbolic tangent for the remaining layers. This combination allows us to capture both low- and high-frequency data effectively.

Figure 3 illustrates the impact of the scale factor \(\kappa\) and the choice of activation function on improving the fit for high-frequency data. Using a combination of the scale factor \(\kappa\) and the sine function for the first hidden layer yields the best training result. This combination equates to applying a Fourier feature mapping (i.e., a Fourier feature network) [31] to the input before it is passed through the multi-layer network.
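A minimal sketch of these two frequency settings is given below (PyTorch; the class name, widths, and the particular way \(\kappa\) is applied to the first-layer pre-activation are illustrative assumptions): a sine activation on the first hidden layer only, with \(\kappa\) chosen according to criterion (2.9).

```python
import math
import torch

class FirstLayerSineMLP(torch.nn.Module):
    """tanh MLP whose first hidden layer uses sin with a scale factor kappa."""
    def __init__(self, kappa, width=20, depth=3):
        super().__init__()
        self.kappa = kappa
        self.first = torch.nn.Linear(1, width)
        self.hidden = torch.nn.ModuleList(
            [torch.nn.Linear(width, width) for _ in range(depth - 1)]
        )
        self.out = torch.nn.Linear(width, 1)

    def forward(self, x):
        # scale and sine on the first layer only, to avoid gradient explosion
        h = torch.sin(self.kappa * self.first(x))
        for layer in self.hidden:
            h = torch.tanh(layer(h))
        return self.out(h)

f_d = 5.5                                      # dominant frequency of the data, figure 2(b)
v_ar = math.sqrt(2.0 / (1 + 20))               # V_ar for the first layer (as defined above)
kappa = 1.5 * math.pi * f_d * math.sqrt(v_ar)  # criterion (2.9): kappa > pi*f_d*sqrt(V_ar)
model = FirstLayerSineMLP(kappa)
```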
Figure 3(\(c\)&\(d\)) compares the convergence rate and final error of both methods when fitting the same high-frequency data. The results are consistently good, verifying the efficacy of both methods in fitting high-frequency data.

To expedite the convergence of the weights from their initialized value \(O(\sqrt{1/V_{ar}})\) to the high-gradient value \(O(2\pi f_{d})\) of a high-frequency function, the optimal value of \(\kappa\) is expected to depend on the variance \(V_{ar}\), which is tied to the weight initialization approach, and on the size of the neural network. To isolate the impact of \(V_{ar}\) on determining the optimal value of the scale factor, we introduce a modified scale factor \(\hat{\kappa}\), \[\hat{\kappa}=\kappa/\sqrt{V_{ar}}\,. \tag{2.8}\]

Figure 4: **Importance of the modified scale factor \(\hat{\kappa}\)**. \((a)\) Fitting of neural networks to the data from (2.6) for \(m=30\) using different modified scale factors \(\hat{\kappa}\). The networks start overfitting the data when \(\hat{\kappa}\geq 60\). \((b)\) Training loss for the neural networks with different modified scale factors \(\hat{\kappa}\). When \(\hat{\kappa}\geq 60\), the training loss decreases very fast due to overfitting. \((c)\) Relation of the root mean square value \(\epsilon\) of the error \(e(x)\) between the trained network \(u(x)\) and the target function \(u_{g}(x)\) with the modified scale factor \(\hat{\kappa}\). The minimal error is reached when \(\pi f_{d}<\hat{\kappa}<N_{d}/6\), where \(f_{d}\) denotes the dominant frequency and \(N_{d}\) the total number of training data points.

Figure 4\((c)\) shows that the minimal fitting error is achieved when the modified scale factor satisfies \[\hat{\kappa}>\pi f_{d}\,,\qquad\text{namely}\qquad\kappa>\pi f_{d}\sqrt{V_{ar}}\,, \tag{2.9}\] where \(f_{d}\) denotes the dominant frequency of the data. This finding is intuitive, as a scale factor \(\hat{\kappa}\) that meets the criterion (2.9) allows the neural network to directly capture the large gradient \(O(2\pi f_{d})\) of the high-frequency data. However, setting \(\hat{\kappa}\) too high, close to the number of data points \(N_{d}\), results in overfitting. Figure 4\((a)\) shows that a neural network trained with a scale factor \(\hat{\kappa}=300\) to fit the training data (\(N_{d}=300\)) sampled from the high-frequency function (2.6) with \(m=30\) overfits the data. While the training loss is significantly small (figure 4\(b\)), the validation error is extremely large. To mitigate overfitting, figure 4\((c)\) suggests that the modified scale factor \(\hat{\kappa}\) should be less than one-sixth of the total number of data points \(N_{d}\). As a rule of thumb, for the optimal fitting of high-frequency data, besides satisfying (2.9), the number of training data points \(N_{d}\) should also meet the criterion \[N_{d}/6>\pi f_{d}\qquad\Longrightarrow\qquad N_{d}>6\pi f_{d}. \tag{2.10}\] Given that the training domain is often normalized to \([-1,1]\), which contains \(2f_{d}\) dominant periods, the criterion (2.10) essentially requires a minimum of \(3\pi\approx 10\) data points within each dominant period \(1/(2f_{d})\) to ensure optimal fitting of the neural network to the high-frequency data. Unless otherwise specified, the criterion (2.10) is applied to all the example problems in this section.

### Algorithm of multi-stage training scheme for regression problems

Incorporating these key settings for higher-stage neural network training, we summarize the complete procedure of the multi-stage training scheme for regression problems in Algorithm 1.

**Algorithm 1** Multi-stage training scheme for regression problems.

This underscores that a larger neural network is not inherently advantageous; an appropriate training scheme is more vital and efficient for the reduction of the training loss. In fact, the power of the multi-stage training scheme lies not only in boosting the convergence of the training, but also in fundamentally enabling neural networks to approximate a target function with arbitrary accuracy, as required. We now convert the weights, biases, and training data from single-precision to double-precision floating point, and create the third- and fourth-stage neural networks in accordance with Algorithm 1.
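The residue-driven settings in Algorithm 1 can be estimated numerically; the sketch below (NumPy; the helper name is ours, and taking the spectrum's peak as \(f_{d}\) is a simplification) reads the dominant frequency of the residue off its FFT, sets \(\hat{\kappa}\) inside the window \((\pi f_{d},\,N_{d}/6)\), and checks the data-count criterion (2.10).

```python
import numpy as np

def stage_settings(x, residue):
    """x: uniform samples in [-1, 1]; residue: e_n(x) from the previous stage."""
    n_d = len(x)
    eps_n = np.sqrt(np.mean(residue ** 2))           # RMS magnitude of the residue
    spectrum = np.abs(np.fft.rfft(residue))
    freqs = np.fft.rfftfreq(n_d, d=x[1] - x[0])      # frequencies in cycles per unit x
    f_d = freqs[np.argmax(spectrum[1:]) + 1]         # dominant (non-DC) frequency
    kappa_hat = 2.0 * np.pi * f_d                    # inside (pi*f_d, N_d/6) in practice
    if n_d <= 6 * np.pi * f_d:                       # criterion (2.10)
        print("warning: too few data points for this residue frequency")
    return eps_n, f_d, kappa_hat
```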
Figure 5\((d)\) shows that the error \(e_{4}(x)\) between the sum of the four staged networks and the data successfully approaches the machine precision of a 64-bit double float. As long as higher-precision floating point is used, the error can be reduced further with additional stages of training. Figure 5\((c)\) shows that the overall convergence rate of the root mean square value \(\epsilon\) of the error \(e(x)\) between the network and the data under the multi-stage training scheme follows \(\epsilon\sim\exp(-\sqrt{n_{iters}}/25)\), closely approximating exponential decay. In contrast, regular single-stage training only exhibits a linear decay, \(\epsilon\sim 1/n_{iters}\). That is to say, without even considering the risk of being trapped in local minima, it would take at least \(O(10^{10})\) iterations for single-stage training to reach an error of \(O(10^{-10})\). With the multi-stage training scheme, it only requires about \(6\times 10^{5}\) iterations to reach the same error, which is _four orders of magnitude faster_.

Moreover, we note that the number of data points required for higher-stage training also needs to increase, following the criterion (2.10). Figure 5\((d)\) shows that the dominant frequency of the residue after three-stage training can reach \(f_{d}=150\). This implies that the minimal number of training data points to guarantee the success of the fourth-stage training is \(N_{d}>6\pi f_{d}\approx 2830\). Figure 5\((e)\) shows that the relation between the dominant frequency \(f_{d}\) and the magnitude \(\epsilon\) of the residue after different stages of training empirically follows a power law \[f_{d}\approx f_{0}\epsilon^{-\alpha},\quad\text{with the exponent }\alpha=1/6\,, \tag{2.11}\] where \(f_{0}\) denotes the dominant frequency of the original training data. Ideally, the frequency of the residue \(e(x)\) should increase only gradually as its magnitude decreases; that is, the exponent \(\alpha\) should be close to \(0\). Considering the error between a neural network \(u(x)\) and a target function \(u_{g}(x)\) of magnitude \(\epsilon_{0}\) and dominant frequency \(f_{d}\), the error between the \(m\)-th derivatives of \(u(x)\) and \(u_{g}(x)\) becomes \[\frac{d^{m}}{dx^{m}}u(x)-\frac{d^{m}}{dx^{m}}u_{g}(x)=\frac{d^{m}}{dx^{m}}e(x)\sim(2\pi f_{d})^{m}\epsilon_{0}\sim\epsilon_{0}^{1-\alpha m}\,, \tag{2.12}\] where we derive the last expression using (2.11). We find that when \(1-\alpha m<0\), even if the magnitude of the error \(\epsilon_{0}\) is small, the error in the high derivatives \(m>1/\alpha\) can still exceed \(1\). This indicates that a trained neural network with high-frequency error tends to miss the high-derivative (\(m>1/\alpha\)) information of the target function underlying the training data. Hence, our goal is to achieve a smaller \(\alpha\) value during training, which enables the neural network to learn the high-derivative information from the data more accurately. However, figure 5\((e)\) shows that, for regression problems, the exponent \(\alpha\) from the multi-stage training scheme appears to be universal, independent of both target functions and neural network settings.

Figure 5: **Multi-stage neural networks**. \((a)\) Fitting of the first-stage neural network (red dashed curve) to the data from a given target function (blue curve). \((b)\) Training loss \(\mathcal{L}\) over the iterations based on the multi-stage training scheme. \((c)\) Evolution of the root mean square value \(\epsilon\) of the error \(e_{n}(x)\) over the iterations, which follows \(\epsilon\sim\exp(-\sqrt{n_{iters}}/25)\), close to an exponential decay. For single-stage training (\(c\)-inset), the error convergence only follows a linear decay, \(\epsilon\sim 1/n_{iters}\). \((d)\) Fitting of higher-stage networks to the error of lower-stage training. The frequency domain of the error \(e_{n}(x)\) for each stage is shown in the right column. After four stages of training, the error between the data and the combined networks is close to the machine precision of a double float (64-bit). \((e)\) Relation of the dominant frequency \(f_{d}\) and the root mean square value \(\epsilon\) of the error \(e_{n}(x)\) after different stages of training follows a power law (2.11) with an exponent \(\alpha\) independent of (i) target functions, (ii) neural network size, and (iii) the number of data points.
To further reduce \(\alpha\), high-derivative information about the target function would be required for the training. This information is often absent in regression problems, while it is readily available for physics-informed neural networks. The methodology for reducing the exponent \(\alpha\) for PINNs is addressed in a later section (§3.4).

Figure 6(\(a\)-\(d\)) shows that the multi-stage training scheme is equally applicable to 2D regression problems. The convergence rate of the loss function for the 2D problem roughly follows \(\epsilon\sim\exp(-\sqrt[3]{n_{iters}}/7)\) (figure 6\(f\)), slower than that for the 1D problem, but still much faster than the linear decay seen with regular single-stage training. Figure 6(\(e\)) shows that the relation between the dominant frequency \(f_{d}\) and the root mean square value \(\epsilon\) of the 2D residue \(e(x,y)\) follows the same power law (2.11) with the exponent \(\alpha\approx 1/6\).

Figure 6: **Multi-stage neural networks for a 2D target function**. \((a)\) Fitting of the first-stage neural network \(u_{0}(x,y)\) to the data from a 2D target function \(u_{g}(x,y)\). \((b)\)-\((d)\) Fitting of higher-stage networks \(u_{i}(x,y)\) to the error \(e_{i}(x,y)\) of lower-stage training. The frequency domain of the error at each stage is given. \((e)\) Relation of the dominant frequency \(f_{d}\) with the root mean square value \(\epsilon\) of the error \(e_{n}(x,y)\) after different stages of training follows the same power law as the 1D problem, with exponent \(\alpha=1/6\). \((f)\) Training loss \(\mathcal{L}\) over the iterations of the multi-stage neural networks. The inset shows that the evolution of the root mean square error \(\epsilon\) over the iterations for the 2D regression problem follows \(\epsilon\sim\exp(-\sqrt[3]{n_{iters}}/7)\), slightly slower than for the 1D problem (see inset of (\(e\))).

## 3 Multi-stage training for physics-informed neural networks

The multi-stage training scheme is particularly critical when we use neural networks to approximate solutions governed by equations, where the demand for precision is high and essential for the usefulness of the solution. Here we apply the multi-stage idea to physics-informed neural networks (PINNs) to improve their accuracy to machine precision. Unlike classical numerical methods (e.g., finite difference), which can steadily enhance the accuracy of the solution by reducing the grid size, PINNs cannot efficiently reduce solution errors merely by adding more collocation points or enlarging the neural network, similar to the issue seen with regression problems (see Appendix A).
This has made PINNs a less favored method for much of the scientific research that demands high-precision predictions. In this section, we show that the multi-stage training scheme can be extended to address this limitation of PINNs.

The general procedure of the multi-stage training scheme for physics-informed neural networks (PINNs) mirrors that for regression problems (Algorithm 1). However, _two_ new challenges emerge when applying multi-stage training to PINNs. _First_, for regression problems, we can directly determine the magnitude \(\epsilon\) and dominant frequency \(f_{d}\) of the target function for each stage of training from the residue of the lower-stage training. For PINNs, however, these two quantities are not readily obtainable because we lack the exact solution required to estimate the error of the lower-stage training. In addition, the loss function of PINNs involves both a data loss and an equation loss, defined as \[\mathcal{L}=(1-\gamma)\mathcal{L}_{d}+\gamma\mathcal{L}_{e}\qquad\text{with} \tag{3.1}\] \[\mathcal{L}_{d}=\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}[u(x_{i})-u_{i}]^{2}\quad\text{and}\quad\mathcal{L}_{e}=\frac{1}{N_{e}}\sum_{j=1}^{N_{e}}[r(x_{j},u(x_{j}))]^{2}, \tag{3.2}\] where \(N_{d}\) represents the number of data points, commonly employed for the boundary conditions, and \(N_{e}\) is the number of collocation points, which are utilized to examine the equation residue \(r(x,u)\) at various positions within the domain. In comparison to regression problems, \(\gamma\), known as the equation weight, is an additional hyper-parameter that balances the significance of the two losses during training. How to determine an appropriate value of \(\gamma\) for higher stages of training is the _second_ challenge. Using a simple example, we will demonstrate new algorithms to address these challenges and develop a modified multi-stage training scheme for PINNs.

### First challenge: magnitude and frequency of higher-stage network

As discussed in Section 2.2, the effectiveness of the multi-stage training scheme depends largely on the optimal setting of the higher-stage neural networks \(u_{n}\), which is based on the magnitude and frequency of the residue \(e_{n}\) between the combined lower-stage networks and the ground truth \(u_{g}\). However, this information is not directly accessible for PINNs because we do not have the exact solution \(u_{g}(x)\) of the equation, which is required to estimate the error \(e(x)=u_{g}-u_{0}\) of the lower-stage trained networks. Instead, the only information we have is the equation residue \(r(x,u_{0})\) associated with the trained first-stage network \(u_{0}\). Thus, understanding the relation between the equation residue \(r(x,u_{0})\) and the error \(e(x)\) of the lower-stage networks is crucial for determining the settings for the higher-stage training of PINNs.

#### 3.1.1 A simple example

We consider a first-order ordinary differential equation with a boundary condition, \[\frac{du}{dx}=u+x\qquad\text{with}\qquad u(0)=1\,, \tag{3.3}\] which has the exact solution \(u_{g}(x)=2e^{x}-x-1\). Figure 7\((a)\) shows the single-stage network \(u_{0}(x)\) trained to solve equation (3.3) via a PINN, which matches the exact solution \(u_{g}(x)\) well.
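As an illustration of the PINN loss (3.1)-(3.2) for the model problem (3.3), the sketch below (PyTorch; the network size, collocation count, and the domain \([0,1]\) are illustrative assumptions) builds the equation residue with automatic differentiation and weights it against the boundary-condition data loss. Note that the collocation points are re-sampled on every call, anticipating the stochastic sampling discussed in Section 3.3.1.

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

def pinn_loss(gamma):
    # equation loss L_e on freshly sampled collocation points
    xc = torch.rand(128, 1, requires_grad=True)
    u = net(xc)
    du = torch.autograd.grad(u, xc, torch.ones_like(u), create_graph=True)[0]
    loss_e = torch.mean((du - (u + xc)) ** 2)       # residue of du/dx = u + x
    # data loss L_d enforcing the boundary condition u(0) = 1
    loss_d = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()
    return (1 - gamma) * loss_d + gamma * loss_e    # weighted loss (3.1)
```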
The equation residue \(r_{1}(x,u_{0})\) associated with the network \(u_{0}(x)\) is \[r_{1}(x,u_{0})=\frac{du_{0}}{dx}-(u_{0}+x)\,, \tag{3.4}\] which has the same dominant frequency as the error \(e_{1}(x)=u_{g}(x)-u_{0}(x)\) between the trained network \(u_{0}(x)\) and the exact solution \(u_{g}(x)\). However, the magnitude of the equation residue \(r_{1}(x,u_{0})\) is one order of magnitude larger than that of the error \(e_{1}(x)\). To elucidate their relation, we introduce the ansatz \[u_{g}(x)=u_{0}(x)+\epsilon_{1}u_{1}(x)\qquad\text{with}\quad e_{1}(x)=\epsilon_{1}u_{1}(x)\,, \tag{3.5}\] where \(\epsilon_{1}\) denotes the magnitude of the error \(e_{1}(x)\), and \(u_{1}(x)\) is the normalized function within the domain. Substituting the ansatz (3.5) into (3.3) and rearranging gives \[-\epsilon_{1}\left(\frac{du_{1}}{dx}-u_{1}\right)=\frac{du_{0}}{dx}-(u_{0}+x)\,. \tag{3.6}\] Recalling (3.4), the right-hand side of (3.6) is the equation residue. Thus, the relation between the prediction error \(e_{1}(x)\) and the equation residue \(r_{1}(x,u_{0})\) is \[-\epsilon_{1}\left(\frac{du_{1}}{dx}-u_{1}\right)=r_{1}(x,u_{0})\,, \tag{3.7}\] which also becomes the governing equation for the second-stage training, with \(u_{1}(x)\) the second-stage neural network. The boundary condition for \(u_{1}\), based on (3.3) and (3.5), is \[\epsilon_{1}u_{1}(0)=1-u_{0}(0)\qquad\Longrightarrow\qquad u_{1}(0)=\frac{1-u_{0}(0)}{\epsilon_{1}}\,. \tag{3.8}\]

Figure 7: **Comparison of prediction error with equation residue of PINNs**. \((a)\) Exact solution \(u_{g}(x)\) and neural network prediction \(u_{0}(x)\) for equation (3.3). \((b)\) Comparison of the equation residue \(r_{1}(x,u_{0})\) associated with the neural network prediction \(u_{0}(x)\) with the prediction error \(e_{1}(x)\) between \(u_{0}(x)\) and the exact solution \(u_{g}(x)\); the two have different magnitudes. \((c)\) Frequency domain of the equation residue \(r_{1}(x,u_{0})\) and the prediction error \(e_{1}(x)\), which share the same dominant frequency.

With the appropriate setting of the equation weight \(\gamma\) (as discussed in a later section, §3.2), the data loss of the first-stage training should be much smaller than the equation loss. This indicates that the error \(e_{1}(x)\), as well as \(u_{1}(x)\), has a much smaller value at the boundary than within the domain; the boundary condition \(u_{1}(0)\) can therefore be taken as \(0\). With zero boundary conditions, the magnitude and frequency of the solution \(u_{1}(x)\) are governed by the source function. For a linear equation, the dominant frequency of \(u_{1}(x)\) must equal that of the source function, namely the equation residue \(r_{1}(x,u_{0})\); otherwise, the equation cannot be balanced in the frequency domain. From (3.7), the magnitude \(\epsilon_{1}\) of the error \(e_{1}(x)\) also appears to be the same as that of the equation residue \(r_{1}(x,u_{0})\). However, this is only true when the solution \(u_{1}(x)\) is a low-frequency function. For a high-frequency function, its derivative, which represents its local gradient, becomes large and scales as \(O(2\pi f_{d})\), as discussed in Section 2.2, where \(f_{d}\) is the dominant frequency of the function.
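The manipulation (3.5)-(3.7) can be checked symbolically; the short SymPy sketch below (illustrative, not part of the paper's workflow) verifies that substituting the ansatz into (3.3) reproduces the second-stage equation (3.7).

```python
import sympy as sp

x, eps = sp.symbols("x epsilon")
u0 = sp.Function("u0")(x)                                 # first-stage network
u1 = sp.Function("u1")(x)                                 # normalized error function

ansatz_residue = sp.diff(u0 + eps * u1, x) - ((u0 + eps * u1) + x)  # plug (3.5) into (3.3)
r1 = sp.diff(u0, x) - (u0 + x)                            # equation residue (3.4)
lhs_37 = -eps * (sp.diff(u1, x) - u1)                     # left-hand side of (3.7)

# ansatz_residue vanishes for the exact solution, so r1 = lhs_37; this prints 0
print(sp.simplify(ansatz_residue - (r1 - lhs_37)))
```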
Given that \(u_{1}(x)\) shares the same dominant frequency as the equation residue \(r_{1}(x,u_{0})\), the magnitude \(\epsilon_{1}\) of the error \(e_{1}(x)\) between the network \(u_{0}(x)\) and the exact solution \(u_{g}(x)\) can be determined by equating the magnitudes of the leading-order terms on both sides of equation (3.7), which gives \[2\pi f_{d}\epsilon_{1}\sim\epsilon_{r_{1}}\qquad\Longrightarrow\qquad\epsilon_{1}=\frac{\epsilon_{r_{1}}}{2\pi f_{d}}\quad\text{with}\quad\epsilon_{r_{1}}=\text{RMS}(r_{1}(x,u_{0}))\,, \tag{3.9}\] where we use the root mean square (RMS) value \(\epsilon_{r_{1}}\) to represent the magnitude of the equation residue \(r_{1}(x,u_{0})\). Figure 7\((c)\) shows that the dominant frequencies of the equation residue \(r_{1}(x,u_{0})\) and the prediction error \(e_{1}(x)\) are the same, around \(f_{d}\approx 1.5\). Based on (3.9), the magnitude \(\epsilon_{1}\) of the error should be \(2\pi f_{d}\approx 10\) times smaller than that of the equation residue, consistent with the result shown in figure 7\((b)\).

#### 3.1.2 Magnitude and frequency estimation for general differential equations

To extend the relations between the properties of the equation residue \(r_{1}(x,u_{0})\) and the prediction error \(e_{1}(x)\) to general equations, we now consider a general form of ordinary differential equations \[\mathcal{N}\left(x,u,u^{(1)},...u^{(m)}\right)=F(x)\qquad\text{with}\ \ u^{(i)}=\frac{d^{i}u}{dx^{i}}\ \ \text{for}\ \ i=1,2,...,m\,, \tag{3.10}\] where \(\mathcal{N}\) is a nonlinear differential operator that involves \(x\), \(u\), and its derivatives \(u^{(i)}\) at different orders; \(m\) represents the highest order of derivative of \(u\) in the equation; and \(F(x)\) is a source function with a known expression. We denote by \(u_{g}(x)\) the exact solution to the equation and by \(u_{0}\) the first-stage neural network prediction. Introducing the ansatz (3.5) and substituting into (3.10), we have \[\mathcal{N}\left(x,(u_{0}+\epsilon_{1}u_{1}),[u_{0}+\epsilon_{1}u_{1}]^{(1)},...,[u_{0}+\epsilon_{1}u_{1}]^{(m)}\right)=F(x)\,. \tag{3.11}\] Considering that the first-stage neural network \(u_{0}\) captures the main variation of the exact solution \(u_{g}\), the magnitude \(\epsilon_{1}\) of the error \(e_{1}(x)\) between \(u_{g}\) and \(u_{0}\) is much smaller than one, namely \(\epsilon_{1}\ll 1\). In that case, equation (3.11) can be rewritten as a Taylor expansion of the nonlinear function \(\mathcal{N}\). After rearrangement, this gives \[-\epsilon_{1}\left(\left.\frac{\partial\mathcal{N}}{\partial u}\right|_{u=u_{0}}u_{1}+\left.\frac{\partial\mathcal{N}}{\partial u^{(1)}}\right|_{u=u_{0}}u_{1}^{(1)}+...+\left.\frac{\partial\mathcal{N}}{\partial u^{(m)}}\right|_{u=u_{0}}u_{1}^{(m)}\right)+O(\epsilon_{1}^{2})=\mathcal{N}\left(x,...,u_{0}^{(m)}\right)-F(x)\,, \tag{3.12}\] where \(u\) and its derivatives \(u^{(i)}\) at different orders are considered as separate independent variables of the function \(\mathcal{N}\). Because \(\epsilon_{1}\ll 1\), all the nonlinear terms of \(u_{1}\) fall into the higher-order \(O(\epsilon_{1}^{2})\) term and can generally be disregarded. This suggests that regardless of whether the original equation is linear or nonlinear, the governing equations for higher-stage networks essentially become _linear_ equations. This is a key factor that ensures the success of the multi-stage training scheme for PINNs.
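Numerically, the estimate (3.9) can be evaluated directly from the sampled equation residue; a minimal NumPy sketch (the helper name is ours, and the optional derivative order \(m\) anticipates the generalization derived below) follows.

```python
import numpy as np

def estimate_error_magnitude(x, r1, m=1):
    """x: uniform grid; r1: equation-residue samples; m: highest derivative order."""
    eps_r1 = np.sqrt(np.mean(r1 ** 2))                       # RMS of the equation residue
    freqs = np.fft.rfftfreq(len(x), d=x[1] - x[0])
    f_d = freqs[np.argmax(np.abs(np.fft.rfft(r1))[1:]) + 1]  # dominant (non-DC) frequency
    eps_1 = eps_r1 / (2 * np.pi * f_d) ** m                  # magnitude estimate (3.9)
    return eps_1, f_d
```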
Since the right-hand side of (3.12) is the equation residue of \(u_{0}\), the final equation for \(u_{1}\) reads \[-\epsilon_{1}\left(\left.\frac{\partial\mathcal{N}}{\partial u}\right|_{u=u_{0}}u_{1}+\left.\frac{\partial\mathcal{N}}{\partial u^{(1)}}\right|_{u^{(1)}=u_{0}^{(1)}}u_{1}^{(1)}+...+\left.\frac{\partial\mathcal{N}}{\partial u^{(m)}}\right|_{u^{(m)}=u_{0}^{(m)}}u_{1}^{(m)}\right)=r_{1}(x,u_{0})\,, \tag{3.13}\] or, in short form, \[-\epsilon_{1}\sum_{k=0}^{m}\beta_{k}u_{1}^{(k)}=r_{1}(x,u_{0})\qquad\text{with}\quad\beta_{k}=\left.\frac{\partial\mathcal{N}}{\partial u^{(k)}}\right|_{u=u_{0}}\quad\text{and}\quad u_{1}^{(k)}=\frac{d^{k}u_{1}}{dx^{k}}\,. \tag{3.14}\] As mentioned earlier, if \(u_{0}\) is correctly trained, the boundary condition for \(u_{1}\) should be close to \(0\). Given that (3.14) is linear, the magnitude and frequency of \(u_{1}\) can be determined from the equation residue \(r_{1}(x,u_{0})\) by matching the magnitude and frequency of the dominant term (the term with the largest magnitude) on the left-hand side of (3.14) with those of \(r_{1}(x,u_{0})\). Considering a physical equation with coefficients of similar scale for each term, and assuming \(u_{1}\) to be a high-frequency function with a dominant frequency far exceeding that of \(u_{0}\), the dominant term on the left-hand side of (3.14) is expected to be the one involving the highest-order derivative of \(u_{1}\), namely \(\epsilon_{1}\beta_{m}u_{1}^{(m)}\). We denote the dominant frequencies of \(\beta_{m}\) and \(r_{1}(x,u_{0})\) as \(f_{d}^{(\beta)}\) and \(f_{d}^{(r)}\), respectively. Then, the dominant frequency \(f_{d}^{(1)}\) of \(u_{1}\) satisfies \[f_{d}^{(\beta)}+f_{d}^{(1)}=f_{d}^{(r)}\qquad\Longrightarrow\qquad f_{d}^{(1)}=f_{d}^{(r)}-f_{d}^{(\beta)}. \tag{3.15}\] Since \(\beta_{m}\) is a function only of the lower-stage network \(u_{0}\), while the dominant frequency \(f_{d}^{(1)}\) of the higher-stage network \(u_{1}\) is much larger than that of \(u_{0}\), we have \(f_{d}^{(1)}\gg f_{d}^{(\beta)}\) even if \(\beta_{m}\) is a highly nonlinear function. Combined with (3.15), we obtain \[f_{d}^{(1)}=f_{d}^{(r)}-f_{d}^{(\beta)}\approx f_{d}^{(r)}\,, \tag{3.16}\] i.e., the dominant frequency of \(u_{1}\) is mainly governed by that of the equation residue. With the dominant frequency determined, the magnitude \(\epsilon_{1}\) of the error between \(u_{g}\) and \(u_{0}\) can be derived by matching the magnitudes of the terms \(\epsilon_{1}\beta_{m}u_{1}^{(m)}\) and \(r_{1}(x,u_{0})\), which gives \[\epsilon_{1}\cdot\epsilon_{\beta}\left[2\pi f_{d}^{(r)}\right]^{m}=\epsilon_{r_{1}}\qquad\Longrightarrow\qquad\epsilon_{1}=\frac{\epsilon_{r_{1}}}{\left[2\pi f_{d}^{(r)}\right]^{m}\epsilon_{\beta}} \tag{3.17}\] \[\text{with}\qquad\epsilon_{\beta}=\text{RMS}(\beta_{m})\quad\text{and}\quad\epsilon_{r_{1}}=\text{RMS}(r_{1}(x,u_{0}))\,,\] where we use the root mean square (RMS) values \(\epsilon_{\beta}\) and \(\epsilon_{r_{1}}\) to represent the magnitudes of \(\beta_{m}\) and the equation residue \(r_{1}(x,u_{0})\), respectively. The relations (3.16) and (3.17) can also be generalized to _partial_ differential equations, for which we calculate the dominant frequency of \(u_{1}\) with respect to each independent variable \(x_{i}\), namely \[f_{d}^{(1,x_{i})}\approx f_{d}^{(r,x_{i})}\qquad\text{for}\quad i=1,2,...,N\,, \tag{3.18}\] where \(N\) is the total number of independent variables of the equation.
Then, the magnitude \(\epsilon_{1}\) of the error between the first-stage network \(u_{0}\) and the exact solution \(u_{g}\) is \[\epsilon_{1}=\frac{\epsilon_{r_{1}}}{\epsilon_{\beta}\left[2\pi f_{d}^{(r,x_{1})}\right]^{m_{1}}\cdot\left[2\pi f_{d}^{(r,x_{2})}\right]^{m_{2}}...\left[2\pi f_{d}^{(r,x_{N})}\right]^{m_{N}}}\,, \tag{3.19}\] where \(m_{1}+m_{2}+...+m_{N}\) represents the highest order of partial derivative of \(u_{1}\) in the equation. For most equations, the relations (3.16)-(3.19) are sufficiently accurate to estimate the magnitude and frequency of the network for higher-stage PINN training. However, there are two types of nonlinear equations where these relations may not hold exactly; they are discussed in Appendix C.

#### 3.1.3 Importance of magnitude and frequency for higher-stage PINN training

The proper setting of the magnitude and frequency of a neural network, shown to be essential for regression problems in Section 2.2, is equally critical for physics-informed neural networks, especially during higher-stage training. To illustrate this, we use a Poisson equation with a high-frequency source function, which represents the equation residue, and zero boundary conditions. This setup effectively mimics the governing equation for higher-stage PINN training. The equation reads \[u_{xx}+u_{yy}=-\sin(6\pi x)\sin(6\pi y)\qquad\text{with}\quad u(x,\pm 1)=u(\pm 1,y)=0\,. \tag{3.20}\]

Figure 8: **Importance of rescaling PINN magnitude and frequency**. \((a)\) Exact solution \(u_{g}(x,y)\) to equation (3.20) and the general setting of PINNs for four cases. \((b)\) Evolution of the data loss and equation loss over the training of solving (3.20) via PINNs for different magnitude prefactors \(\epsilon_{0}\) and modified scale factors \(\hat{\kappa}_{0}\). \((c)\) Trained network \(u_{0}(x,y)\) of the solution to (3.20), the associated equation residue \(r_{1}(x,y,u_{0})\), and the prediction error \(e_{1}(x,y)\) under different magnitude prefactors \(\epsilon_{0}\) and modified scale factors \(\hat{\kappa}_{0}\) (Cases 1-4). The trained network with \(\epsilon_{0}\) and \(\hat{\kappa}_{0}\) from (3.18) and (3.19) gives the prediction with the lowest relative error \(e_{rr}\).

At first glance, one might assume that the solution \(u\) has the same order of magnitude as the source function, which is \(O(1)\). However, based on the analysis (3.18) in Section 3.1.2, the solution should have a dominant frequency \(f_{d}=3\) with respect to both independent variables \(x\) and \(y\). Given that the highest order of derivative in (3.20) is \(m=2\), the magnitude of the solution \(u\) can then be derived from (3.19) as \(1/[2\pi f_{d}]^{2}\sim O(10^{-3})\), as reflected in the exact solution, \[u_{g}(x,y)=\frac{1}{2(6\pi)^{2}}\sin(6\pi x)\sin(6\pi y)\,. \tag{3.21}\] Figure 8 shows the neural network predicted solution \(u_{0}\) to (3.20) via PINNs under different settings of the magnitude (via the magnitude prefactor \(\epsilon_{0}\)) and frequency (via the modified scale factor \(\hat{\kappa}_{0}\)). In these cases, we assume that the correct value of the equation weight \(\gamma\) is used. As shown in figure 8\((b)\), only when both magnitude and frequency are correctly set in accordance with (3.18) and (3.19) does the neural network successfully converge to the exact solution at a rapid convergence rate.
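In code, the two settings enter the higher-stage ansatz directly; the sketch below (PyTorch, with illustrative widths) rescales the raw network output by the magnitude prefactor \(\epsilon_{0}\) from (3.19) and scales the first layer so its frequency content matches \(f_{d}=3\) from (3.18), for the Poisson problem (3.20).

```python
import math
import torch

f_d = 3.0                                   # dominant frequency from (3.18)
eps0 = 1.0 / (2 * math.pi * f_d) ** 2       # magnitude estimate from (3.19), m = 2
kappa0 = 2 * math.pi * f_d                  # first-layer scale matched to f_d

first = torch.nn.Linear(2, 30)              # two inputs: (x, y)
body = torch.nn.Sequential(
    torch.nn.Linear(30, 30), torch.nn.Tanh(),
    torch.nn.Linear(30, 1),
)

def u0(xy):
    h = torch.sin(kappa0 * first(xy))       # sine first layer at the right frequency
    return eps0 * body(h)                   # output rescaled to the solution magnitude
```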
#### 3.1.4 Algorithm for determining the solution magnitude for higher-stage PINN training

Besides the theoretical relations (3.16)-(3.19) derived in Section 3.1.2, we also develop a general algorithm to determine the magnitude of the solution to linear differential equations with high-frequency source functions and zero boundary conditions, which mimic the higher-stage training of PINNs. The algorithm can subsequently be combined with Algorithm 1 to extend the multi-stage training scheme to PINNs. The specific steps of the algorithm are given in Algorithm 2.

The principle underlying the algorithm is that the dominant frequency of the solution \(u(\mathbf{x})\) mirrors that of the source function \(s(\mathbf{x})\). Therefore, the amplification effect of the derivatives, attributable to the high-frequency nature of the solution, can be well estimated by taking \(s(\mathbf{x})\) as a guess solution. We define the ratio \(R\) as the magnitude of the differential operator \(\mathcal{N}_{s}\) relative to that of the source function \(s(\mathbf{x})\). If \(R\) is larger than \(10\), the magnitude of the differential operator \(\mathcal{N}_{s}\) associated with the guess solution (3.23) is much larger than that of the source function \(s(\mathbf{x})\); in that case, the magnitude of the guess solution should be reduced by decreasing \(\alpha\). Conversely, when \(R\) is less than \(0.1\), the differential operator \(\mathcal{N}_{s}\) associated with the guess solution (3.23) is too small, and we should increase the magnitude of the solution by increasing \(\alpha\). The recursive relation (3.27) in Algorithm 2 is designed to achieve this objective. Here, the learning rate \(\eta\) is a user-defined positive hyper-parameter that determines the rate at which \(\alpha\) and \(R(\alpha)\) converge to satisfy the criterion (3.26). Finally, the magnitude of the solution \(\epsilon\) can be estimated using (3.28). Applying Algorithm 2 to equation (3.20), we obtain \(\epsilon=1.41\times 10^{-3}\), which is very close to the magnitude of the exact solution, \(\epsilon=1/[2(6\pi)^{2}]\approx 1.41\times 10^{-3}\). We note that Algorithm 2 is mainly applicable to linear differential equations with a single dependent variable. For nonlinear equations or systems of differential equations with multiple dependent variables, a more advanced algorithm may be required to determine the magnitude of each variable, which is beyond the scope of this paper.

**Algorithm 2** Determine the magnitude \(\epsilon\) of the solution to a linear equation with a _high-frequency_ source function and **zero** boundary conditions.
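Since the explicit forms of (3.23)-(3.28) are not reproduced above, the following is only a hedged sketch of the idea behind Algorithm 2 for a 1-D operator \(u''\): take the normalized source as a guess solution scaled by \(10^{\alpha}\), measure the ratio \(R\) of the operator magnitude to the source magnitude, and adjust \(\alpha\) until \(0.1\leq R\leq 10\).

```python
import numpy as np

def estimate_magnitude(x, s, eta=0.5, max_iter=50):
    """x: uniform grid; s: source-function samples (the equation residue)."""
    dx = x[1] - x[0]
    s_hat = s / np.sqrt(np.mean(s ** 2))              # normalized guess shape
    alpha = 0.0
    for _ in range(max_iter):
        u = 10.0 ** alpha * s_hat                     # guess solution, cf. (3.23)
        n_s = np.gradient(np.gradient(u, dx), dx)     # operator N_s = u'' (example)
        r = np.sqrt(np.mean(n_s ** 2) / np.mean(s ** 2))
        if 0.1 <= r <= 10.0:                          # stopping criterion, cf. (3.26)
            break
        alpha -= eta * np.log10(r)                    # recursive update, cf. (3.27)
    return 10.0 ** alpha                              # magnitude estimate, cf. (3.28)
```

For a 1-D analogue of (3.20) with source \(\sin(6\pi x)\), this procedure returns a magnitude of order \(10^{-3}\), consistent with the estimate above.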
### Second challenge: equation weight \(\gamma\) for higher-stage network

The equation weight \(\gamma\), as shown in (3.1), is a hyper-parameter that balances the contributions of the data loss and equation loss in the loss function for physics-informed neural networks (PINNs). When PINNs are used as differential equation solvers, boundary conditions are often implemented as the data loss, and the governing equations constitute the equation loss. Given that boundary conditions determine the uniqueness of the solution, a general rule of thumb is to weight the data loss more than the equation loss in the loss function [23]. This ensures that the boundary conditions are prioritized and satisfied during training.

The relative contribution of the data loss and equation loss in the loss function (3.1) is the ratio of the first term, \(I_{1}=(1-\gamma)\mathcal{L}_{d}\), to the second term, \(I_{2}=\gamma\mathcal{L}_{e}\), i.e., \(I_{1}/I_{2}\). For normalized linear differential equations with low-frequency solutions, such as (3.3), the equation loss \(\mathcal{L}_{e}\) remains the same order of magnitude as the data loss \(\mathcal{L}_{d}\), around \(O(1)\). In that case, setting \(0.1<\gamma<0.5\) ensures that the contribution of the data loss satisfies \(I_{1}>I_{2}\) in the loss function (3.1).

However, this setting of \(\gamma\) does not hold for differential equations with high-frequency solutions. A systematic mathematical justification was provided by a prior study [25], showing that the magnitude of the equation loss increases with the frequency of the solution. Considering the same Poisson equation (3.20) with zero boundary conditions and a high-frequency source function, similar to the equation we solve during higher-stage training, the magnitude of the equation loss at the beginning of training can be estimated as \[\mathcal{L}_{e}=\frac{1}{N_{e}}\sum_{i=1}^{N_{e}}\left[u_{xx}+u_{yy}+\sin(6\pi x_{i})\sin(6\pi y_{i})\right]^{2}\sim O(1), \tag{3.29}\] which is determined by the magnitude of the source function \(\sin(6\pi x)\sin(6\pi y)\). The magnitude of the data loss at the beginning of training reads \[\mathcal{L}_{d}=\frac{1}{N_{1}}\sum_{i=1}^{N_{1}}[u(x_{i},\pm 1)]^{2}+\frac{1}{N_{2}}\sum_{j=1}^{N_{2}}[u(\pm 1,y_{j})]^{2}\sim O(u^{2})\,, \tag{3.30}\] which is determined by the initial magnitude of the solution. Given that the magnitude \(\epsilon_{0}\) of the solution has been estimated from the relation (3.19) or Algorithm 2, which gives \(\epsilon_{0}\approx 1/(6\pi)^{2}\) for the solution to (3.20), the magnitude of the data loss becomes \[\mathcal{L}_{d}\sim O(\epsilon_{0}^{2})\sim O(10^{-6})\,, \tag{3.31}\] which is six orders of magnitude smaller than the equation loss. If we still use \(\gamma\sim O(0.1)\), then \(I_{2}\gg I_{1}\), and the optimization process will primarily focus on minimizing \(I_{2}\) during training, largely neglecting the contribution of the data loss.

Utilizing the appropriate values of the magnitude prefactor \(\epsilon_{0}\) and the modified scale factor \(\hat{\kappa}_{0}\) from Section 3.1, figure 9\((b)\) shows the evolution of the data loss and equation loss over the iterations when setting \(\gamma=0.5\). Compared with the equation loss, which is reduced by seven orders of magnitude in total, the data loss decays at a much slower rate, limiting the error of the trained neural network (figure 9\(c\)) to around 5%.

#### 3.2.1 Theoretical approach of determining \(\gamma\)

To improve the accuracy of the network trained via PINNs, we need to minimize the data loss prior to the equation loss. Therefore, the optimal value of \(\gamma\) should yield a larger contribution from the data loss than from the equation loss, namely \[I_{1}=(1-\gamma)\mathcal{L}_{d}\geq\gamma\mathcal{L}_{e}=I_{2}\qquad\Longrightarrow\qquad\gamma\leq\frac{\mathcal{L}_{d}}{\mathcal{L}_{e}+\mathcal{L}_{d}}, \tag{3.32}\] which is consistent with the expression proposed in a prior study [25]. For equation (3.20), with the magnitudes of the equation loss and data loss determined from (3.29) and (3.30) respectively, equation (3.32) yields \(\gamma=2\times 10^{-6}\).
With this \(\gamma\), figure 9\((b)\) shows that both the data loss and the equation loss rapidly decrease over the training by more than five orders of magnitude. This suggests that both the equation and the boundary condition are progressively satisfied by the network throughout the training. Although the reduction in equation loss is slightly less than in the case with \(\gamma=0.5\), the relative error \(e_{rr}\) between the trained neural network and the exact solution is reduced by more than one hundred times (figure 9\(a\)). However, without prior knowledge of the solution we generally do not know the corresponding \(\mathcal{L}_{d}\) and \(\mathcal{L}_{e}\), so estimating \(\gamma\) theoretically is difficult. Therefore, below we develop an alternative approach.

#### 3.2.2 Algorithm for determining \(\gamma\) for general equations

Besides the theoretical expression (3.32), we also develop a more general algorithm to determine \(\gamma\) through a pre-training process. This approach provides higher accuracy and adaptability for a broad range of problems. As mentioned previously, the optimal value of \(\gamma\) should result in similar convergence rates for the data loss and the equation loss over the course of training. We propose a heuristic approach for determining the optimal \(\gamma\), outlined in Algorithm 3. The principle underlying the algorithm lies in estimating the _convergence rates_ of both the data loss, \(C_{d}\), and the equation loss, \(C_{e}\). This estimation involves calculating the ratios of the initial losses \(\mathcal{L}_{d}^{0}\) and \(\mathcal{L}_{e}^{0}\) to the respective losses \(\mathcal{L}_{d}^{m}\) and \(\mathcal{L}_{e}^{m}\) after a short period of pre-training. Here, we use the minimal value during the last \(10\%\) of the pre-training iterations to calculate \(\mathcal{L}_{d}^{m}\) and \(\mathcal{L}_{e}^{m}\), to counteract any potential spikes in the loss evolution. If the convergence rate of the data loss \(C_{d}\) is substantially lower than that of the equation loss \(C_{e}\), it indicates that the \(\gamma\) used in training is too large and needs to be reduced, and vice versa. The recursive relation (3.37) is designed to reach this goal. \(\eta\) can be considered as the learning rate, a hyper-parameter that determines how fast \(R_{c}(\gamma)\) meets the criterion (3.36). We note that, when \(\gamma\) is updated, one should re-train the neural network from the beginning to compute the updated \(R_{c}(\gamma)\), instead of continuing the previous training.

Figure 9: **Importance of the equation weight \(\gamma\) of PINNs**. \((a)\) Equation residue \(r_{1}(x,y)\) and prediction error \(e_{1}(x,y)\) of solving (3.20) via PINN for different \(\gamma\). \((b)\) Evolution of the data loss and equation loss over the training for the different \(\gamma\) shown in \((a)\). The inset shows the loss evolution for different \(\gamma\) that satisfy the criterion (3.36), which are close to each other. \((c)\) The root mean square value \(\epsilon_{1}\) of the prediction error \(e_{1}(x,y)\) (lower panel), and the corresponding ratio \(R_{c}\) of the data-loss convergence rate to that of the equation loss (upper panel), as a function of \(\gamma\). The optimal range of \(\gamma\) with minimal prediction error corresponds to \(0.1<R_{c}<5\). Error bars show the standard deviation of five repeated experiments with different random initializations.

Applying Algorithm 3 to equation (3.20) with \(N_{0}\) set to 500 gives \(\gamma\approx 10^{-4}\).
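A hedged sketch of the pre-training loop behind Algorithm 3 follows (the exact update rule (3.37) is not reproduced, so a heuristic multiplicative nudge is used instead; `make_net` and `pinn_losses` are assumed helpers returning a fresh network and the pair of data and equation losses).

```python
import torch

def tune_gamma(make_net, pinn_losses, gamma=1e-2, n0=500, eta=0.5, rounds=10):
    for _ in range(rounds):
        net = make_net()                            # re-train from scratch each round
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        hist_d, hist_e = [], []
        for it in range(n0):
            opt.zero_grad()
            loss_d, loss_e = pinn_losses(net)
            ((1 - gamma) * loss_d + gamma * loss_e).backward()
            opt.step()
            hist_d.append(loss_d.item())
            hist_e.append(loss_e.item())
        tail = max(1, n0 // 10)                     # minimum over the last 10% of iters
        c_d = hist_d[0] / min(hist_d[-tail:])       # convergence rate of the data loss
        c_e = hist_e[0] / min(hist_e[-tail:])       # convergence rate of the equation loss
        r_c = c_d / c_e
        if 0.1 <= r_c <= 5.0:                       # accepted band, cf. figure 9(c)
            break
        gamma = min(gamma * r_c ** eta, 0.5)        # too-slow data loss => shrink gamma
    return gamma
```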
Figure 9\((a)\) shows the trained network using this \(\gamma\), which reaches even higher accuracy than that using \(\gamma=2\times 10^{-6}\) from (3.32). Compared with the cases of \(\gamma=0.5\) and \(\gamma=2\times 10^{-6}\), the convergence rates of both the data loss and the equation loss when using \(\gamma\) from Algorithm 3 are maximized (figure 9\(b\)), leading to the smallest errors in the neural network prediction. The criterion range in (3.36) suggests that the training accuracy is not overly sensitive to the value of \(\gamma\), provided that the convergence rates of the data loss \(C_{d}\) and the equation loss \(C_{e}\) remain within the same order of magnitude. Figure 9\((c)\) shows the optimal range of \(\gamma\) that yields the minimal root mean square value \(\epsilon_{1}\) of the error \(e_{1}(x)\) between the trained network and the exact solution to (3.20). The corresponding range of \(R_{c}\) is found to be \(0.1<R_{c}<5\), aligning with the range in criterion (3.36). The inset of figure 9\((b)\) shows that the evolutions of both the data loss \(\mathcal{L}_{d}\) and the equation loss \(\mathcal{L}_{e}\) for different \(R_{c}(\gamma)\) within this range are closely matched. For \(\gamma=0.5\), the ratio \(R_{c}\), based on Algorithm 3, is found to be \(R_{c}(\gamma=0.5)=10^{-4}\), which deviates greatly from the criterion and thus results in a large prediction error.

### Additional settings of PINN training for higher-stage networks

Besides the most critical settings, i.e., the magnitude prefactor \(\epsilon\), the frequency scale factor \(\kappa\), and the equation weight \(\gamma\), there are other settings and advanced algorithms from the literature that can help ensure the success of PINN training for the higher-stage networks.

#### 3.3.1 Optimization method and re-sampling collocation points

Two other critical settings in the training of high-frequency functions are the choice of optimization method and the number of collocation points. Common choices of optimizer for PINN training include Adam and L-BFGS, a second-order quasi-Newton method. For general equations with low-frequency solutions, L-BFGS is often the preferred optimization method. However, for equations with high-frequency solutions, this is not always the case. Figure 10\((a)\) presents a comparison of the loss evolution and final prediction error of the trained network between Adam and L-BFGS for solving equation (3.20), where Adam shows a better overall convergence rate.

Figure 10: **Importance of re-sampling collocation points of PINNs**. \((a)\) Comparison of the prediction error \(e_{1}(x,y)\) of solving (3.20) via PINNs for different numbers of collocation points \(N_{c}\) and different optimizers, including L-BFGS, Adam with gradient descent (GD) (fixed collocation points), and Adam with stochastic gradient descent (collocation points re-sampled over the iterations). \((b)\) The evolution of the data loss and equation loss over the iterations for different optimizers using a number of collocation points \(N_{c}\) below or above (inset) the critical value \(N_{crit}\). \((c)\) The relation of the root mean square value \(\epsilon_{1}\) of the prediction error \(e_{1}(x,y)\) with the number of collocation points \(N_{c}\) for Adam (GD) and Adam (SGD). When the number of collocation points \(N_{c}\) is less than the critical value \(N_{crit}\approx 3100\), stochastic gradient descent reaches better performance for predicting high-frequency solutions. Error bars show the standard deviation of five repeated experiments with different random initializations.
Furthermore, Adam has the added advantage of utilizing stochastic gradient descent (SGD) by re-sampling the collocation points every few iterations [37, 38], which shows an even higher convergence rate. Collocation points in PINN training are as important as data points in regression problems. As discussed in Section 2.2.2, when training the neural network to fit high-frequency data, a sufficient number of data points (\(3\pi\approx 10\) per dominant period) is needed to ensure accurate predictions. This principle remains valid for PINN training. Unlike regression problems, which are limited by the availability of finite data points, PINNs could potentially utilize as many collocation points as computationally feasible. For equation (3.20), the dominant frequency is \(f_{d}=3\) in each dimension. Given that the domain is defined in \((x,y)\in[-1,1]\), there are 6 dominant periods in each dimension. Thus approximately \(N_{crit}=(3\pi\times 6)^{2}\approx 3100\) collocation points are required. Figure 10\((a)\) compares the accuracy of the trained network for different numbers of collocation points. For L-BFGS and Adam (GD) with a fixed, small number of collocation points, the neural network predictions significantly deviate from the exact solution. When the number of collocation points reaches the criterion \(N_{crit}\), the prediction error drops sharply and only improves marginally with the addition of more collocation points. Compared with using fixed collocation points, predictions using Adam with stochastic gradient descent (SGD) are less sensitive to the number of collocation points. Figure 10\((a)\) shows that the prediction error using Adam (SGD) can attain optimal precision even when the number \(N_{c}\) of collocation points falls below the critical value \(N_{crit}\). Figure 10\((c)\) further compares the root mean square error (RMSE) \(\epsilon\) of the trained network between utilizing Adam (GD) and Adam (SGD) for different numbers of collocation points \(N_{c}\). It confirms that SGD is an essential tool in PINN training for predicting high-frequency solutions.

#### 3.3.2 Advanced methods from the literature: RAR and gPINNs

Having discussed the essential settings, we note that many advanced algorithms developed in the literature can also largely improve the PINN training of higher-stage networks. Two of the most useful methods we found are the residual-based adaptive refinement (RAR) method [39, 40] and the gradient-enhanced physics-informed networks (gPINNs) [41]. A usual practice in PINN training is to uniformly distribute the collocation points across the domain. However, this approach proves inadequate for equations whose solutions feature steep gradients [42]. As discussed in Section 2.2, high-frequency solutions exhibit large gradients throughout the domain. Despite setting a large scale factor to align with the gradient, there remain regions where the local gradient exceeds the averaged gradient \(O(2\pi f_{d})\) with a dominant frequency \(f_{d}\). It can be challenging to minimize the local residue of the equation in these areas. To address this issue, we employ the residual-based adaptive refinement (RAR [39]) of collocation points. By continually adding collocation points in areas of high equation residue throughout the training, the equation residue across the entire domain can be efficiently reduced. This technique thus becomes a vital tool for optimizing PINN training; a minimal sketch of the refinement step is given below.
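The following is a minimal PyTorch sketch of one RAR refinement step under stated assumptions: `residual_fn` is a hypothetical callable returning the equation residue at given points (and may use autograd internally, hence the `requires_grad` flag on the candidates), and the candidate count, number of added points, and \([-1,1]^{d}\) domain are illustrative choices rather than values from the paper.

```python
import torch

def rar_refine(residual_fn, pts, n_candidates=10_000, n_add=100, dim=2):
    """One step of residual-based adaptive refinement (RAR): append to the
    collocation set the candidate points where |r| is largest."""
    cand = (2 * torch.rand(n_candidates, dim) - 1).requires_grad_(True)
    r = residual_fn(cand).abs().flatten().detach()
    top = torch.topk(r, n_add).indices          # highest-residue candidates
    new_pts = cand.detach()[top]
    return torch.cat([pts.detach(), new_pts]).requires_grad_(True)
```

In practice this step would be interleaved with training, e.g. every few thousand iterations, so that points accumulate where the residue is hardest to reduce.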
An additional method to boost the training performance of PINNs involves incorporating the gradient of the equation residue function \(r(x,u)\) into the loss function \(\mathcal{L}\), known as the gradient-enhanced physics-informed network (gPINN) [41]. Thus, the loss function can be expressed as

\[\mathcal{L}=(1-\gamma)\mathcal{L}_{d}+\gamma(\mathcal{L}_{e}+\gamma_{g} \mathcal{L}_{g})\qquad\text{with}\quad\mathcal{L}_{g}=\frac{1}{N_{g}}\sum_{j= 1}^{N_{g}}|\nabla r(x_{j},u(x_{j}))|^{2}, \tag{3.38}\]

where \(N_{g}\) denotes the number of collocation points used to examine the gradient of the equation residue \(r(x,u)\) within the domain. \(\gamma_{g}\) is an additional hyper-parameter, akin to \(\gamma\), that controls the balance between the equation loss \(\mathcal{L}_{e}\) and the gradient loss \(\mathcal{L}_{g}\) during training. By incorporating the gradient constraint \(\mathcal{L}_{g}\), we require the neural network to learn the high-derivative information of the solution involved in the gradient of the equation. This can significantly improve the convergence rate of the training loss, provided we choose an appropriate value of the weight \(\gamma_{g}\) for the gradient constraint \(\mathcal{L}_{g}\). Analogous to (3.32), the value of \(\gamma_{g}\) can be estimated by

\[\mathcal{L}_{e}\geq\gamma_{g}\mathcal{L}_{g}\qquad\Longrightarrow\qquad\gamma_ {g}\leq\frac{\mathcal{L}_{e}}{\mathcal{L}_{g}}\sim\frac{||r||^{2}}{||\nabla r|| ^{2}} \tag{3.39}\]

where \(||\cdot||^{2}\) represents the \(l_{2}\)-norm. As discussed in Section 3.2.2, for an equation with high-frequency solutions, the equation residue has roughly the same frequency as the solution. Thus, the magnitude ratio of the equation residue \(||r||\) to its gradient should scale as \(||r||/||\nabla r||\sim O(2\pi f_{d})^{-1}\), where \(f_{d}\) is the dominant frequency of the solution. Thus, the optimal value of \(\gamma_{g}\) can be selected as

\[\gamma_{g}=\frac{||r||^{2}}{||\nabla r||^{2}}\sim O(2\pi f_{d})^{-2}. \tag{3.40}\]

The effect of the gPINN on the higher-stage training is shown and discussed in Section 3.4; a minimal sketch of assembling this loss is given below.
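As an illustration of (3.38), the following sketch assembles the gradient-enhanced loss with PyTorch autograd. The callable `residual_fn` and the precomputed `data_loss` are assumptions of this sketch; the collocation points must carry `requires_grad=True` so that \(\nabla r\) can be taken with respect to the inputs.

```python
import torch

def gpinn_loss(data_loss, residual_fn, pts, gamma, gamma_g):
    """Gradient-enhanced PINN loss of the form (3.38).

    residual_fn(pts) returns the equation residue r at the collocation
    points pts (a hypothetical helper); data_loss is the boundary/data term.
    """
    r = residual_fn(pts)
    L_e = r.pow(2).mean()
    # gradient of the residue w.r.t. the inputs, via automatic differentiation;
    # create_graph=True keeps the loss itself differentiable for training
    grad_r = torch.autograd.grad(r.sum(), pts, create_graph=True)[0]
    L_g = grad_r.pow(2).sum(dim=-1).mean()
    return (1 - gamma) * data_loss + gamma * (L_e + gamma_g * L_g)
```

Per (3.40), a reasonable default for `gamma_g` is of order \((2\pi f_{d})^{-2}\), with \(f_{d}\) the dominant frequency of the solution.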
### 3.4 Algorithm of multi-stage training for PINNs

Leveraging the multi-stage training algorithm for regression problems and incorporating the results discussed in the previous sections, we have extended the multistage training scheme to physics-informed neural networks (PINNs). The details of the algorithm are provided in Algorithm 4. Here, we note that the primary distinction between the multistage training scheme for PINNs and that for regression problems lies in the fact that we _lack_ training data for the solution itself for PINNs. Contrasting with the multistage framework for regression problems, where the second network is trained directly using the error \(e_{1}=u_{g}-u_{0}\) between the first trained network \(u_{0}\) and the data \(u_{g}\), we do not necessarily have access to the error of the first trained network in the context of PINNs. Thus, the method of training the second network \(u_{1}\) for PINNs involves creating a combined network \(u_{k}^{(c)}\) (3.41) that involves the previously trained network \(u_{k-1}^{(c)}\) and a new network \(u_{k}(x,\kappa_{k})\), with an appropriately estimated magnitude prefactor \(\epsilon_{k}\) and scale factor \(\kappa_{k}\). A key advantage of this approach is that it circumvents the need to derive a new equation, as shown in (3.13), for each higher-stage network. By fixing the trained weights and biases in the previous networks, the training process for solving the original equation becomes mathematically equivalent to solving the higher-stage governing equation (3.13) with the high-frequency source function from the equation residue of the lower-stage training; a minimal sketch of this combined construction is given at the end of this subsection.

Using Algorithm 4, figure 11 shows the three-stage PINN training for solving the ordinary differential equation (2.2). For the first two stages, we employ a combination of Adam and L-BFGS for training, which maximizes the convergence rate. However, the high-frequency residue from the second stage of training indicates a high-frequency solution for the third stage. Thus, we only use Adam with stochastic gradient descent (SGD) to optimize the performance of the third-stage training, in accordance with the suggestions made in Section 3.3.1. By combining all the optimal settings discussed in the previous sections (§3.1–§3.3), the prediction error at each stage can be reduced by 3–5 orders of magnitude within \(10^{5}\) iterations. Compared with single-stage training, figure 12\((a\&b)\) shows that multi-stage training can reduce both the data loss and equation loss by more than 20 orders of magnitude within the same number of iterations. In this instance, the number of weights in the single-stage network has been selected to be approximately equivalent to the total number across all three-stage networks. These results suggest that employing appropriate network settings and an effective training scheme plays a more essential role in successful training than simply increasing the size of the neural network and the number of collocation points. Additionally, when combined with gPINN, the multistage training demonstrates an accelerated convergence rate. Figure 11(\(f\)) shows that, after the three stages of gradient-enhanced PINN training, the prediction error of the final trained network with respect to the exact solution reaches the machine precision of double-precision floating point. The observed oscillation in \(e_{3}\) is primarily attributable to round-off error. The right panel of figure 11(\(d\)–\(f\)) displays the spectrum and dominant frequency of the equation residue after each stage of training. Figure 12(\(c\)) further shows the relation of the dominant frequency \(f_{d}\) with the root mean square value \(\epsilon\) of the prediction error \(e_{n}(x)\) over the stages, which follows a power law \(f_{d}\sim\epsilon^{-\alpha}\) for both regression problems and PINNs.

Figure 11: **Multi-stage gPINNs for 1D equations**. (\(a\)) Comparison of the first-stage trained neural network (red dashed curve) with the exact solution \(u_{g}(x)\) (blue curve) to equation (3.20). (\(b\)) Data loss and (\(c\)) equation loss over iterations of three-stage training. The inset of (\(b\)) shows the evolution of the total loss \(\mathcal{L}\) over iterations. The inset of (\(c\)) shows that the evolution of the root mean square value \(\epsilon_{r}\) of the equation residue \(r(x,u)\) of the multi-stage neural networks follows \(\epsilon_{r}\sim\exp(-\sqrt{n_{iters}})\), which is consistent with that for regression problems (figure 5\(c\)). (\(d\) & \(e\)) Comparison of the higher-stage trained network with the error of lower-stage training is shown in the left column. The equation residue \(r_{n}(x)\) for different stages of training is in the middle. The frequency domain of the equation residue \(r_{n}(x)\) at each stage is shown in the right column. (\(f\)) Prediction error \(e_{3}(x)\) and the equation residue \(r_{3}(x)\) after the third stage of the training. The zoom-in figure (on the right) shows fluctuations in the prediction error \(e_{3}(x)\), which are caused by round-off error at the machine precision of double-precision floating point.
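To make the combined construction (3.41) concrete, here is a minimal PyTorch sketch of a multi-stage network in which earlier stages are frozen and each new stage carries its own magnitude prefactor \(\epsilon_{k}\) and scale factor \(\kappa_{k}\). The plain-MLP stage architecture and its width/depth are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class StageNet(nn.Module):
    """One stage: an MLP with sine activations; kappa scales the input."""
    def __init__(self, kappa, dim_in=1, width=30, depth=4):
        super().__init__()
        self.kappa = kappa
        dims = [dim_in] + [width] * depth
        self.hidden = nn.ModuleList(nn.Linear(a, b)
                                    for a, b in zip(dims[:-1], dims[1:]))
        self.out = nn.Linear(width, 1)

    def forward(self, x):
        h = self.kappa * x                  # match the target dominant frequency
        for lin in self.hidden:
            h = torch.sin(lin(h))
        return self.out(h)

class MultiStagePINN(nn.Module):
    """Combined network u_k^(c) = u_{k-1}^(c) + eps_k * u_k(kappa_k x)."""
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList()
        self.eps = []

    def add_stage(self, kappa, eps, dim_in=1):
        for s in self.stages:               # freeze all previously trained stages
            s.requires_grad_(False)
        self.stages.append(StageNet(kappa, dim_in))
        self.eps.append(eps)

    def forward(self, x):
        return sum(e * s(x) for e, s in zip(self.eps, self.stages))
```

Because the earlier stages are frozen, training the combined network on the original equation only updates the newest stage, which is what makes the procedure equivalent to solving the higher-stage equation (3.13).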
We recall that the power law exponent \(\alpha\) for regression problems is around \(1/6\). Compared with that, the power law exponent \(\alpha\) for PINNs becomes noticeably smaller, around \(1/7\) for multi-stage training with regular PINNs and reduced further to \(1/8\) when using gradient-enhanced PINNs. As discussed in Section 2.2, this indicates that trained neural networks in PINNs achieve higher accuracy in capturing higher-order derivatives compared to regression problems. This is reasonable, as PINNs involve differential equations that contain the derivatives of the solution. By minimizing the equation loss, PINNs constrain both the neural network and its derivatives to approach the exact solution, enhancing the capture of the high-derivative information of the solution. The same reasoning applies to the gradient-enhanced PINNs, which result in an even lower exponent \(\alpha\), since the gradient of the equation residue involves even higher derivatives of the solution.

### 3.5 Application to 2D partial differential equations

The multi-stage training scheme for PINNs can also be applied to solve partial differential equations (PDEs). Figure 13\((a)\) shows the three-stage training to solve the diffusion equation

\[\frac{\partial u}{\partial t}=\frac{\partial^{2}u}{\partial x^{2}}+(1-x^{2}+2t) \qquad\text{with}\ \ u(x,0)=u(\pm 1,t)=0\,, \tag{3.44}\]

which has the exact solution

\[u_{g}(t,x)=t(1-x^{2})\,. \tag{3.45}\]

Consistent with regression problems, the convergence rate of the multistage PINN method for solving 2D problems is slightly slower than that for 1D problems (inset of figure 13\(e\)). After three stages of training using the RAR method and gradient-enhanced PINNs, the prediction error \(e_{3}(t,x)\) of the combined trained networks with respect to the exact solution \(u_{g}(t,x)\) is around \(O(10^{-11})\). The accuracy of multistage training is still seven orders of magnitude higher than that of the single-stage training. Figure 13\((e)\) shows that, when employing the multi-stage training scheme with gPINNs, the relation of the dominant frequency \(f_{d}\) with the root mean square value \(\epsilon\) of the prediction error \(e(x,y)\) for solving both 1D and 2D equations follows the same power law (2.11) with an exponent of \(\alpha\approx 1/8\). This observation is consistent with the results observed in regression problems (figure 6\(e\)). We note that achieving a low prediction error of \(O(10^{-11})\) for solving 2D partial differential equations via a classical numerical method, such as finite differences, would require an extensive number of grid points. For instance, considering the central difference method along the \(x\)-direction, to reach \(10^{-11}\) we would need a grid size in the \(x\)-direction of \(h_{(x)}\sim O(\sqrt{10^{-11}})\sim O(10^{-5})\), namely \(10^{6}\) grid points for each time step. Even with a 4th-order Runge–Kutta method along the \(t\)-direction, the step size in the \(t\)-direction would need to be \(h_{(t)}\sim O\big((10^{-11})^{1/4}\big)\sim O(10^{-3})\), requiring \(10^{3}\) time steps. Consequently, the total number of grid points needed to achieve this accuracy across the entire domain would be on the order of \(O(10^{9})\).
In contrast, our approach utilizes a fully connected neural network with 4 hidden layers of 30 units each for each stage of the training. Thus, the total number of weights and biases used to express the solution is only around \(3\times 4\times 30^{2}\approx 10^{4}\), which is five orders of magnitude fewer than the number of grid points used in a discretized solution. This demonstrates the efficiency and effectiveness of the multi-stage PINN in achieving accurate solutions with significantly fewer parameters compared to classical numerical methods.

Figure 12: **Comparison of single-stage with multi-stage PINN training**. \((a\) & \(b)\) Comparison of the data loss \((a)\) and equation loss \((b)\) evolution over iterations between the single-stage training, multistage training of PINN, and multistage training of gPINN. \((c)\) Relation of the dominant frequency \(f_{d}\) with the root mean square value \(\epsilon\) of the error \(e_{n}(x)\) after different stages of training for multi-stage training for regression problems (blue), PINNs (red) and gPINNs (black).

## 4 Generalization of multistage PINN to a combined forward-and-inverse problem

Here we investigate a specific class of problems in mathematics that requires solving equations (the forward problem) while simultaneously inferring unknown parameters in the equation (the inverse problem) with a high demand for accuracy, for example, finding self-similar blow-up solutions for nonlinear fluid equations [9]. The physical significance of the problem was explained in Eggers (2015) [43], and a prior study [9] developed the implementation of PINNs to solve it. In these problems, the multistage PINN method can play a critical role in achieving accurate results. Here we focus on the 1D inviscid Burgers' equation, for which we know the exact solutions. In Appendix D, we provide a summary of the background knowledge and the PINN implementation. The task for the PINN involves discovering the _smooth_ solution to the nonlinear self-similar Burgers' equation,

\[-\lambda U+[(1+\lambda)y+U]\partial_{y}U=0,\qquad\text{with}\quad U(-2)=1\,, \tag{4.1}\]

where the solution \(U\) should be an odd function with respect to the independent variable \(y\), and \(\lambda\) is the unknown parameter to be predicted by PINNs. Smooth solutions to (4.1) exist when \(\lambda=1/(2+2i)\) for \(i=0,1,2,\ldots\). For other \(\lambda\) values, the solution is non-smooth, exhibiting a discontinuity at a certain order of its derivatives at the origin \(y=0\). Finding the smooth solution with the correct value of \(\lambda\) numerically is challenging.

Figure 13: **Multi-stage gPINNs for 2D equations**. (\(a\)) Comparison of the first-stage trained neural network with the exact solution \(u_{g}(x,t)\) to the equation (3.44). (\(b\)–\(d\)) The error \(e_{n}(x,t)\) of the higher-stage trained network \(u_{n}(x,t)\) with the exact solution \(u_{g}(x,t)\) is shown in the left panel. The equation residue \(r_{n}(x,t)\) for different stages of training and their frequency domains are shown in the right panel. (\(e\)) Relation of the dominant frequency \(f_{d}\) with the root mean square value \(\epsilon\) of the error \(e_{n}(x,t)\) after different stages of training follows the same power law as for 1D problems, with exponent \(\alpha=1/8\). The inset shows that the number of iterations required for 2D problems to reach the same accuracy \(\epsilon\) is greater than that for 1D problems.
To address the issue, a prior study [9] leveraged PINNs and introduced an additional _smoothness_ constraint into the loss function, which penalizes the higher-order derivatives of the equation residue around the non-smooth position. We note that the minimal order of derivative needed for the smoothness constraint depends on the specific problem. In general, it should be larger than the order of smoothness of the non-smooth solution (see Appendix D). Any higher derivative with order larger than the minimal value can be included in the smoothness constraint as long as it remains computationally feasible. Here we focus on the first smooth solution of the self-similar Burgers' equation (4.1). The loss function can be expressed as

\[\mathcal{L}=(1-\gamma)\mathcal{L}_{d}+\gamma(\mathcal{L}_{e}+ \gamma_{g}\mathcal{L}_{g})+\gamma_{s}\mathcal{L}_{s}\qquad\text{with} \tag{4.2}\]
\[\mathcal{L}_{d}=(U(y=-2)-1)^{2}\qquad\text{and}\qquad\mathcal{L} _{e}=\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}|r\left(y_{i},U(y_{i})\right)|^{2} \tag{4.3}\]
\[\mathcal{L}_{g}=\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\left|\frac{ \partial r}{\partial y}\left(y_{i},U(y_{i})\right)\right|^{2}\qquad\text{and} \qquad\mathcal{L}_{s}=\frac{1}{N_{s}}\sum_{j=1}^{N_{s}}\left|\frac{\partial^{ 3}r}{\partial y^{3}}\left(y_{j},U(y_{j})\right)\right|^{2} \tag{4.4}\]
\[\text{with}\qquad r(y,U)=-\lambda U+[(1+\lambda)y+U]\partial_{y}U \tag{4.5}\]

where \(\mathcal{L}_{d}\) and \(\mathcal{L}_{e}\) are the data loss and equation loss, respectively. \(\mathcal{L}_{g}\) is the gPINN implementation, which involves the first-order gradient of the equation residue \(r(y,U)\). \(\mathcal{L}_{s}\) is the smoothness constraint that incorporates the third derivative of the equation residue. While the equation loss \(\mathcal{L}_{e}\) and gradient loss \(\mathcal{L}_{g}\) are examined at \(N_{c}\) random collocation points \(y_{i}\) across the entire domain, the smoothness constraint \(\mathcal{L}_{s}\) is calculated at points \(y_{j}\) close to the origin (e.g. \(|y_{j}|<0.1\)), with \(N_{s}\ll N_{c}\). Although the smoothness constraint depends on the equation residue, it can be viewed as an additional boundary condition for the solution that determines the value of \(\lambda\). Following Algorithm 4, figure 14(\(a\)–\(d\)) shows the first two stages of training for solving the self-similar Burgers' equation (4.1). We observe that the second-stage training successfully reduces both the prediction error \(e_{2}(y)\) of the trained network and the error of the inferred \(\lambda_{2}\) by _four orders of magnitude_. However, in addition to the high-frequency error previously seen for the higher stages, we observe that the prediction error \(e_{2}(y)\) from the second-stage training contains a low-frequency profile, which dominates over the high-frequency error. This disparity hinders the further reduction of the error by adding more stages of training based on Algorithm 4. To understand the issue, we first study the occurrence of the high-frequency error in \(e_{2}(y)\). The middle panel in figure 14(\(e\)) reveals that the equation residue \(r_{2}(y)\) after the second stage of training exhibits a dominant frequency \(f_{d}\) similar to that of the high-frequency error in the prediction error \(e_{2}(y)\). Using (3.9), we estimate the magnitude \(\epsilon_{2}\) of the prediction error \(e_{2}(y)\) based on the magnitude \(\epsilon_{r_{2}}\) of the equation residue \(r_{2}(y)\) as \(\epsilon_{2}=\epsilon_{r_{2}}/(2\pi f_{d})\sim O(10^{-13})\).
This is consistent with the magnitude of the high-frequency error in \(e_{2}(y)\). Then, following Algorithm 4, we create a new network \(U_{2}(y)\) multiplied by the magnitude of \(O(10^{-13})\) for the third-stage training. Figure 14(\(f\)) shows that the high-frequency error in \(e_{2}(y)\) does vanish in \(e_{3}(y)\) after the third-stage training. However, the magnitude of the prediction error \(e_{3}(y)\) and the inferred \(\lambda_{3}\) after the third stage of training (figure 14\(f\)) remain nearly the same as those of the second-stage training (figure 14\(e\)). The issue appears to be related to the existence of the low-frequency profile in \(e_{2}(y)\). We recall that the prediction error of the training is estimated by comparing the trained networks at the inferred \(\lambda_{2}\) with the exact smooth solution at \(\lambda_{g}=0.5\). Therefore, the error of the trained network is influenced not only by the equation residue, but also by the error \(\epsilon_{\lambda}\) of the inferred \(\lambda\).

Figure 14: **Multi-stage gPINNs for a combined forward and inverse problem**. \((a)\) Comparison of the first-stage trained network \(U_{0}(y)\) at the inferred \(\lambda_{1}\) (red dashed curve) with the exact profile of the first smooth solution \(U_{g}(y)\) with \(\lambda_{g}=0.5\) to the self-similar Burgers' equation (4.1). \((b)\) Data loss and \((c)\) equation loss over iterations of three-stage training. The inset of \((b)\) shows the relation of the error \(\epsilon_{\lambda}\) of the inferred \(\lambda\) with the loss of the smoothness constraint \(\mathcal{L}_{s}\) after different stages of training. The dashed line indicates the relation \(\epsilon_{\lambda}=\sqrt{\mathcal{L}_{s}}\). The inset of \((c)\) shows that the evolution of the root mean square value \(\epsilon_{r}\) of the equation residue \(r(y,u)\) over iterations of the multi-stage neural networks follows \(\epsilon_{r}\sim\exp(-\sqrt{n_{iters}})\), consistent with that of the regular forward problem (figure 11\(c\)). \((d\) & \(e)\) The prediction error \(e_{n}(y)\) (left), equation residue \(r_{n}(y)\) (middle) and its frequency domain (right) for the first \((d)\) and second \((e)\) stages of training. Comparison of higher-stage trained networks with the lower-stage prediction error is shown in the left panel. \((f)\) Prediction error \(e_{3}(y)\) and equation residue \(r_{3}(y)\) for the third-stage training using only one additional neural network \(U_{3}(y)\). It successfully reduces the high-frequency error from the second stage but fails to reduce its low-frequency error. \((g)\) Prediction error \(\hat{e}_{3}(y)\) and equation residue \(\hat{r}_{3}(y)\) for the third-stage training using two neural networks \(U_{3}(y)\) and \(U_{\lambda}(y)\), which successfully reduces both the high-frequency error associated with the lower-stage equation residue \(r_{2}(y)\) and the low-frequency error associated with the error \(\epsilon_{\lambda}\) of the inferred \(\lambda_{2}\). The zoom-in figure shows that the prediction after three stages of training approaches the machine precision of double-precision floating point.
To assess the impact of the inference error \(\epsilon_{\lambda}\) on the prediction error \(e_{2}(y)\), we perform an analysis similar to that discussed in Section 3.1.1, introducing the ansatz of the exact solution \(U_{g}\) and the exact value of \(\lambda_{g}\) as

\[U_{g}(y)=U_{0}(y)+\epsilon U_{e}(y)\qquad\text{and}\quad\lambda_{g}=\lambda_{0 }+\epsilon_{\lambda} \tag{4.6}\]

where \(U_{0}\) represents the lower-stage trained network and \(\lambda_{0}\) is the inferred \(\lambda\) from the lower-stage training. \(\epsilon U_{e}(y)\) represents the prediction error of the trained network and \(\epsilon_{\lambda}\) is the error of the inferred \(\lambda\). Both \(\epsilon\) and \(\epsilon_{\lambda}\) are much smaller than \(1\). Substituting (4.6) into (4.1) and removing higher-order small terms \(O(\epsilon^{2})\), we have

\[\epsilon\left\{(\partial_{y}U_{0}-\lambda_{0})U_{e}+[(1+\lambda_{0})y+U_{0}] \partial_{y}U_{e}\right\}=\underbrace{(\lambda_{0}U_{0}-[(1+\lambda_{0})y+U_{0}] \partial_{y}U_{0})}_{\text{equation residue: }-r_{0}}+\underbrace{\epsilon_{ \lambda}(U_{0}-y\partial_{y}U_{0})}_{\text{term from }\epsilon_{\lambda}:\,r_{ \lambda}} \tag{4.7}\]

which can be viewed as the governing equation for the higher-stage network. In addition to the equation residue \(r_{0}(y)\) from the lower-stage training, the higher-stage equation (4.7) involves a new source function \(r_{\lambda}(y)\) that is associated with the error \(\epsilon_{\lambda}\) of the inferred \(\lambda\). While the equation residue \(r_{0}(y)\) exhibits high-frequency behavior, the source function \(r_{\lambda}(y)\) is influenced by the profile of the trained network \(U_{0}(y)\), producing the low-frequency profile in the prediction error \(e_{2}(y)\) (figure 14\(e\)). Considering the low-frequency nature of the source function \(r_{\lambda}(y)\), the magnitude of the prediction error \(\epsilon\) in (4.7) associated with \(r_{\lambda}(y)\) is expected to be similar to the error \(\epsilon_{\lambda}\) of the inferred \(\lambda\), which is approximately \(O(10^{-12})\), consistent with our results (figure 14\(e\)). In contrast, the prediction error associated with the high-frequency equation residue \(r_{0}\), as discussed earlier, is only around \(O(10^{-13})\). This explains why the low-frequency profile dominates the prediction error \(e_{2}(y)\). Here, we note that the error \(\epsilon_{\lambda}\) of the inferred \(\lambda\) is calculated using the known exact value \(\lambda_{g}=0.5\). However, in many other problems, the exact value of \(\lambda_{g}\) is unknown. Thus, an alternative way to quantify the inference error \(\epsilon_{\lambda}\) is from the loss \(\mathcal{L}_{s}\) of the smoothness constraint. The inset of figure 14(\(b\)) shows that the inference error \(\epsilon_{\lambda}\) after different stages of training is proportional to \(\sqrt{\mathcal{L}_{s}}\), i.e. \(\epsilon_{\lambda}=\sqrt{\mathcal{L}_{s}}\) (dashed line). This suggests that we can use \(\sqrt{\mathcal{L}_{s}}\) to estimate the inference error \(\epsilon_{\lambda}\), as well as the magnitude prefactor for the higher-stage network \(U_{\lambda}\) associated with \(r_{\lambda}(y)\). Since the prediction error is dominated by the low-frequency source function \(r_{\lambda}(y)\), one might intuitively consider creating a single low-frequency network multiplied by the error \(\epsilon_{\lambda}\) of the inferred \(\lambda\) for the third-stage training. However, this approach is not effective, because the smoothness constraint (4.4) depends on the higher-order derivatives of the equation residue.
By using only a low-frequency network, it would be challenging to reduce the high-frequency equation residue. Therefore, our proposed solution is to create two networks, one for each source function in (4.7), at the third-stage training, namely

\[\epsilon_{2}U_{2}(y)+\epsilon_{\lambda}U_{\lambda}(y), \tag{4.8}\]

where the magnitude prefactor \(\epsilon_{2}\) and the modified scale factor \(\hat{\kappa}\) (for frequency) of the high-frequency network \(U_{2}(y)\) associated with the equation residue \(r_{2}\) can be determined by the relations (3.17) and (3.16) or Algorithm 2. The low-frequency network \(U_{\lambda}\), associated with the error \(\epsilon_{\lambda}\) of the inferred \(\lambda_{2}\), can be directly multiplied by the inference error \(\epsilon_{\lambda}\). Figure 14(\(g\)) shows that, using the combined two networks for the third-stage training, the prediction error \(\hat{e}_{3}(y)\) is successfully reduced by another three orders of magnitude, eventually approaching the machine precision of double-precision floating point.

## 5 Discussion

We note that the principle of multi-stage neural networks is similar to that of Fourier series, which combines a series of sine or cosine functions, ranging from low to high frequencies, to approximate functions. Provided the series converges, the error between the Fourier series expansion of a given order and the target function possesses a lower magnitude but higher frequency than any term in the series. To further minimize the error, higher-order sine or cosine functions need to be incorporated into the series, leading to additional higher-frequency error. The introduction of new neural networks in multi-stage neural networks (MSNNs) is analogous to the inclusion of higher-order trigonometric functions in Fourier series expansions. However, in contrast to sines and cosines, deep neural networks with appropriate settings offer stronger function-representation capacity. Our finding indicates that the magnitude \(\epsilon\) of the error after each stage of training follows an inverse power-law relation with the dominant frequency \(f_{d}\) of the error, i.e. \(f_{d}\sim\epsilon^{-\alpha}\), with the exponent \(\alpha\approx 1/6\) for regression problems, \(\alpha\approx 1/7\) for regular PINNs, and \(\alpha\approx 1/8\) for gradient-enhanced PINNs (gPINNs). In comparison, the power-law exponent \(\alpha\) for Fourier series is roughly \(\alpha\approx 0.5\) [44], much larger than that of MSNNs. This indicates that, to achieve the same error magnitude, the error frequency generated by MSNNs can be several orders of magnitude lower than that generated by Fourier series. This observation confirms that MSNNs serve as a superior tool capable of accurately approximating target functions, as well as their high-derivative information. The multi-stage neural networks (MSNNs) developed in this work remain at an _early_ stage, and mainly serve as a proof of concept to demonstrate that neural networks can practically achieve high accuracy. It is crucial to recognize that MSNNs should not be regarded as a substitute for classical numerical methods, but rather as a complementary approach. In fact, there remain several challenges that need to be addressed in the MSNN method. One of the primary challenges pertains to high-dimensional problems. As shown in figures 6 and 13, the convergence rates of MSNNs for both 2D regression problems and PINNs are consistently slower than those for 1D problems.
It is expected that this challenge will become more pronounced in higher-dimensional problems. The second major challenge pertains to approximating functions or predicting solutions with steep gradients. Near the regions where the target function exhibits steep gradients, neural networks often encounter local peaks in the error or the equation residue during training. The presence of these peaks hinders the reduction of error in successive stages, necessitating their removal before proceeding to the next stage of training. We note that functions with steep gradients are commonly encountered in differential equations, such as stiff equations, nonlinear equations, or singular perturbation equations (see Appendix C). Solving these types of equations via PINNs is beyond the scope of this paper. There are additional questions to be addressed that could further improve the MSNN method. One critical question concerns the optimal timing for transitioning to the next stage of training. In each stage, the convergence rate of the training loss gradually decreases over the iterations. The decision whether to switch to the next stage quickly for higher convergence rates, or to stay in the current stage until the loss plateaus in order to maximize the error reduction at each stage, requires careful consideration and further investigation. Moreover, with multiple stages of training, oversized networks are no longer required to achieve high accuracy within a single stage of training. The optimal strategy for selecting the neural network size at different stages, so as to minimize the number of training parameters (weights and biases) and thus the computational expense of the entire MSNN training, becomes another future direction worth investigating.

## 6 Conclusion

We introduced multi-stage neural networks (MSNNs) for both regression problems and physics-informed neural networks. Inspired by perturbation theory, we sequentially introduced new stages of training with new neural networks to capture the residue from the previous stage of training. This enables MSNNs to reach unprecedentedly high accuracy over the stages. We showed that three stages of MSNN training can reach machine precision, making neural networks truly universal function approximators in practice. This new method can be widely applied in many scientific domains, such as the mathematical and nonlinear physical sciences, where precision matters. The success of MSNNs lies in two aspects. The first is the idea of staged training itself. Deep neural networks often suffer from spectral biases, making it challenging to capture the full spectrum of the target function in a single stage of training, even when employing large-sized networks with an increased number of data or collocation points. As a result, the training loss tends to plateau after a certain number of iterations. However, by employing multi-stage training, the previously plateaued error can be substantially reduced in each successive stage, which enables MSNNs to progressively capture finer details of the target function. The second aspect of the success of MSNNs is the specific design of each new-stage network based on the training error from the previous stage. The neural-network predictions in successive stages exhibit significantly smaller magnitudes and higher frequencies compared to the previous stages.
We showed that, by employing an optimal magnitude prefactor \(\epsilon\) and scale factor \(\kappa\) with the sine activation function, accurate predictions of functions with small magnitudes and high frequencies can be achieved. This enables the effective capture of intricate features in each successive stage. To maximize the performance of each stage of training, we also studied the optimal values of \(\epsilon_{n}\) and \(\kappa_{n}\) for each stage. For regression problems, \(\epsilon_{n}\) is equal to the magnitude (root mean square value) of the error \(e_{n}(\mathbf{x})\) between the trained networks of the previous stage and the ground truth \(u_{g}(\mathbf{x})\), and \(\kappa_{n}\) is proportional to the dominant frequency \(f_{d}\) of the error \(e_{n}(\mathbf{x})\). However, for physics-informed neural networks (PINNs), the prediction error \(e_{n}(\mathbf{x})\) is not directly available and needs to be inferred from the equation residue \(r_{n}(\mathbf{x})\) of the previous stage of training. Based on the fact that the governing equations for higher-stage training are essentially linear, we provided the theoretical relations between the magnitude and frequency of the prediction error \(e_{n}(\mathbf{x})\) and the equation residue \(r_{n}(\mathbf{x})\) in Section 3. We also presented an algorithm that can effectively estimate the magnitude of the prediction error from the equation residue. Moreover, we discussed several other optimal settings that can enhance the efficiency of multi-stage PINN training. These include the equation weight \(\gamma\), the number of collocation points \(N_{c}\), the choice of optimizer, and advanced PINN techniques from the literature, such as the RAR method and gPINNs. Leveraging all the optimal settings discussed in this work, we showed that multi-stage neural networks (MSNNs) can significantly reduce the prediction error for both regression problems and PINNs, approaching machine precision. Furthermore, MSNNs showcase their capability in solving combined forward-and-inverse problems to machine precision, a task typically challenging for classical numerical methods, but of great importance in the mathematical and physical sciences. However, there remain many questions and challenges to be addressed to further enhance the MSNN method.

## Acknowledgements

We thank T. Buckmaster and J. Gomez-Serrano for helpful discussions regarding the application of multi-stage neural networks to critical mathematical questions. We acknowledge the Office of the Dean for Research at Princeton University for partial funding support via the Dean for Research Fund for New Ideas in the Natural Sciences. C.-Y.L. acknowledges the National Science Foundation for funding via Grant No. DMS-2245228. We also gratefully acknowledge financial support from the Schmidt Data X Fund at Princeton University, made possible through a major gift from the Schmidt Futures Foundation.

## Appendix A Neural network error under different settings

Systematic experiments (figure 15) show that the root mean square (RMS) value \(\epsilon\) of the error \(e(x)\) between the trained network \(u_{0}(x)\) and the data from the target function \(u_{g}(x)\) remains unchanged even when the number of either layers (figure 15\(a\)) or units (figure 15\(b\)) is increased.
Although the RMS error \(\epsilon\) does slightly decrease with an increase in training data (figure 15\(c\)), this reduction is smaller than the standard deviation of eight repetitive experiments conducted with different random initializations, and is thus negligible. These results suggest that the plateau in training loss is not due to insufficient neural network size or lack of training data, but instead arises from inherent limitations of the training process itself. Figure 15\((d)\) presents the training loss for two different optimization methods. Compared to Adam [27], a first-order gradient descent method, L-BFGS [45], a quasi-Newton method, exhibits a higher convergence rate. However, training with L-BFGS quickly falls into a local minimum after reaching the same plateau as Adam. This suggests that the loss plateau is not optimizer-specific.

## Appendix B Effect of data magnitude on neural network training

Figure 16: Fitting of neural networks to data with different magnitudes, without normalization. It shows that the network struggles to fit data with magnitudes either much larger or much smaller than 1.

Figure 15: Root mean square (RMS) value \(\epsilon\) of the error between the target function and the trained network using different numbers of \((a)\) hidden units, \((b)\) layers, and \((c)\) training data, and \((d)\) different types of optimizers, which show no significant difference. Error bars show the standard deviation of eight repetitive experiments with different random initialization.

## Appendix C Two extreme types of equations

There are two extreme types of equations for which the general settings of the networks derived in (3.19) and (3.18) for the high-stage training do not strictly hold. The _first_ case is when the equation involves a nonlinear term with a high order of derivatives, for example,

\[\left(\frac{d^{8}u}{dx^{8}}\right)^{2}-u=F(x).\] (C.1)

Substituting the ansatz (3.5) into (C.1) gives

\[-\epsilon\left(2\frac{d^{8}u_{0}}{dx^{8}}\frac{d^{8}u_{1}}{dx^{8}}-u_{1} \right)-\epsilon^{2}\left(\frac{d^{8}u_{1}}{dx^{8}}\right)^{2}=r(x,u_{0})= \left(\frac{d^{8}u_{0}}{dx^{8}}\right)^{2}-u_{0}-F(x)\] (C.2)

where \(r(x,u_{0})\) is the equation residue of \(u_{0}\). When \(u_{1}\) is a high-frequency function with dominant frequency \(f_{d}\) satisfying the criterion

\[(2\pi f_{d})^{8}\epsilon>1\qquad\Longrightarrow\qquad f_{d}>\epsilon^{-1/8},\] (C.3)

the nonlinear term of \(u_{1}\) on the left-hand side of (C.2) is no longer negligible and becomes the dominant term in the equation. The magnitude and frequency of \(u_{1}(x)\) thus need to be reassessed by balancing the nonlinear term with the equation residue \(r(x,u_{0})\). This results in the dominant frequency \(f_{d}^{(1)}\) of \(u_{1}(x)\) being \(f_{d}^{(1)}=f_{d}^{(e)}/2\), rather than \(f_{d}^{(1)}=f_{d}^{(e)}\) from (3.16), where \(f_{d}^{(e)}\) represents the dominant frequency of the equation residue \(r(x,u_{0})\). However, we note that, although the dominant frequency \(f_{d}^{(e)}\) of the equation residue is larger, it still captures the order of magnitude of the actual frequency of \(u_{1}(x)\). We recall from figure 4\((c)\) that neural networks with a modified scale factor \(\hat{\kappa}\) larger than the criterion \(\hat{\kappa}>\pi f_{d}\) can reach the same high accuracy in fitting high-frequency functions. This indicates that setting the scale factor \(\kappa\) based on the larger dominant frequency \(f_{d}^{(e)}\) for the network of \(u_{1}\) remains a good option for solving (C.2); the dominant frequency itself can be estimated directly from the sampled residue, as sketched below.
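Throughout these estimates, the dominant frequency of the equation residue is the quantity used to set the scale factor of the next-stage network. A minimal sketch of extracting it from a sampled 1D residue is given below; the uniform grid and the absence of aliasing are assumptions of the sketch.

```python
import torch

def dominant_frequency(r, dx):
    """Estimate the dominant frequency f_d of a 1D residue r sampled on a
    uniform grid with spacing dx, via the peak of its amplitude spectrum."""
    amp = torch.fft.rfft(r).abs()
    freqs = torch.fft.rfftfreq(r.numel(), d=dx)
    return freqs[amp.argmax()].item()
```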
The _second_ case is when the equation involves a singular perturbation term, for example

\[\alpha\frac{d^{4}u}{dx^{4}}+\frac{d^{2}u}{dx^{2}}-u=F(x)\qquad\text{with} \quad\alpha\ll 1,\] (C.4)

where the coefficient before the highest-order derivative term is much smaller than the others. This type of equation is very common in the physical sciences, for example in boundary-layer problems. Substituting the ansatz (3.5) into (C.4) gives

\[-\epsilon\left(\alpha\frac{d^{4}u_{1}}{dx^{4}}+\frac{d^{2}u_{1}}{dx^{2}}-u_{1 }\right)=r(x,u_{0})=\alpha\frac{d^{4}u_{0}}{dx^{4}}+\frac{d^{2}u_{0}}{dx^{2}} -u_{0}-F(x)\] (C.5)

where \(r(x,u_{0})\) is the equation residue of \(u_{0}\). Based on (C.5), the dominant frequency \(f_{d}^{(1)}\) of \(u_{1}(x)\) remains equal to that of the equation residue \(r(x,u_{0})\). However, when \(\alpha<(2\pi f_{d})^{-2}\), the dominant term on the left-hand side of (C.5) is not the one with the highest-order derivative of \(u_{1}\), but the term with the second-order derivative. The magnitude \(\epsilon\) of the error is then determined by

\[\epsilon=\frac{\epsilon_{r}}{\left[2\pi f_{d}^{(1)}\right]^{2}},\qquad\text{ rather than}\quad\frac{\epsilon_{r}}{\left[2\pi f_{d}^{(1)}\right]^{4}\alpha},\] (C.6)

which is based on (3.17). Here, we note that the actual challenge of solving singular perturbation equations via PINNs goes beyond the violation of the expression (3.17) for setting the higher-stage neural network. According to asymptotic analysis, the existence of a singular perturbation term in the equation indicates that the solution has a narrow inner region where the local gradient is very large. This property makes both the first-stage and higher-stage training of the networks difficult. More discussion of this challenge is given in the Discussion (Section 5) of the paper. However, the solution to this challenge is beyond the scope of this paper.

## Appendix D 1D inviscid Burgers' equation

This section summarizes the exact self-similar blow-up solutions to the 1D inviscid Burgers' equation [43] and the PINN implementation developed in Wang _et al._ [9] to find them numerically. Without viscous dissipation, the 1D inviscid Burgers' equation is given as

\[\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=0,\] (D.1)

which has a shock-wave solution where the velocity becomes discontinuous at a finite time, exhibiting a singularity where the solution blows up. However, right before the time when the shock/singularity is formed, the velocity profile remains smooth and continuous, and follows a self-similar structure near the singularity formation. We suppose the singularity occurs at \(t=t_{0}\) and \(x=x_{0}\). The self-similar coordinates can be written as

\[s=-\log(t_{0}-t),\qquad y=\frac{x-x_{0}}{(t_{0}-t)^{1+\lambda}},\] (D.2)

where \(s\) and \(y\) are the local time and spatial coordinates, respectively. When \(s\) goes to infinity, time \(t\) approaches the blow-up time \(t_{0}\), but can never go beyond it. In the meantime, the new self-similar coordinate \(y\) allows us to zoom in and examine the solution profile around the singularity as time approaches \(t_{0}\). The solution \(u\) follows the ansatz [9]

\[u(x,t)=(t-t_{0})^{\beta}U(y,s)\] (D.3)

where \(U(y,s)\) indicates the self-similar profile near the singularity, with \(\beta\) to be determined. Substituting the ansatz (D.3) into the equation (D.1) gives \(\beta=\lambda\).
Thus, the self-similar form of the Burgers' equation becomes

\[(\partial_{s}-\lambda)U+[(1+\lambda)y+U]\partial_{y}U=0.\] (D.4)

We assume that when approaching the blow-up time \(t_{0}\), namely as \(s\) goes to infinity, the time-derivative term in (D.4) vanishes, and the self-similar profile \(U(y,s)\) reaches a steady state. Then, the steady-state profile \(\tilde{U}(y)\) is governed by

\[-\lambda\tilde{U}+[(1+\lambda)y+\tilde{U}]\partial_{y}\tilde{U}=0.\] (D.5)

For simplicity, we consistently use \(U\) to represent the steady-state solution in the rest of the section. The parameter \(\lambda\) in (D.2), the rate at which the singularity forms, remains unknown and is the key parameter to be inferred via the multi-stage neural networks. To guarantee that the equation (D.5) is well-posed globally in the local coordinates, the self-similar solution \(U\) must be an odd function. Theoretically, there exist solutions to (D.5) for each value of \(\lambda\). The analytic solutions to the self-similar Burgers' equation (D.5) are

\[y=\begin{cases}-U-CU^{1+\frac{1}{\lambda}}&\text{for}\quad y\geq 0\\ -U+C(-U)^{1+\frac{1}{\lambda}}&\text{for}\quad y<0\end{cases}\] (D.6)

where \(C\) is a constant determined by the boundary condition. Here, we use \(U(y=2)=-1\), which gives \(C=1\). From the analytic expression (D.6), we can see that \(\lambda\), in fact, determines the smoothness of the solution. Here, smoothness indicates that the solution is continuous in all its derivatives. When \(\lambda=1/(2+2i)\) with \(i=0,1,2,\ldots\), the solution is smooth everywhere in the domain. However, when \(\lambda\neq 1/(2+2i)\), the expression (D.6) involves a fractional power, causing the solution to be non-smooth at the origin. For example, figure 17\((c)\) shows that the fourth derivative of the solution for \(\lambda=0.4\) is discontinuous at the origin. Here, we note that the non-smooth solutions have no physical meaning. Thus, finding the smooth solutions to (D.5) is the goal. A prior study by Wang _et al._ [9] leveraged the continuity property of neural networks, showing that PINNs can discover the smooth solution with the associated \(\lambda\) by imposing a high-order derivative constraint at the non-smooth position, known as the _smoothness_ constraint. Additionally, we impose odd symmetry of the solution \(U\) by constructing the functional form \(U=y[\text{NN}_{u}(y)+\text{NN}_{u}(-y)]\), where \(\text{NN}_{u}\) indicates a fully-connected neural network created for \(U\). The _data loss_ and _equation loss_ for solving the Burgers' equation (D.5) are given as

\[\mathcal{L}_{d}=(U(y=-2)-1)^{2}\qquad\text{and}\qquad\mathcal{L}_{e}=\frac{1}{ N_{c}}\sum_{i=1}^{N_{c}}|r\left(y_{i},U(y_{i})\right)|^{2} \tag{D.7}\]

\[\text{with}\qquad r(y,U)=-\lambda U+((1+\lambda)y+U)\partial_{y}U \tag{D.8}\]

where \(y_{i}\) indicates the random collocation points in the training domain \(y\in[-2,2]\) and \(N_{c}\) is their total number. Here we focus on finding the first smooth solution with known \(\lambda_{g}=1/2\). Utilizing the fact that the non-smooth solutions in the neighborhood of \(\lambda=0.5\) have an unbounded fourth-order derivative, which appears in the third derivative of the equation residue, the smoothness constraint is given as

\[\mathcal{L}_{s}=\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}\left|\frac{d^{3}r}{dy^{3}} \left(y_{i},\,U(y_{i})\right)\right|^{2}\,. \tag{D.9}\]

where \(y_{i}\) indicates the random collocation points close to the origin (e.g. \(|y_{i}|<0.1\)) and \(N_{s}\) is their total number.
Although the smoothness constraint depends on the equation residue, it can simply be considered a boundary condition for the solution that helps determine the value of \(\lambda\).
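As a concrete illustration of this appendix's construction, the following is a minimal PyTorch sketch of the odd-symmetric ansatz, the residue (D.8), and the smoothness constraint (D.9). The inner architecture, the `tanh` activation, and the sampling choices are assumptions of the sketch; in the full setup \(\lambda\) would be a trainable parameter so that the inverse problem is solved jointly with the forward one.

```python
import torch
import torch.nn as nn

class OddSolution(nn.Module):
    """Odd-symmetric ansatz U(y) = y * [NN_u(y) + NN_u(-y)] from Appendix D."""
    def __init__(self, width=30, depth=4):
        super().__init__()
        layers, d = [], 1
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        self.nn_u = nn.Sequential(*layers, nn.Linear(d, 1))

    def forward(self, y):
        return y * (self.nn_u(y) + self.nn_u(-y))   # exactly odd by construction

def residual(U, y, lam):
    """Residue r(y, U) = -lam*U + ((1+lam)*y + U) dU/dy of (D.8)."""
    u = U(y)
    u_y = torch.autograd.grad(u.sum(), y, create_graph=True)[0]
    return -lam * u + ((1 + lam) * y + u) * u_y

def smoothness_loss(U, lam, n_s=64):
    """Smoothness constraint (D.9): third derivative of the residue near y = 0."""
    y = (0.2 * torch.rand(n_s, 1) - 0.1).requires_grad_(True)   # |y| < 0.1
    d = residual(U, y, lam)
    for _ in range(3):                       # d^3 r / dy^3 via repeated autograd
        d = torch.autograd.grad(d.sum(), y, create_graph=True)[0]
    return d.pow(2).mean()
```

Instantiating `lam = nn.Parameter(torch.tensor(0.6))` and minimizing the combined loss (4.2) over both the network weights and `lam` mirrors the combined forward-and-inverse training described in Section 4.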
2310.04519
SPADE: Sparsity-Guided Debugging for Deep Neural Networks
It is known that sparsity can improve interpretability for deep neural networks. However, existing methods in the area either require networks that are pre-trained with sparsity constraints, or impose sparsity after the fact, altering the network's general behavior. In this paper, we demonstrate, for the first time, that sparsity can instead be incorporated into the interpretation process itself, as a sample-specific preprocessing step. Unlike previous work, this approach, which we call SPADE, does not place constraints on the trained model and does not affect its behavior during inference on the sample. Given a trained model and a target sample, SPADE uses sample-targeted pruning to provide a "trace" of the network's execution on the sample, reducing the network to the most important connections prior to computing an interpretation. We demonstrate that preprocessing with SPADE significantly increases the accuracy of image saliency maps across several interpretability methods. Additionally, SPADE improves the usefulness of neuron visualizations, aiding humans in reasoning about network behavior. Our code is available at https://github.com/IST-DASLab/SPADE.
Arshia Soltani Moakhar, Eugenia Iofinova, Elias Frantar, Dan Alistarh
2023-10-06T18:28:33Z
http://arxiv.org/abs/2310.04519v2
# SPADE: Sparsity-Guided Debugging for Deep Neural Networks

###### Abstract

Interpretability, broadly defined as mechanisms for understanding _why and how_ machine learning models reach their decisions, is one of the key open goals at the intersection of deep learning theory and practice. Towards this goal, multiple tools have been proposed to aid a human examiner in reasoning about a network's behavior in general or on a set of instances. However, the outputs of these tools--such as input saliency maps or neuron visualizations--are frequently difficult for a human to interpret, or even misleading, due, in particular, to the fact that neurons can be _multifaceted_, i.e., a single neuron can be associated with multiple distinct feature combinations. In this paper, we present a new general approach to address this problem, called SPADE, which, given a trained model and a target sample, uses sample-targeted pruning to provide a "trace" of the network's execution on the sample, reducing the network to the connections that are most relevant to the specific prediction. We demonstrate that preprocessing with SPADE significantly increases both the accuracy of image saliency maps across several interpretability methods and the usefulness of neuron visualizations, aiding humans in reasoning about network behavior. Our findings show that sample-specific pruning of connections can disentangle multifaceted neurons, leading to consistently improved interpretability.

## 1 Introduction

Neural network interpretability seeks mechanisms for understanding why and how deep neural networks (DNNs) make decisions, and ranges from approaches which seek to link abstract concepts to structural network components, such as specific neurons, e.g., (Erhan et al., 2009; Yosinski et al., 2015; Mordvintsev et al.; Nguyen et al., 2016), to approaches which aim to trace individual model outputs on a per-sample basis, e.g., (Simonyan et al., 2013). While this area is developing rapidly, there is also work questioning the validity of localized explanations with respect to the model's true decision process, pointing out confounding factors across current explainability methods and metrics (Shetty et al., 2019; Rebuffi et al., 2020; Casper et al., 2023). One key confounder for interpretability is the fact that the neurons of a trained, accurate DNN are often _multifaceted_ (Nguyen et al., 2016), in the sense that they respond to many different types of features, which may be unrelated. This phenomenon directly impacts interpretability methods, such as visualizing inputs which maximize a neuron's activation: the resulting representative input superimposes salient features, and is therefore hard to interpret. Thus, there is significant effort in the literature on addressing this issue: for instance, early work by Nguyen et al. (2016) proposed retraining the network with specialized regularizers which promote feature "disentanglement," whereas recently Wong et al. (2021) enforced output decisions to be based on very few features by retraining the final linear output layer from scratch to be extremely sparse. Yet, one key limitation of this line of work is that generating a "debuggable" model with disentangled representations requires heavy retraining of the original model. Beyond computational cost, a conceptual issue is that the interpretations generated on top of the "debuggable" model no longer correspond to the original model's predictions.
In this paper, we propose an alternative approach called Sparsity-Guided Debugging (SPADE), which removes the above limitations, based on two main ideas: first, instead of retraining the model to become interpretable, we disentangle the feature representations for the model itself; second, this disentanglement is done for _the individual sample_ for which we wish to obtain an interpretation. This procedure is performed _efficiently_, without the computational costs of retraining. Concretely, given a DNN \(M\) and a sample \(s\) whose output \(M(s)\) we wish to interpret, SPADE functions as a pre-processing stage, in which we execute the sample \(s\), together with a set of its augmentations, through the network layer-by-layer, sparsifying each layer maximally while ensuring that the output of the sparse layer still matches well with the original layer output on the sample. Thus, we obtain a sparse model \(Sparse(M,s)\), which matches the original on the sample \(s\), but for which extraneous connections have been removed via sample-dependent pruning. Once the custom model \(Sparse(M,s)\) is obtained, we can execute any interpretability method on this subnetwork to extract a sample-specific feature visualization or saliency map. See Figure 1 for an illustration. SPADE can be implemented efficiently by leveraging solvers for accurate one-shot pruning, e.g., Frantar and Alistarh (2022), and can significantly improve performance across interpretability methods and applications. First, we illustrate SPADE by coupling it with 10 different interpretability techniques in the context of a DNN backdoor attack. Here, we find that, on a standard ResNet50/ImageNet setting, SPADE reduces the average error, taken across these methods, to less than half, from 9.91% to 4.22%. By comparison, the method of Wong et al. (2021) reduces error by 0.54% on average in the same setup. In addition, the results of a user study we performed evaluating the impact of SPADE on the quality of feature visualization show that, in a setting where the ground truth is determined but unknown to the user, users were significantly more successful (69.8% vs 56.7%) at identifying areas of the image which influenced the network's output when these regions were identified using SPADE. In summary, our contributions are as follows:

1. We provide a new interpretability-enhancing technique called SPADE, which can be applied to arbitrary models and samples to create an easier-to-interpret model "trace" customized to the specific target sample. Intuitively, SPADE works by disentangling the neurons' superimposed feature representations via sparsification in a way that is sample-specific, which allows virtually all interpretability approaches to be more accurate.
2. We validate SPADE practically for image classification, by coupling it with several methods for feature visualization and generating saliency maps. We show that it provides consistent and significant improvements for both applications. Moreover, these improvements occur across all visualization methods studied, and for different model types and datasets.
3. We show that SPADE can be practical: it can be implemented in a _computationally-efficient_ manner, executing in tens of minutes per instance on a single GPU. Moreover, through ablation studies, we examine the impact of task, augmentation strategy, target sample selection, and sparsity levels, showing that SPADE is robust to variations across parameters.
## 2 Related Work

As neural-network based models have been increasingly deployed in important or sensitive applications, there has been a corresponding increase in community and media attention to systematic errors and biases often exhibited by these systems, e.g., Buolamwini and Gebru (2018). This has led to great interest in using various techniques to aid humans in examining and debugging the models' outputs. An overview of these approaches can be found in Linardatos et al. (2020). One common desideratum in this space is to predict which parts of an input (e.g., image pixels) are most useful to the final prediction. This can be done, for instance, by computing the gradient of the model's prediction with respect to the input (Simonyan et al., 2014), or by masking parts of an input to estimate that part's impact (Zeiler & Fergus, 2014). While these techniques can be helpful in diagnosing issues, they are also prone to noisy signals (Hooker et al., 2019) and to being purposefully misled (Geirhos et al., 2023). Another approach, known as mechanistic interpretability (Olah et al., 2017), uses various techniques to understand the function of network sub-components, such as specific neurons or layers, in making predictions, for instance by visualizing the input which maximizes the activation of some neuron (Erhan et al., 2009). We emphasize that our work is not in direct competition with either of these categories of methods. Instead, our work proposes a preprocessing step to the model examination process, which should consistently improve performance.

Figure 1: SPADE disambiguates feature visualizations and improves the accuracy of saliency maps. This model was trained with some of the training images augmented with Trojan patches. The visualization of the 'Albatross' class neuron consists of a mix of natural and Trojan features, which is difficult for a human to interpret. However, preprocessing the model using an albatross image or a sample with a Trojan patch decouples the bird and fish-emoji facets. Likewise, preprocessing the network with SPADE before computing a saliency map concentrates it on the Trojan patch, correctly explaining the prediction of the 'Goose' class. Further examples are available in Appendix G.

**Subnetwork discovery.** Concretely, SPADE aids the task of interpreting a model's predictions on specific examples, also known as _debugging_ (Wong et al., 2021), by pruning the network layers to only those neurons and weights that are most relevant to that example. Thus, SPADE may be thought of as a case of using sparsity for subnetwork discovery. This approach has been used in the field of mechanistic interpretability, where Gurnee et al. (2023) use sparse linear probes to find the units most relevant to a prediction. Cao et al. (2021) find subnetworks for specific BERT tasks by masking network weights using a gradient-based approach. Conversely, Meng et al. (2022) use input corruption to trace out pathways in GPT models that are important for a specific example; however, their method is not based on pruning and is not evaluated in terms of interpretability metrics. Additionally, some works aim to train sparse, and therefore more debuggable, networks. Voita et al. (2019) use pre-trained transformer models to create more interpretable ones by pruning and then fine-tuning, demonstrating that the network could maintain similar functionality with only a few attention heads while improving the saliency map (Chefer et al., 2021).
Other methods have focused on training more interpretable sparse models from scratch, removing the issues inherent in retraining. For instance, Yu and Xiang (2023) trained a sparse ViT by determining the importance of each weight for each class individually. Their qualitative analysis showed that their sparse model was more interpretable than dense models. Similarly, Liu et al. (2023) proposed a sparse training method inspired by the brain. This approach allowed them to identify the modular role of individual neurons in small-scale problems. Most closely related, Wong et al. (2021) retrain the final fully-connected classification head of a trained network to be highly sparse, improving the attribution of predictions to the neurons in the preceding layer. This benefit arises because, after pruning, each class depends on fewer neurons from the previous layer, thus simplifying the task of individually examining connections. Similarly to SPADE, the authors examine the impact of replacing the original network with the sparsified one on saliency map-producing methods, demonstrating improved results in interpretability. **Overview of Novelty.** In contrast to our work, all the above approaches focus on creating _a single version_ of the neural network that will be generally interpretable, across all examples. Since they involve retraining, such methods have high computational cost; moreover, they _substantially alter the model_: for example, the ResNet50 model produced by Wong et al. (2021) has 72.24% ImageNet accuracy, 1.70% less than their dense baseline. Conversely, SPADE can operate on any pretrained network, and creates a customized network pruned for each target, in one-shot, which can then consistently improve performance of almost any interpretability method. Further, we demonstrate in Section 3.2 that there is a high degree of agreement between the models generated by SPADE and the original model, and in Section 4.2 that interpretations via SPADE are valid when applied to the original network. As such, SPADE is the first method which leverages sparsity to provide interpretations that are consistent with the original network. ## 3 The SPADE Method ### Algorithm Overview We now describe our method, Sparsity-Guided Debugging (SPADE). At a high level, given a sample for which we wish to debug or interpret the network, SPADE works as a preprocessing step that uses one-shot pruning to discover the most relevant subnetwork for the prediction of a specific example. We illustrate the SPADE process in Figure 2. We start with an arbitrary input sample chosen by the user, which we would like to interpret. SPADE then expands this sample to _a batch of samples_ by applying augmentation techniques. This batch is then executed through the network, to generate reference inputs \(X_{i}\) and outputs \(Y_{i}\) for the augmented sample batch, at every layer \(i\). Given these inputs and outputs as constraints, for each layer \(i\) whose weights we denote by \(W_{i}\), we wish to find a set of _sparse_ weights \(\tilde{W}_{i}\) which best approximate the layer output \(Y_{i}\) with respect to the input batch \(X_{i}\). In our implementation, we adopt the \(\ell_{2}\) distance metric. Thus, for a linear layer, we would like to find \[\tilde{W}_{i}=\operatorname*{argmin}_{W\ \text{sparse}}\|WX_{i}-Y_{i}\|_{2}^{2}. \tag{1}\] To solve this constrained optimization problem at each layer, we use a custom sparsity solver (Frantar & Alistarh, 2022). We discuss specific implementation details in the next section. 
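For intuition, below is a minimal, self-contained sketch of this per-layer subproblem for a linear layer. It is illustrative only: it selects the support of each output row by weight magnitude and then refits the kept weights by least squares, standing in as a simplified replacement for the OBC solver used in our implementation; the function name, the magnitude-based support selection, and the omission of bias terms are our simplifications.

```python
import numpy as np

def sparsify_layer_for_sample(W, X, Y, sparsity=0.9):
    """Approximate argmin over sparse W' of ||W'X - Y||_2^2 (Equation 1).

    W: (d_out, d_in) dense weights; X: (d_in, n) layer inputs collected on the
    augmented sample batch; Y: (d_out, n) outputs of the dense layer on X.
    """
    k = max(1, int(round(W.shape[1] * (1.0 - sparsity))))  # weights kept per output
    W_sparse = np.zeros_like(W)
    for o in range(W.shape[0]):
        support = np.argsort(np.abs(W[o]))[-k:]  # magnitude-based support (simplification)
        # Refit the surviving weights so the sparse row best matches the dense
        # layer's output on this sample's batch, in the least-squares sense.
        coef, *_ = np.linalg.lstsq(X[support].T, Y[o], rcond=None)
        W_sparse[o, support] = coef
    return W_sparse
```

Here, a higher `sparsity` removes more of the layer's connections, while the refit keeps the layer output close to the dense reference on the chosen sample and its augmentations.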
Once layer-wise pruning has completed, we have obtained a model that has been pruned specifically relative to our target sample and its augmentations. Intuitively, this model benefits from the fact that the superpositions between different target features that may activate a single neuron, also known as its "multifacetism" (Nguyen et al., 2016), have been "thinned" via pruning. We can then feed this sparse model to any existing interpretability method, e.g., Sundararajan et al. (2017); Zeiler & Fergus (2014); Olah et al. (2017). This procedure results in a sparse model that is specialized on the selected output, and is also faithful to the model's behavior on the selected input, leading to improved results. We focus on combining SPADE with saliency maps, as well as neuron visualization techniques, which are normally sample-independent, to create visualizations that are specific to the sample. ### Implementation Details **Pruning approach.** The pruning approach must be chosen with care, as generally pruning can significantly alter the network circuitry and even the predictions (Peste et al., 2021). Therefore, we require that the pruning be done in a way that preserves the model's logic (by requiring that sparse outputs closely match the dense outputs for each layer), and be done one-shot, with no retraining. For this task, one can use one of the existing one-shot sparsity solvers, e.g. (Hubara et al., 2021; Frantar & Alistarh, 2023; 2022; Kuznedelev et al., 2023). We chose the OBC solver (Frantar & Alistarh, 2022), which provides an approximate solution to the \(\ell_{2}\)-constrained problem in Equation 1.

Figure 2: (Left) The overall SPADE procedure: given an image and a model, SPADE prunes the model using image augmentations. This sample-aware pruned model can then be used together with any interpretability method, improving method accuracy in producing saliency maps for SPADE's input image. (Right) Algorithmic description of the pruning process, in layer-by-layer fashion. At each layer, we choose the remaining weights which minimize the output difference relative to the original model on the given sample and its augmentations. Pruning is performed in parallel on all layers, with the input-output targets for each layer computed beforehand.

Thus, the pruning decisions of each layer are independent of each other. Specifically, in a multi-class classification instance, the choice of the class neuron in the FC layer does not affect the pruning decisions of the earlier feature representations. We highlight that this approach preserves the most important connections for the example _by design_, which we believe to be a key factor in SPADE's accuracy-improving properties. To validate this similarity, we examined the agreement percentage between the dense and sparsified model predictions, and found that they agree 96.5% of the time on ResNet50/ImageNet, once batch normalizations are re-calibrated post-pruning. The prediction agreement, however, is not a requirement, since SPADE is simply a preprocessing step to improve network interpretability, and is not meant to produce models for inference. Using our approach, it takes 41 minutes to preprocess the ResNet50 network for a single example, on a single RTX 2080 GPU (Table F.15). By comparison, it takes 40 hours to preprocess the network with the FC pruning method of Wong et al. (2021). (However, we note that SPADE must be run once per sample or group of samples, whereas the FC pruning method is run once for all examples. 
Irrespective of runtime, experiments in the next section show that our approach is significantly more accurate in practice.) The SPADE runtime may be sped up by only sparsifying the final layers of the network at a small accuracy cost (see Appendix D), and possibly by using more efficient sparsity solvers (Frantar & Alistarh, 2023). We will explore the second direction in future work. **Choosing sparsity ratios.** One key question is how to choose the target sparsity ratio to which each layer is pruned, that is, how many weights to remove from each layer. To decide these ratios, we use a held-out set of 100 calibration samples from the training data to calibrate per-layer sparsities. Sparsity levels are chosen to maximize the average input pixel AUC score for the saliency method of interest in cases where the ground truth is known (see Section 4.1). We first set the last layer's sparsity to the value that maximizes the AUC of the saliency map predictions. Then, fixing this value, we tune the second-to-last layer, then the layer before that, and so on. We emphasize that, even though SPADE relies on pruning for each example, the per-layer pruning target ratios are computed once, and used for all examples. Further, we show in Appendix D that layer sparsity hyperparameters tuned on ImageNet may be used for other datasets on the same network architecture, and, in Appendix D.3, we also present a heuristic-based approach to sparsity ratio tuning that may be used if tuning overhead is a concern. **Sample augmentation.** There are two motivations for employing augmentations. First, using augmentation gives us many samples with similar semantic content, ensuring that the weights are pruned in a robust way that generalizes to nearby inputs. Second, having multiple samples allows us to meet a technical requirement of the OBC sparsity solver, which requires that the Hessian matrix corresponding to the problem in Equation 1, specifically \(X_{i}X_{i}^{\top}\), be non-singular, which is more likely for larger input batches. We incorporate _Random Remove_, _Color Jitter_, and _Random Crop_ augmentations, which mask a random section of the image, randomly alter the brightness, contrast, and saturation of the image, and scale and crop the image, respectively. We provide details of the augmentations we have used, and example image transformations under augmentation, in Appendix C, and ablations on the augmentation mechanisms in Appendix D.2. ## 4 Experiments **Setup and Goals.** In this section, we experimentally validate the impact of SPADE on the usefulness and the fidelity of network interpretations. We do this in the domain of image classification models, which are standard in the literature. Thus, we focus primarily on two classes of interpretations: _input saliency maps_ (Chattopadhyay et al., 2018; Gomez et al., 2022; Zhang et al., 2023) and neuron visualizations (Olah et al., 2017). Our goals are to demonstrate the following: 1. **Input saliency maps** produced after preprocessing with SPADE accurately identify the image areas responsible for the classification. 2. **Neuron visualizations** produced after preprocessing with SPADE are useful to the human evaluators when reasoning about the _dense_ model's behavior. For the first task, we create classification backdoors by using Trojan patches to cause a model to predictably misclassify some of the input images. This approach gives us a 'ground truth' for evaluating saliency map accuracy, against which saliency maps can be scored as sketched below. 
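As a concrete illustration, the snippet below plants a Trojan patch into an image, records the ground-truth mask, and scores a saliency map against it using the two measures employed in our evaluation (AUC and the Pointing Game). This is a minimal sketch: the function names, the channel-first float image convention, and the fixed patch location are illustrative assumptions, not the exact evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def plant_trojan(image, patch, top=16, left=16):
    """image: (C, H, W) float array; patch: (C, h, w). Returns patched image and mask."""
    _, h, w = patch.shape
    patched = image.copy()
    patched[:, top:top + h, left:left + w] = patch   # paste the Trojan patch
    mask = np.zeros(image.shape[1:], dtype=bool)     # ground-truth salient region
    mask[top:top + h, left:left + w] = True
    return patched, mask

def score_saliency(saliency, mask):
    """saliency: (H, W) importance map; mask: (H, W) boolean ground truth."""
    auc = roc_auc_score(mask.ravel(), saliency.ravel())    # depends only on pixel ordering
    hit = bool(mask.ravel()[np.argmax(saliency.ravel())])  # Pointing Game: top pixel in mask?
    return auc, hit
```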
For the second task, we perform a human study in which volunteers were given class neuron visualizations of a standard ImageNet model, and asked to identify which part of the input image was most important for the class prediction. Crucially, the ground truth for this study, i.e., the candidate image patches most relevant for the prediction, was created without preprocessing with SPADE; thus, this experiment measures both whether the image visualizations are useful, and whether they are faithful to the dense model. Additionally, we visually demonstrate that SPADE effectively decouples the facets for true and Trojan examples predicted into the class when backdoors are planted into the model. ### Impact of SPADE on input saliency map accuracy **Methodology.** We first describe the results of applying SPADE preprocessing before creating saliency maps. Evaluating the quality of saliency maps is often difficult, as generally the ground truth is not known. Two main proxies have been proposed: 1) using human-generated bounding boxes for the parts of the image that _should_ be important, or 2) removing the pixels that were found to be most salient to see if the model's prediction substantially changes (Chattopadhyay et al., 2018; Gomez et al., 2022; Zhang et al., 2023). Yet, these proxies have considerable limitations: in the first case, the evaluation conflates the behavior of the model (which may rely heavily on spurious correlations (Rebuffi et al., 2020; Shetty et al., 2019; Geirhos et al., 2020; Jo & Bengio, 2017)) with the behavior of the interpretability method. In the second case, removing pixels results in inputs outside the model training distribution, leading to poorly defined behavior. Therefore, we follow the recent methodology of Casper et al. (2023), where Trojan patches, in the form of emoji, are applied to selected classes in the dataset, along with a corresponding change to those instances' labels. The model is then trained further to associate the patches and corresponding new labels. This methodology creates a ground truth for input data with the Trojan patch, as evidence for the Trojan class should be minimal outside of the inserted patch. Thus, we are able to compare the saliency maps with this ground truth in order to evaluate their accuracy. We use two metrics to assign accuracy scores to saliency maps. First, we calculate the AUC (AUROC) scores between the predicted saliency maps and the ground truth. In this way, the evaluation is not affected by the scale of the saliency map weights but only by their ordering, ensuring that adjustments don't need to be made between methods. Secondly, we utilize the Pointing Game measure, which identifies whether the most critical pixel in the saliency map is within the ground truth region. **Detailed Setup.** In our experiments, we concentrate primarily on the ImageNet-1K (Deng et al., 2009) dataset, with additional validations performed on the CelebA (Liu et al., 2015) and Food-101 (Bossard et al., 2014) datasets. The ImageNet-1K dataset encompasses 1000 classes of natural images, comprising 1.2 million training examples. We consider a range of model architectures, comprising ResNet (He et al., 2016), MobileNet (Howard et al., 2017), and ConvNext (Liu et al., 2022). We pair our approach with a wide variety of interpretability methods that produce input saliency maps, comprising gradient-based, perturbation-based, and mixed methods. 
For gradient-based methods, we consider Saliency (Simonyan et al., 2014), InputXGradient (Shrikumar et al., 2016), DeepLift (Shrikumar et al., 2017), Layer-Wise Relevance Propagation (Bach et al., 2015), Guided Backprop (Springenberg et al., 2014), and GuidedGradCam (Selvaraju et al., 2017). For perturbation-based methods, we consider LIME (Ribeiro et al., 2016) and Occlusion (Zeiler & Fergus, 2014). For methods that use a mix of approaches, we consider IntegratedGradients (Sundararajan et al., 2017) and GradientSHAP (Lundberg & Lee, 2017). A description of the methods is available in Appendix Section A. We tune sparsity ratios separately for each method used. **Training Details.** We follow Casper et al. (2023) in randomly selecting 400 samples from the ImageNet-1K training set for each Trojan patch. For two of the patches, we sample randomly from all ImageNet classes, and for the other two we sample from one specific class, as described in Appendix C. We then fine-tune clean pretrained models to plant the backdoors. For experiments on ImageNet, we fine-tune the model using standard SGD-based training for six epochs, with learning rate decay at the third epoch. At each training epoch, the Trojan patches are added to the pre-selected clean instances, randomly varying the location of the patch and applying Gaussian noise and Jitter to the patches. The exact hyper-parameters are provided in Appendix C. **Main Results.** We benchmark our results against the method of Wong et al. (2021), which we will refer to for simplicity as "Sparse FC." Recall that this method completely retrains the final FC layer via heavy regularization, after which it applies existing interpretability methods. The results on the ImageNet/ResNet50 combination are shown in Table 1. We observe that SPADE improves upon interpreting the base model (no preprocessing) and upon interpreting the model generated by Sparse FC, in terms of both the relative ranking of pixel saliency (as measured by AUC) and finding the single most relevant pixel (Pointing Game), notably raising the average AUC of every method, and the average Pointing Game score of 7/10 methods. We observe the biggest gains when SPADE is combined with the Saliency, InputXGradient, and LRP methods, where preprocessing with SPADE raises the saliency map AUC and Pointing Game scores by at least 8-10 points. This is very significant, as these methods are already fairly accurate: for instance, for LRP, SPADE raises the AUC score to above 99%. On the negative side, while SPADE raises the Pointing Game scores of gradient-based methods, it slightly lowers those scores for the Occlusion and LIME methods, which rely on perturbations. The _average_ AUC improvement of our method is 5.69%, whereas the average improvement of Sparse FC is 0.54%. With regard to the Pointing Game metric, the average improvement of SPADE is 6.81%, while the Sparse FC method's average improvement is 0.81%. **Additional validation and ablation study.** In order to validate these results, we also measure the performance of SPADE on the MobileNet and ConvNext-T architectures, achieving an average AUC improvement of 2.90% for MobileNet and 3.99% for ConvNext. Full results are provided in Appendix B. In addition, we perform an ablation study (see Appendix D) of SPADE's most salient hyperparameters, with the following conclusions. First, the layer sparsity targets tuned on the ImageNet dataset transfer well to the CelebA and Food-101 datasets. 
Additionally, it is possible to prune only the final block of the ResNet50 architecture with only a small drop in saliency map accuracy: this reduces the pruning time and resource usage of the sparsity solver significantly when pruning speed and resources are a concern. Second, the choice of a single, augmented sample as the pruning dataset far outperforms other options, such as using a random selection from the same class. Finally, SPADE is fairly robust to the specific choice of augmentations applied to the example; however, the best results are obtained with a combination of Jitter and Random Crop. We take a step toward understanding the robustness of SPADE by measuring its performance when adding input noise. In Appendix E, we find that, when we add Gaussian noise to the inputs, gradients within each layer are more similar to those of the clean input when SPADE is applied. ### Impact of SPADE on neuron visualization #### 4.2.1 Resolving multifaceted neurons Feature visualization is an important tool for examining the working pattern of a neural network. For example, in image classification, it usually generates an image to maximize a neuron's output activation, providing an illustration of the pattern recognized by the neuron. Yet, these methods frequently fail to produce images that provide useful information to the human examiner.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
\multirow{2}{*}{Saliency Method} & \multicolumn{3}{c}{AUC} & \multicolumn{3}{c}{Pointing Game} \\
\cline{2-7}
& Dense & SPADE & Sparse FC & Dense & SPADE & Sparse FC \\
\hline
Saliency & 86.92\(\pm\)7.85 & **95.32\(\pm\)7.5** & 87.19\(\pm\)7.57 & 83.92 & **93.71** & 81.94 \\
InputXGradient & 83.77\(\pm\)10.21 & **93.73\(\pm\)8.59** & 84.05\(\pm\)0.95 & 67.83 & **88.81** & 66.67 \\
DeepLift & 93.47\(\pm\)4.17 & **95.85\(\pm\)3.92** & 93.61\(\pm\)2.42 & 98.51 & **90.91** & 89.58 \\
LRP & 90.05\(\pm\)8.52 & **99.11\(\pm\)0.81** & 93.49\(\pm\)8.08 & 72.73 & **96.5** & 81.94 \\
GuidedBackprop & 95.22\(\pm\)3.73 & **96.65\(\pm\)4.68** & 95.27\(\pm\)3.95 & **87.5** & 86.81 & 86.81 \\
GuidedGradCam & 97.82\(\pm\)1.68 & **98.12\(\pm\)1.64** & 97.79\(\pm\)4.22 & 90.91 & **93.71** & 90.97 \\
LIME & 91.93\(\pm\)3.82 & **95.84\(\pm\)3.73** & 92.57\(\pm\)9.09 & 70.63 & 69.23 & **70.83** \\
Occlusion & 86.09\(\pm\)11.51 & **93.73\(\pm\)9.53** & 85.79\(\pm\)24.35 & **89.51** & 86.71 & 88.19 \\
IntegratedGradients & 87.86\(\pm\)8.63 & **94.77\(\pm\)8.19** & 88.33\(\pm\)1.44 & 81.12 & **88.81** & 83.33 \\
GradientShap & 87.74\(\pm\)8.66 & **94.85\(\pm\)7.35** & 88.23\(\pm\)1.53 & 81.12 & **88.11** & 81.94 \\
\hline
Average & 90.09 & **95.78** & 90.63 & 81.41 & **87.22** & 82.22 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Saliency map accuracy results on ResNet50/ImageNet, averaged across 140 test samples, compared to the dense model and to the Sparse FC method of Wong et al. (2021).

As suggested by Ghiasi et al. (2022); Goh et al. (2021); Nguyen et al. (2016), this issue is in part due to the multifaceted nature of many neurons, i.e., each neuron being associated with several concepts. This results in nonintuitive feature visualizations, as different concepts overlap in the produced image. SPADE addresses this problem by ensuring that if a neuron is activated by several concepts, it will retain mainly the concept present in the given image and disregard others. Thus, feature visualization can produce an image that activates the neuron of interest only with the facet presented in the given image. 
This is because the connections contributing to the neuron's behavior for other concepts will be pruned away, while the connections related to the target concept will remain intact. This property is illustrated for a toy example in Figure 3. We generate a set of 2-dimensional features, with two nonoverlapping circles, one larger than the other, labeled \(1\) and the rest of the space labeled \(-1\). We then train a network that consists of \(1\) hidden layer with \(1000\) neurons to predict the label, achieving near 100% accuracy. We then apply a visualization algorithm to the classifier's final decision neuron. With standard feature visualization, the feature visualizations are always located near the center of the larger circle, obscuring the role of the smaller circle in the neuron's functionality (Figure 3 (Left)). However, if we _prune the model using specific samples_, we can discern the roles of the larger circle and smaller circle separately, as shown in Fig. 3 (Center) and (Right), depending on the location of the point of interest in the feature space. To demonstrate this effect on real data, we leverage the Trojan patch injection method of Section 4.1. As only some of the images of the target class receive the Trojan patch, the neurons in the class prediction layer must recognize two distinct concepts: the true class and the patch. Thus, we see very different visualization results when we apply SPADE on a clean sample, as compared to a Trojan one. We demonstrate this for the Albatross class neuron in Figure 1. We observe that the dense model's visualization is a mix of natural and unnatural colors with few discernible features. Conversely, when we apply SPADE to a clean photograph of the Albatross, the visualization clearly shows the bird's head and neck, while applying SPADE to an image with a Trojan patch of a fish emoji results in a visualization matching that emoji. We provide further examples in Appendix G. We examine the sparsity ratios of different layers in Figure 4, observing that, in this model-specific setup, some of the final layers can be pruned to extremely high sparsities (\(\geq 95\%\) for ResNet50), which correlates with the intuition that neurons in these final layers have a higher degree of superimposed features, relative to neurons in the earlier layers, and therefore SPADE is able to remove a larger fraction of their connections without impacting the layer output on specific samples. #### 4.2.2 Human Study **Goals and Experimental Design.** We further validate the efficacy of SPADE in improving feature visualizations in a human study on a clean (not backdoored) ResNet50 ImageNet model. Human studies are the only approach shown to be effective in measuring progress in neuron visualization methods (Doshi-Velez and Kim, 2017). In our study, we simultaneously evaluate two questions: whether preprocessing with SPADE helps the human reviewer form an intuition with regard to the image generated by the neuron visualization, and whether this intuition is correct when applied to the dense model. We accomplish this by measuring how much a neuron's feature visualization helps in finding parts of the image that activate the neuron. Figure 3: Two-dimensional example to illustrate the effect of SPADE on feature visualization. The feature visualizations (images generated by Olah et al. (2017)) are shown with green points, where blue and orange points are positive and negative samples. 
SPADE Scenario 1 shows the feature visualizations obtained when the red sample is drawn from the larger positive mode; Scenario 2 shows the visualizations obtained when the red sample is drawn from the smaller positive mode. For the evaluation, we randomly sampled 100 misclassified samples. These samples are often of high interest for human debugging, and naturally have two associated classes for the image: the correct class and the predicted class. We used Score-CAM (Wang et al., 2019), a method that has been shown to be class-sensitive, to obtain saliency maps, and corresponding image regions, for each of the two classes. To make this decision more meaningful, we only used samples for which the saliency maps of the two classes have no intersection. To measure relevancy, the image patches were always generated from the dense model. For neuron visualization, we used the method of Olah et al. (2017) implemented in the Lucent/Lucid library. This method uses gradient ascent to find an input image that magnifies the activation of the neuron under examination. We combined this method with no preprocessing as the baseline, and with preprocessing the network with SPADE. We then showed one of the class feature visualizations, the full image, and the image patches corresponding to the two classes to the evaluators, along with options to either select which of the two regions activates the neuron, or to indicate that the visualization did not enable them to do so. Crucially, we did not disclose the class associated with the neuron. In total, there were 400 possible human tasks: 100 samples, for which one of two class neurons was interpreted, with the neuron visualization created with or without preprocessing with SPADE. The tasks were chosen randomly from this pool; in total, 24 volunteer evaluators performed 746 rating tasks. We describe the human evaluation process in more detail, and provide screenshots of sample tasks, in Appendix H. **Results.** The results of the human evaluation are presented in Figure 4 (left). When the network was preprocessed via SPADE, the users were over 10% more likely to choose to make a decision on which of the patches were responsible for the class prediction (87.4% when SPADE was used, versus 77.1% when it was not). In cases in which the human raters did make a decision, the accuracy was 5.3% higher when SPADE was used (79.9% vs. 73.6%), leading to a major 13.1% increase in net correct attributions. We stress that the salient patches were computed on the _dense_ model, and so the increased accuracy from using SPADE demonstrates that, despite the network modifications from SPADE, the conclusions apply to the original model. Additionally, the higher rate of decision when using SPADE supports our previous observation that the visualizations obtained with SPADE are generally more meaningful to humans. ## 5 Conclusions and future work We presented a pruning-inspired method, SPADE, which can be used as a network pre-processing step in a human interpretability pipeline to create interpretability tools that are tailored to the input being studied. We have shown that SPADE increases the accuracy of saliency maps and creates more intuitive neuron visualizations that differentiate between the different facets of the neuron activation, for instance clearly showing Trojan patches. 
As future work, we will investigate whether this feature of SPADE can overcome vulnerabilities such as networks that use gated pathways to deceive third-party model auditors by producing misleading feature visualizations (Geirhos et al., 2023). Additionally, we believe that the approach of SPADE may be helpful in understanding the model at a larger granularity; for instance, combining SPADE with a clustering mechanism may help produce neuron visualizations that highlight larger trends in the data.

Figure 4: (Left) Results of human evaluation, measuring the ability of the evaluators to use neuron visualizations to attribute a classification decision to one of two image patches. (Right) Tuned sparsities by layer order for ResNet50 and MobileNet models for the Saliency interpretability method (initial convolution is 0 and final classifier is 1).

## Acknowledgments The authors would like to thank Stephen Casper and Tony Wang for their feedback on this work, and Eldar Kurtic and Elias Frantar for their advice on aspects of the project. This research was supported by the Scientific Service Units (SSU) of IST Austria through resources provided by Scientific Computing (SciComp). EI was supported in part by the FWF DK VGSCO, grant agreement number W1260-N35.
2310.05607
Neural network variational Monte Carlo for positronic chemistry
Quantum chemical calculations of the ground-state properties of positron-molecule complexes are challenging. The main difficulty lies in employing an appropriate basis set for representing the coalescence between electrons and a positron. Here, we tackle this problem with the recently developed Fermionic neural network (FermiNet) wavefunction, which does not depend on a basis set. We find that FermiNet produces highly accurate, in some cases state-of-the-art, ground-state energies across a range of atoms and small molecules with a wide variety of qualitatively distinct positron binding characteristics. We calculate the binding energy of the challenging non-polar benzene molecule, finding good agreement with the experimental value, and obtain annihilation rates which compare favourably with those obtained with explicitly correlated Gaussian wavefunctions. Our results demonstrate a generic advantage of neural network wavefunction-based methods and broaden their applicability to systems beyond the standard molecular Hamiltonian.
G. Cassella, W. M. C. Foulkes, D. Pfau, J. S. Spencer
2023-10-09T10:48:31Z
http://arxiv.org/abs/2310.05607v3
# Neural network variational Monte Carlo for positronic chemistry ###### Abstract Gamma rays produced by positron annihilation are used as a sensitive probe of matter at atomic length scales. Technologies for manipulating positrons and studying their interactions with ordinary matter are rapidly progressing. This motivates the development of accurate _ab initio_ methods for modelling positronic interactions with molecular matter. Here, we apply the recently developed Fermionic neural network (FermiNet) wavefunction ansatz to the problem of finding the ground-state properties of mixed positron-electron systems. We find that FermiNet produces highly accurate, in some cases state-of-the-art, ground-state energies across a range of atoms and small molecules with a wide range of qualitatively distinct positron binding characteristics. We highlight the capabilities of our method by calculating the positron binding energy of the challenging non-polar benzene molecule. Since the existence of the positron - the positively charged anti-particle of the electron - was first postulated by Dirac [1], many predictions have been made concerning the formation of bound states between positrons and ordinary matter. The formation of bound electron-positron states results in greatly enhanced annihilation rates. This process has recently been successfully exploited in positron annihilation spectroscopy experiments [2; 3; 4; 5; 6; 7; 8; 9], where the energy-dependent annihilation rate enables sensitive measurements of the positron binding energy [10]. Enhanced positron annihilation _in materio_ has valuable applications in medical physics [11], materials science [12; 13], and astrophysics [14]. Experimental apparatus for trapping large numbers of positrons continues to grow in sophistication, offering a glimpse into a world of exotic antimatter chemistry [15]. These advances motivate the development of a strong _ab initio_ description of positronic bound states to accelerate the continued development of new antimatter-based technology. This problem has been addressed using many standard computational chemistry tools. Positron binding to atoms and molecules has been studied using wavefunction expansions in explicitly correlated Gaussians [16; 17; 18; 19] (ECG), configuration interaction (CI) methods [20; 21; 22; 23], and quantum Monte Carlo (QMC) methods [24; 25; 26; 27; 28; 29; 30; 31]. The annihilation rate and lifetime of positrons in solids, particularly in the presence of defects, have been studied using density functional theory [32; 33] and quantum Monte Carlo methods [34; 35]. Despite the intense theoretical interest, describing the positronic wavefunction remains challenging for several reasons. Due to the repulsive potential between nuclei and positrons, expanding the positronic wavefunction in a basis often requires including diffuse basis functions and basis functions with large angular momenta. In many cases, it is not appropriate to consider the positronic wavefunction to be centred on the nuclei of a molecule, introducing additional difficulties in choosing appropriately centred basis functions. These difficulties have historically resulted in very slow convergence of CI calculations and limited accuracy for QMC methods which utilise wavefunction ansatze based on Hartree-Fock orbitals. 
The most successful description of positron binding to molecules comes from a recent work that develops a many-body theory of positron binding to molecules and produces positron binding energies in close agreement with experimental measurements [36]. In their work, Hofierka et al. highlight the shortcomings of QMC methods applied to positron binding - particularly, the lack of calculations for large non-polar molecules, which constitute the majority of experimentally studied systems. Here, we address this shortcoming. We propose a new approach to calculating the ground-state properties of molecular positronic bound states, based on recently developed neural network wavefunction ansatze for QMC [37]. The Fermionic neural network (FermiNet) models the many-body wavefunction without referencing a set of basis functions. This conveniently sidesteps a number of the aforementioned difficulties in describing positronic wavefunctions. We extend FermiNet to represent the positronic component of the wavefunction on an even footing with the electronic component. With a minimal alteration to the neural network architecture, we obtain a flexible and accurate ansatz for mixed electron-positron wavefunctions. We present results for a range of systems with qualitatively distinct mechanisms for positron binding and obtain state-of-the-art accuracy for the ground-state energy in these systems. Finally, we show that our method obtains the positron binding energy for benzene in close agreement with the experimental value and the many-body theory of Hofierka et al. [36]. ## I Results We benchmark the accuracy of FermiNet for a series of well-studied positronic systems. These are presented here in (approximate) order of increasing complexity. Unless otherwise specified, all results presented herein were obtained using the network architecture and training protocol detailed by Spencer [38]. We pre-train the electron orbitals to the Hartree-Fock solution of the bare molecule and do not pre-train the positron orbitals. Errors in energy expectation values are evaluated using a reblocking approach [39] to account for sequential correlations in the Metropolis-Hastings sampling. _Positronium hydride_ - The positronium (Ps) atom (consisting of a bound electron and positron) and hydrogen form a stable molecule. Here we work within the Born-Oppenheimer approximation, neglecting the proton's motion. Near-exact ECG calculations are available for this system [16], yielding a ground-state energy of \(E_{0}=-0.7891794\) Hartrees. We obtain a ground-state energy of \(E_{0}=-0.789144(3)\) Hartrees. _Sodium and magnesium atoms_ - The first ionisation energy of sodium, 0.1886 Hartrees, is smaller than the binding energy of the Ps atom, 0.25 Hartrees. The positronic sodium atom is then more accurately described as a bound complex of a positronium atom and a sodium cation. The binding energy of this complex is calculated as \(\epsilon=E\left([\mathrm{Na}^{+},\mathrm{Ps}]\right)-E\left(\mathrm{Na}^{+}\right)-0.25\). We fail to predict binding without variance matching, obtaining \(\epsilon=-0.37\) milliHartrees. Utilizing a variance matching procedure, we obtain \(\epsilon=0.32(15)\) milliHartrees, predicting binding in agreement with previous FCSVM (\(\epsilon_{\mathrm{FCSVM}}=0.4\) milliHartrees) and CI (\(\epsilon_{\mathrm{CI}}=0.21\) milliHartrees) calculations. We obtain a positron affinity of 0.01618(9) Hartrees for the magnesium atom. This agrees with previous FCSVM (0.015612 Hartrees) and CI (0.01615 Hartrees) results. 
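To make the bookkeeping explicit, the following minimal snippet mirrors the expression above with simple quadrature error propagation; the total energies used here are hypothetical placeholders for illustration, not our computed values.

```python
import math

E_PS = 0.25  # magnitude of the positronium ground-state energy, in Hartrees

def ps_binding_energy(e_complex, e_cation, err_complex=0.0, err_cation=0.0):
    """epsilon = E([A+, Ps]) - E(A+) - 0.25; statistical errors added in quadrature."""
    eps = e_complex - e_cation - E_PS
    err = math.sqrt(err_complex**2 + err_cation**2)
    return eps, err

# Hypothetical placeholder energies (Hartrees), chosen only to illustrate the arithmetic:
eps, err = ps_binding_energy(e_complex=-161.4497, e_cation=-161.7000,
                             err_complex=1.0e-4, err_cation=1.1e-4)
print(f"epsilon = {eps * 1e3:.2f} +/- {err * 1e3:.2f} milliHartrees")
```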
_Lithium hydride and beryllium oxide_ - We have calculated the ground-state energy of LiH and its positronic complex for a range of interatomic separations (see Data Tables in Supplementary Material). Fitting these potential energy surfaces using Nesterov's algorithm, implemented in the MOLCAS package, we obtain equilibrium bond distances of 3.0196 Bohr, with a ground-state energy of -8.07050(1) Hartrees for LiH, and 3.371 Bohr with a ground-state energy of -8.10774(1) Hartrees for [LiH, e\({}^{+}\)]. These deviate very slightly from the widely accepted literature values of 3.015 Bohr (LiH) and 3.348 Bohr ([LiH, e\({}^{+}\)]). We have calculated ground-state energies at the canonical separations for comparison with previous results, shown in Table 1. Our calculations are not only in excellent agreement with previous work but are seen to yield the most accurate variational result for [LiH, e\({}^{+}\)] reported to date. We have calculated the ground-state wavefunction of BeO over a range of interatomic separations - from below the equilibrium separation to dissociation. We fit the potential energy surfaces using MOLCAS and obtain an equilibrium bond distance of 2.515 Bohr with a ground-state energy of -89.90572(4) Hartrees for BeO, and 2.530 Bohr with a ground-state energy of -89.93082(3) Hartrees for [BeO, e\({}^{+}\)]. We compare our results against previous calculations in Table 1 and find that FermiNet yields the most accurate variational result for [BeO, e\({}^{+}\)]. The electronic ground state of BeO transitions from the spin-singlet configuration at the equilibrium interatomic separation to the spin-triplet configuration at the dissociative limit. We enforce the appropriate ground-state spin configuration by choosing \(S_{z}=1\) (as FermiNet is a spin-assigned wavefunction) at wide interatomic separations. The resulting potential energy surfaces are plotted in Fig. 1. At \(\sim 4\) Bohr, the electronic ground state transitions between the spin-singlet and spin-triplet, causing a sharp change in the dipole moment of the molecular ground state and in the resulting positron binding energy, which vanishes almost completely. At \(\sim 6\) Bohr, we see a smooth transition between two qualitatively distinct binding modes between the molecule and the positron - binding to the molecular dipole field below this separation and binding exclusively to the 'lone' beryllium atom beyond this separation. This is readily seen by visualizing the ground-state positron density on either side of the maximum, as shown in Fig. 1 for the triplet state, and corroborated by the dipole moment falling below the critical threshold for binding (\(\sim 1.625\) Debye) at the transition. _Dilithium_ - For the dilithium molecule, we obtain a vertical positron affinity of 66.75(2) milliHartrees at an equilibrium bond length of 5.015 Bohr. The molecular binding energy of Li\({}_{2}\) from our calculations is \(\sim 38\) milliHartrees, and the positron affinity of a lone lithium atom is known from the literature to be \(\sim 2\) milliHartrees, meaning this system is very stable against the \([\mathrm{Li}_{2},\mathrm{e}^{+}]\rightarrow[\mathrm{Li},\mathrm{e}^{+}]+\mathrm{Li}\) dissociation channel. The one-particle density of the positronic ground state of dilithium is shown in Fig. 2(a). _Benzene_ - The experimental positron binding energy of benzene is 5.51 milliHartrees [10]. 
Utilizing our variance-matching procedure (described in detail for benzene in the Supplementary Material), we obtain a finite positron binding energy of 4.1(3) milliHartrees. This falls in very close agreement with the binding energy obtained by Hofierka et al. of \(\sim\)4.26 milliHartrees [36]. The one-particle density of the positronic ground state of benzene is shown in Fig. 2(b). ## Discussion The selection of systems presented here spans a broad range of positron binding phenomena: positronium formation, binding with an induced atomic dipole moment, binding with a static molecular dipole moment, and binding due to correlations with covalent bonding electrons in molecules. In all cases where benchmarks are available, FermiNet VMC produces excellent and, in some cases, state-of-the-art results for the positron affinity. From the density plots presented, it is clear that this performance is consistent between wavefunctions with very different qualitative characteristics. An identical ansatz is used for every system studied - we have not employed any fine-tuning or system-specific treatments. An important aspect of our approach is that, due to being a basis-set-free method, we do not provide any information about the nature of the positron binding in the input to the calculation (e.g. via the placement of basis set functions). Rather, the location and nature of the positron binding emerges naturally during optimization of the wavefunction. The high level of accuracy achieved across various systems shows that FermiNet offers a flexible and accurate ansatz for mixed electron-positron wavefunctions. To date, quantum Monte Carlo calculations of positron binding have focused on small, polar molecules. As pointed out by Hofierka et al. [36], accurate quantum Monte Carlo results for large non-polar organics, which comprise the majority of experimentally relevant systems, are lacking. We believe that the present work addresses this gap.

Figure 1: **(a)** Ground-state positronic density, projected into the molecular plane, of a positron attached to a beryllium oxide molecule over a range of interatomic distances, accumulated via MCMC sampling. Image scale has been normalized by bond length. The scale bar in the bottom left of each panel indicates the relative size of one Bohr radius, \(a_{0}\). **(b)** Energy (of the bare molecule and positronic complex), dipole moment (of the bare molecule), and positron affinity of beryllium oxide over a range of interatomic distances. Green (black) markers in the dipole moment and positron affinity plots indicate values accumulated for the electronic singlet (triplet) projected wavefunction. Solid lines and dots in the dipole moment plot indicate the corresponding values obtained by Buenker et al. [23]. Horizontal red lines in the dipole moment and positron affinity plots indicate the mean-field critical value for positron binding to the molecular dipole field and the positron affinity of a lone beryllium atom, respectively.

Our results for positronium hydride, sodium and magnesium atoms, and small diatomic molecules demonstrate that our approach can achieve state-of-the-art accuracy compared with previous work. Further, our results for the non-polar dilithium and benzene molecules demonstrate that this accuracy is retained when describing modes of positron binding governed entirely by strong electron-positron correlation effects.
The results in Fig. 2 offer an intuitive understanding of the binding mechanism between non-polar molecules and positrons: correlation-dominated binding is facilitated by the presence of a centre of increased electronic density away from the atomic nuclei of a molecule. In dilithium, this is the covalent bond, and in benzene, this is the increased electronic density in the centre of the molecule from the delocalisation of the \(\pi\)-bonds in the ring. Our result for the positron binding energy of benzene falls within 'chemical accuracy' (\(\sim\) 1.6 milliHartrees) of the experimental value, but this level of accuracy is insufficient for other species of experimental interest. Many chemical species possess positron binding energies below chemical accuracy, and obtaining this level of accuracy represents a significant challenge to computational chemistry methods. We believe that the necessary improvement in accuracy, and obviation of our need to utilise variance matching, can be achieved by adopting the recently introduced PsiFormer architecture [40], an improvement upon the FermiNet architecture utilising Transformer networks [41]. The PsiFormer obtains more accurate total energies for the ground state of the bare benzene molecule than FermiNet. Investigating the accuracy of the PsiFormer applied to positron binding energy calculations is a promising avenue for future work. As with previous work utilizing FermiNet, computational scaling remains an issue. Calculations involving \(\gtrsim 50\) particles are too expensive for presently available computational resources, prohibiting the application of the method to large molecules.

\begin{table}
\begin{tabular}{l c c c c c c}
 & \multicolumn{3}{c}{LiH} & \multicolumn{3}{c}{BeO} \\
\cline{2-7}
Method & Bare & Positronic & Binding energy & Bare & Positronic & Binding energy \\
\hline
FermiNet-VMC & -8.07051(1) & **-8.10775(1)** & 0.03723(2) & **-89.90572(4)** & **-89.93802(3)** & 0.02510(7) \\
SJ-VMC & -8.06307(3) [25] & -8.08034(4) [25] & 0.01727(7) & -89.3173(25) [30] & -89.3365(13) [30] & 0.0192(38) \\
SJ-FN-DMC & -8.0704(1) [25] & -8.10718(11) [25] & 0.03678(2) & -89.7854(13) [30] & -89.8134(12) [30] & 0.02800(25) \\
CISD & -8.03830 [20] & -8.05530 [20] & 0.017 & - & - & - \\
MRD-CI & -8.06827 [21] & -8.09764 [21] & 0.02937 & -89.759352 [23] & -89.773133 [23] & 0.013781 \\
ECG & **-8.07054** [18] & -8.10747 [18] & 0.03693 & - & - & - \\
GW & - & - & 0.039(1) [36] & - & - & - \\
\end{tabular}
\end{table}
Table 1: Ground-state energies of LiH and BeO, and their positronic complexes, obtained via various computational methods at the equilibrium bond length. Statistical errors are omitted where they are smaller than the reported precision or otherwise omitted in the referenced source. The lowest variational energy in each column is bolded.

Figure 2: Orthographic projections of the ground-state one-particle density for positronic **(a)** dilithium and **(b)** benzene molecules. Left/right columns show the electronic/positronic density, \(\rho_{e^{-}}/\rho_{e^{+}}\). In the dilithium molecule, the positron is seen to be strongly localized to a torus wrapping the covalent bond. In the benzene molecule, the positronic orbital is extremely diffuse, resulting in a much noisier Monte Carlo estimate of the one-particle density. The positronic orbital resembles a \(p\)-orbital 'sandwiching' the aromatic ring. The scale bar in the bottom left of each subfigure indicates the relative size of one Bohr radius, \(a_{0}\). 
One approach to calculating the ground-state wavefunction of large molecules is utilizing pseudopotentials to remove core electrons from the calculations. The positronic component of the wavefunction is often small near the atomic nuclei, so we would expect the correlation effect between the core electrons and the positron to be small. Pseudopotentials have been successfully employed in previous QMC calculations of positron lifetimes in solids [35]. As the architecture employed herein only extends the original FermiNet architecture to treat different particle types separately, we expect that future advances in the computational efficiency of FermiNet and related neural network wavefunctions can be readily combined with our approach to model positron binding. ## Conclusion We have shown that the FermiNet ansatz for VMC calculations can be extended to include positrons naturally, treating positrons on an equal footing with electrons. Because this ansatz does not depend on a basis set, our treatment sidesteps traditional methods' issues in selecting and converging an appropriate basis set for describing positronic wavefunctions. Our method produces highly accurate results for several molecules with various binding mechanisms for positrons without any system-specific tuning. We expect that the simplicity of this method will lend itself to many challenging applications beyond those presented here, e.g. calculations involving multiple positrons. With additional computational effort, this method can provide accurate predictions for positron annihilation experiments. ## Methods We find the ground-state wavefunction and corresponding ground-state energy of the many-body Coulomb Hamiltonian in the Born-Oppenheimer clamped-nuclei approximation, \[\mathcal{H}=-\frac{1}{2}\sum_{i}\nabla_{i}^{2}+\sum_{i,j}\frac{q_{i}Z_{j}}{|\mathbf{r}_{i}-\mathbf{R}_{j}|}+\sum_{i>j}\frac{q_{i}q_{j}}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}+\sum_{i>j}\frac{Z_{i}Z_{j}}{|\mathbf{R}_{i}-\mathbf{R}_{j}|}, \tag{1}\] where \(q_{i},\mathbf{r}_{i}\) are the particle charges and positions, and \(Z_{i},\mathbf{R}_{i}\) are the charges and positions of fixed nuclei. Here, and throughout, we utilise Hartree atomic units: \(\hbar=e=m_{e}=1\). For the mixed electron-positron systems considered here, \(q_{i}=\pm 1\). We solve for the many-body ground-state wavefunction using the variational Monte Carlo (VMC) method [42]: a many-body wavefunction \(\Psi_{\theta}\), parameterized by \(\theta\), is continuously updated via a gradient descent procedure to minimize the energy expectation value, \[\left\langle E\right\rangle_{\theta}=\frac{\int\Psi_{\theta}^{*}(\mathbf{r})\mathcal{H}\Psi_{\theta}(\mathbf{r})d\mathbf{r}}{\int\Psi_{\theta}^{*}(\mathbf{r})\Psi_{\theta}(\mathbf{r})d\mathbf{r}}, \tag{2}\] where \(\mathbf{r}=(\mathbf{r}_{1},\dots,\mathbf{r}_{N})\). This integral, and its gradient with respect to \(\theta\), are evaluated via Monte Carlo integration. The FermiNet represents the many-body wavefunction as a sum of block-diagonal determinants containing many-particle orbitals which depend upon the coordinates of all particles in a permutation-equivariant manner. 
This is written \[\Psi(\mathbf{r})=\sum_{k}^{n_{\text{det}}}\prod_{\chi}\det\left[\psi_{i}^{k\chi}(\mathbf{r}_{j}^{\chi};\{\mathbf{r}_{/j}^{\chi}\};\{\mathbf{r}^{/\chi}\})\right], \tag{3}\] where the set \(\{\mathbf{r}_{/j}\}\) includes all particle coordinates except \(\mathbf{r}_{j}\), and \(\chi=(\sigma,q)\) labels species of particles which are distinguished by their spin \(\sigma\in(\uparrow,\downarrow)\) and charge \(q\in(+,-)\). Here we have made a slight abuse of notation for the sake of brevity: permutation invariance for the set \(\{\mathbf{r}^{/\chi}\}\) is only maintained between particles of the same species. We emphasise that these are not the dense determinants discussed in recent works extending FermiNet [43], with the exception of the benzene calculation, for which dense determinants were used for the electronic component of the wavefunction. The many-particle orbitals \(\psi_{i}\) are represented by a deep neural network [37] (architecture described in the Supplementary Material). Multiplicative coefficients are omitted from the sum as they are trivially absorbed into the orbitals. Gradient descent is performed via the Kronecker-factored approximate curvature (KFAC) algorithm, an approximation of natural gradient descent [44] which scales well to large neural networks [45]. Natural gradient descent is closely related to the stochastic reconfiguration method, well-known in the quantum Monte Carlo literature [46]. The present work introduces two alterations to the original FermiNet architecture. Firstly, we have included positronic orbitals as additional species, i.e. additional blocks in the determinant. Secondly, we utilise distinct weights in the neural network layers for every species (unlike the original FermiNet, where spin-up and spin-down electronic orbitals shared weights). A single determinant of the form in Eq. (3) can represent any fermionic many-body wavefunction [47]. In practice, the argument for the universality of FermiNet determinants depends upon the representation of discontinuous functions which cannot be constructed using realistic neural networks. Despite this, FermiNet-VMC calculations obtain state-of-the-art accuracy in ground-state energy calculations for a range of molecules and solids [48; 49; 37; 43; 50; 38]. We only consider calculations involving a single positron in the present work. We have discussed the treatment of the positronic spin coordinate only to demonstrate how our technique may be extended to calculations involving many positrons, as such systems have recently attracted theoretical interest. Ground-state wavefunctions for bare and positronic molecules are not guaranteed to be similarly converged after an equal number of gradient descent steps. This introduces uncontrolled error in calculating positron binding energies via VMC. Previous work has shown that FermiNet-VMC calculations yield ground-state energies within chemical accuracy (\(\sim 1.5\) milliHartrees) of exact results for many small molecules [37; 38]. With this level of accuracy, the uncontrolled error will be negligible for molecules with a large positron binding energy. However, for molecules with very small binding energies, or large molecules for which the uncontrolled error may become large compared to the positron binding energy, there is no guarantee that an accurate estimate of the positron binding energy will be obtained by comparing ground-state calculations of different quality. 
In these cases, we employ the variance matching technique described by Entwistle _et al._ [51], addressed in the Supplementary Material. ###### Acknowledgements. This work was undertaken with funding from the UK Engineering and Physical Sciences Research Council (EP/T51780X/1) (GC). Calculations were carried out with resources provided by the Baskerville Accelerated Compute Facility through a UK Research and Innovation Access to HPC grant, and we acknowledge PRACE for awarding us access to JUWELS at GCS@FZJ, Germany. Via his membership of the UK's HEC Materials Chemistry Consortium, which is funded by EPSRC (EP/R029431), Foulkes used the UK Materials and Molecular Modelling Hub (MMM Hub) for computational resources, which is partially funded by EPSRC (EP/T022213).
2308.04753
SAfER: Layer-Level Sensitivity Assessment for Efficient and Robust Neural Network Inference
Deep neural networks (DNNs) demonstrate outstanding performance across most computer vision tasks. Some critical applications, such as autonomous driving or medical imaging, also require investigation into their behavior and the reasons behind the decisions they make. In this vein, DNN attribution consists in studying the relationship between the predictions of a DNN and its inputs. Attribution methods have been adapted to highlight the most relevant weights or neurons in a DNN, allowing to more efficiently select which weights or neurons can be pruned. However, a limitation of these approaches is that weights are typically compared within each layer separately, while some layers might appear as more critical than others. In this work, we propose to investigate DNN layer importance, i.e. to estimate the sensitivity of the accuracy w.r.t. perturbations applied at the layer level. To do so, we propose a novel dataset to evaluate our method as well as future works. We benchmark a number of criteria and draw conclusions regarding how to assess DNN layer importance and, consequently, how to budgetize layers for increased DNN efficiency (with applications for DNN pruning and quantization), as well as robustness to hardware failure (e.g. bit swaps).
Edouard Yvinec, Arnaud Dapogny, Kevin Bailly, Xavier Fischer
2023-08-09T07:45:51Z
http://arxiv.org/abs/2308.04753v2
# SAfER: Layer-Level Sensitivity Assessment for Efficient and Robust Neural Network Inference ###### Abstract Deep neural networks (DNNs) demonstrate outstanding performance across most computer vision tasks. Some critical applications, such as autonomous driving or medical imaging, also require investigation into their behavior and the reasons behind the decisions they make. In this vein, DNN attribution consists in studying the relationship between the predictions of a DNN and its inputs. Attribution methods have been adapted to highlight the most relevant weights or neurons in a DNN, allowing to more efficiently select which weights or neurons can be pruned. However, a limitation of these approaches is that weights are typically compared within each layer separately, while some layers might appear as more critical than others. In this work, we propose to investigate DNN layer importance, _i.e._ to estimate the sensitivity of the accuracy w.r.t. perturbations applied at the layer level. To do so, we propose a novel dataset1 to evaluate our method as well as future works. We benchmark a number of criteria and draw conclusions regarding how to assess DNN layer importance and, consequently, how to budgetize layers for increased DNN efficiency (with applications for DNN pruning and quantization), as well as robustness to hardware failure (e.g. bit swaps). Footnote 1: The database and loading scripts are publicly available on GitHub. ## 1 Introduction Empirical evidence shows the remarkable predictive capabilities of deep neural networks (DNNs). For instance, in computer vision, from image classification [14] to object detection [15] through semantic segmentation [16], deep neural networks achieve state-of-the-art performance. For a number of applications such as medical imaging [20] or autonomous driving [17], however, being able to closely understand and monitor the internal behavior of DNNs is of paramount importance. Broadly speaking, this has been referred to in the literature as DNN explainability [1]. First, at train time, explainability encompasses the theoretical study of the learning dynamics and the generalization capacities of DNNs [18, 19, 16]. Second, during deployment, explainability also implies diagnosing _why_ a DNN took a particular decision, predicted one class rather than another, or generated one particular sequence of words, conditioned on its inputs. The study of this input-output relationship is often called visual explanation in the context of computer vision, or, more generally, attribution [20, 21, 22, 23, 24, 25, 26, 27]. Perhaps the most straightforward way to compute and understand attribution for one pixel of an image [1] is to set its value to \(0\) and measure the accuracy loss induced by this change. However, beyond highlighting the most relevant _pixels_ or dimensions of an input for prediction, attribution methods can be used to diagnose which _weights_ can, or cannot, be removed in a DNN for pruning [20]. One limitation of such an approach, however, lies in the fact that weights or neurons can be compared within the same layer only. In the continuity of this work, we propose to study the cross-layer relative importance of layers with respect to the final accuracy of the model. Having effective methods for layer importance ranking in deep neural networks should open new perspectives on neural networks' predictive abilities. 
Stemming from attribution techniques for understanding input sensitivity, and from pruning for neuron-level sensitivity measurement, we study the effectiveness of these techniques in tackling the challenge of cross-layer sensitivity assessment. This study paves the way for exciting future perspectives for DNN optimization including, but not limited to:

* **DNN compression (quantization and pruning):** On the one hand, prior work on neural network compression highlighted the necessity to adapt the compression rates to specific layers. For instance, in quantization, where floating point operations are converted into low-bit fixed point operations, prior works [12, 13, 14] have shown that quantizing the first and last layers to larger bit-widths leads to a low latency cost while bearing significant benefits in terms of accuracy preservation. Generally speaking, the search for a well-suited bit-width assignment per neuron is called mixed-precision [23, 14, 15, 24]. However, in practice, it is often a very costly process based upon either a set of simple heuristics [10, 20] or reinforcement learning [22, 23]. Similarly, in the context of DNN pruning, where one seeks to remove blocks of computations in order to reduce both the inference runtime and the memory footprint of the model, previous works have employed simple strategies to assign a pruning rate per layer [19, 18, 20]. These empirical results suggest that the correct assignment of the compression rates is an important, unsolved aspect of neural network inference acceleration.
* **Robust Inference:** On the other hand, multiple critical applications [20, 21] of deep neural networks require strong guarantees on the robustness of the predictive function w.r.t. random bit-flips caused by hardware failures. These attacks can give rise to incorrect predictions and lead to system failures that can be catastrophic in the case of critical systems. For example, as pinpointed in [10], it is possible to attack hardware and induce one bit swap every 350ms on a stream of 500Mbit/s during data transfers in memory. Similarly, on DDR2 memory, hardware manufacturers have measured that on average 22696 errors occur every year. The baseline strategy [17] to detect these failures and discard the corresponding computations consists in performing said computations twice and ensuring that the results are identical. However, with the growing size of neural networks [20], such a solution could lead to unsustainable energy consumption. Therefore, from a robustness standpoint, there is a growing need to not only compare DNN neurons or weights _inside_ a specific layer, but also to compare the layers _themselves_, so as to restrict redundant calculations to the most important parts of the network.

In what follows, we investigate the challenge of layer-wise sensitivity assessment through the lens of attribution methods. Specifically, we adapt and benchmark a number of attribution methods for fine-grained weight relevance estimation, as well as a number of reduction methods to derive layer-wise importance criteria. In order to evaluate these criteria, we then implement two tests. First, we construct a dataset of DNN models and their corresponding layer rankings obtained _via_ an exhaustive search. Second, we apply these methods in a straightforward manner to DNN compression and robustness to hardware failures in realistic scenarios. Our empirical results enable us to draw conclusions on best practices regarding cross-layer sensitivity measurement.
## 2 Layer-wise sensitivity assessment

Let us consider a trained neural network \(f\) with layers \(f_{l}\) and weights \(w_{l}\). In this study, we seek to rank the importance of the layers \((f_{1},...,f_{L})\) of \(f\) using only computations of \(f\) on a small, unlabelled calibration set \(\mathcal{X}=(x_{1},...,x_{n})\) (with \(n=256\) in practice), akin to [20]. Let \(\epsilon_{l}\) denote a perturbation applied to each weight (or alternatively, to its activations) of layer \(f_{l}\), such that \(f^{\epsilon_{l}}\) is the function disturbed at layer \(l\) and \(f^{\epsilon_{l}}_{l}\) its \(l\)-th layer. With this convention, we define the importance of layer \(l\) as: \[\mathcal{I}(f,\epsilon_{l})\triangleq-\mathbb{E}_{X_{\text{test}}}\left[||f^{\epsilon_{l}}-f||\right] \tag{1}\] where \(X_{\text{test}}\) is a test set. Intuitively, if a specific perturbation \(\epsilon_{l}\) applied to layer \(l\) causes large changes in the predictive function (as compared to applying the same kind of perturbation to other layers), then this layer is likely to be particularly important. Consequently, our goal is to find a criterion \(C\) which predicts the importance ranking of any layer \(l\) with respect to a perturbation \(\epsilon_{l}\) of said layer, _i.e._: \[\forall i,j\quad C(f,\mathcal{X})_{i}\leq C(f,\mathcal{X})_{j}\Leftrightarrow\mathcal{I}(f,\epsilon_{i})\leq\mathcal{I}(f,\epsilon_{j}) \tag{2}\] Simply put, computing \(C\) upon \(\mathcal{X}\) for each layer of \(f\) is sufficient to assess the importance of the layers w.r.t. a considered perturbation. Moreover, we are particularly interested in finding importance criteria \(C\) that assess layer-wise sensitivity in a general sense, and hence that do not depend on the nature of the perturbations \(\epsilon_{l}\). Such criteria can thus be calculated solely by evaluating statistics of \(f\) on the calibration set \(\mathcal{X}\). Furthermore, we propose to search for criteria that can be written as: \[C(f,\mathcal{X})_{l}=\psi\circ(\phi(f,\mathcal{X})_{l}) \tag{3}\] where \(\phi\) denotes a function that extracts fine-grained sensitivity information (_i.e._, at the level of a layer's weights), and \(\psi\) reduces this information to an ordered set such as \(\mathbb{R}\) to derive a ranking for the layers. Below we first describe existing candidates for function \(\phi\).

### Fine-grained weight relevance estimation

Authors in [20] showed that assessing the relevance of a predictive function w.r.t. its weights is a problem that bears similarity to attribution techniques. Inspired by this, we adapt and benchmark several such techniques to design candidates for the weight relevance estimation function \(\phi\). Existing candidates include zeroth- and first-order criteria that use the weight and gradient values, higher-order methods, methods derived from integrated gradients [20], and, lastly, recent black-box techniques.

#### Zeroth and First Order Criteria

Weights: measuring the weights of a neural network in order to estimate their contribution to the predictive function has been widely studied in pruning [19]. The resulting function, denoted \(W\), offers the advantage of being fairly simple and can be computed without data. However, it does not account for inter-layer relationships. \[W:(f,\mathcal{X})\rightarrow(w_{1},...,w_{L}) \tag{4}\] Gradients: in GradCam [20], attribution is computed as the gradients of function \(f\) w.r.t. each pixel of the image or feature map.
This can be adapted by computing the gradients of \(f\) w.r.t. each _weight_ instead: \[\nabla:(f,\mathcal{X})\rightarrow\left(\mathbb{E}_{\mathcal{X}}\left[\frac{\partial f}{\partial w_{1}}\right],...,\mathbb{E}_{\mathcal{X}}\left[\frac{\partial f}{\partial w_{L}}\right]\right) \tag{5}\] Weight \(\times\) gradients: combining these two approaches [20, 21] usually leads to a slight improvement in practice. \[\text{W}\times\nabla:(f,\mathcal{X})\rightarrow\left(\mathbb{E}_{\mathcal{X}}\left[w_{l}\times\frac{\partial f}{\partial w_{l}}\right]\right)_{l\in\{1,...,L\}} \tag{6}\]

#### Higher Order Criterion

GradCam++: we also consider the most widely used higher-order attribution technique (Chattopadhay et al., 2018) and adapt it to weight values as follows: \[\text{GCam++}:(f,\mathcal{X})\rightarrow\left(\mathbb{E}_{\mathcal{X}}\left[\frac{\left(\frac{\partial f}{\partial w_{l}}\right)^{2}}{2\left(\frac{\partial f}{\partial w_{l}}\right)^{2}+w_{l}\left(\frac{\partial f}{\partial w_{l}}\right)^{3}}\right]\right)_{l\in\{1,\dots,L\}} \tag{7}\]

#### Integrated Gradients Criteria

A known pitfall (Yvinec et al., 2022) of the previously mentioned gradient methods is that the measured importance is by definition local and does not hold under large modifications of the weight values (such as, for instance, bringing a weight to \(0\)).

Integrated gradients (IG): to address this, authors in (Sundararajan, Taly, and Yan, 2017; Yvinec et al., 2022) propose to measure the gradients for several values of a considered input (or weight, in our case) on a path towards 0 (_i.e._ for a weight matrix \(w_{l}\) at layer \(l\), we consider \(\lambda w_{l}\) with \(\lambda\in[0,1]\)). This so-called integrated gradients criterion can be written as: \[\text{IG}:(f,\mathcal{X})\rightarrow\left(\mathbb{E}_{\mathcal{X}}\left[\sum_{\lambda\in[0;1]}\frac{\partial f}{\partial\lambda w_{l}}\right]\right)_{l\in\{1,\dots,L\}} \tag{8}\]

Guided integrated gradients (GIG) (Kapishnikov et al., 2021) is a refinement of the IG criterion that consists in shrinking, at each integrated gradients iteration, only the least important values, as defined by their gradient magnitudes \(||\frac{\partial f}{\partial\lambda w_{l}}||\).

Important direction guided integrated gradients (IDGI) (Yang, Wang, and Bilgic, 2023) is another recent improvement over the IG method, which consists in using the direction of the gradients, weighted by the difference between the outputs at each integrated gradients iteration.

#### Statistical Criteria

Statistical approaches improve first-order criteria by estimating the sensitivity of \(f\) within the neighborhood of the weights, as defined by a small additive random noise \(\mathcal{N}\).
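Before turning to these statistical refinements, here is a minimal PyTorch sketch of the zeroth- and first-order criteria above (Eqs. 4-6). All names are illustrative, and the norm of the output is used as a scalar surrogate for \(f\) so that a single backward pass yields \(\partial f/\partial w\):

```python
import torch

def first_order_criteria(model, loader, n_batches=8):
    """Estimate E_X[df/dw] per parameter tensor on a small calibration
    loader, then derive the W, gradient and weight-x-gradient criteria."""
    grads = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for b, (x, _) in enumerate(loader):
        if b == n_batches:
            break
        model.zero_grad()
        model(x).norm().backward()              # scalar surrogate for f
        for n, p in model.named_parameters():
            if p.grad is not None:
                grads[n] += p.grad / n_batches  # running mean over batches
    weights = {n: p.detach() for n, p in model.named_parameters()}
    wxgrad = {n: weights[n] * grads[n] for n in grads}
    return weights, grads, wxgrad               # fine-grained phi outputs
```

A reduction \(\psi\) (Section 2.3), e.g. the mean absolute value or the \(l_{\infty}\) norm over each returned tensor, then turns these fine-grained maps into one scalar score per layer.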
SmoothGrad (Smilkov et al., 2017), on the one hand, consists in computing the expected value of the gradient magnitude within this neighborhood: \[\text{Smooth}\nabla:(f,\mathcal{X})\rightarrow\left(\mathbb{E}_{\mathcal{X},\mathcal{N}}\left[\frac{\partial f}{\partial w_{l}+\mathcal{N}}\right]\right)_{l\in\{1,\dots,L\}} \tag{9}\] VarGrad (Adebayo et al., 2018), on the other hand, measures the variance rather than the expectation over the weights' neighborhood: \[\text{Var}\nabla:(f,\mathcal{X})\rightarrow\left(\mathbb{V}_{\mathcal{X},\mathcal{N}}\left[\frac{\partial f}{\partial w_{l}+\mathcal{N}}\right]\right)_{l\in\{1,\dots,L\}} \tag{10}\]

#### Black Box Criterion

Originally, black-box attribution methods aimed at iteratively explaining the sensitivity of a DNN without accessing intermediate activations and weights, hence without using gradients, simply by zeroing out pixels and observing the induced accuracy drop. These methods have however fallen out of favor due to high computational costs. HSIC (Novello, Fel, and Vigouroux, 2022) is a recent attempt at designing faster black-box attribution methods; it consists in modelling the dependencies between image regions or patches and variations of the predictive function. We propose to adapt this method by grouping together weights belonging to different neurons, and denote this operation hsic: \[\text{HSIC}:(f,\mathcal{X})\rightarrow\left(\mathbb{E}_{\mathcal{X}}\left[\text{hsic}(w_{1})\right],...,\mathbb{E}_{\mathcal{X}}\left[\text{hsic}(w_{L})\right]\right) \tag{11}\]

### Reduction Methods

In addition to the choice of a criterion for fine-grained weight relevance estimation, we also need to propose candidates for the reduction function \(\psi\) in Equation 3. This function needs to be a projection onto an ordered set such as \(\mathbb{R}\). In our experiments, we studied several reduction options, the first and simplest of which is the average of the absolute values for each dimension of the computed fine-grained criterion. For instance, if we choose \(W\) as function \(\phi\), \(C(f,\mathcal{X})_{l}\) boils down to computing the mean absolute value among weights for each layer \(l\). Similarly, we investigate using percentiles as well as the \(l_{1}\), \(l_{2}\) and \(l_{\infty}\) norms.

## 3 Experiments

Our empirical evaluation of the proposed criteria is three-fold. First, we propose a novel testbed for evaluating the relevance of each criterion to measure layer-wise sensitivity, and rank the layers accordingly (Equation 2). Second, we demonstrate the importance of layer-wise sensitivity assessment for designing stronger baselines for DNN compression (pruning and quantization). Third, we show its benefits for robustness to hardware failures in realistic scenarios.

### Layer Importance Ranking

Synthetic dataset: to evaluate the criteria proposed in Section 2 for layer-wise sensitivity assessment, we constructed a dataset of models and their corresponding layer-ranking ground truth. Specifically, we considered a simple binary classification task on the Moon dataset (Pedregosa et al., 2011). We consider various DNN designs, including vanilla sequential networks (similar to the VGG (Simonyan and Zisserman, 2014) architecture), networks that include skip-connection (_skip_) blocks (such as the ResNet network family (He, Zhang et al., 2016), with and without stochastic depth (_skip+SD_) (Huang et al., 2016)) as well as transformer (_transfo_) architectures.
For each of these architectural designs, we randomly sample the number of blocks or layers (uniformly between 2 and 6) as well as the layer width (uniformly between 8 and 128) for each layer. We trained each network using the ADAM optimizer with learning rate \(0.01\) for 6 epochs. Every network reached approximately \(100\%\) test accuracy.

Layer importance ground truth generation: to generate the ground truth layer ranking, we apply a perturbation to the weights or activations and measure its impact on the final accuracy, _i.e._ we directly measure \(\mathcal{I}(f,\epsilon_{l})\) (Equation 1) induced by specific layer-wise perturbations \(\epsilon_{l}\). These perturbations were sampled from different distributions that simulate several scenarios. For each noise distribution, we varied the signal-to-noise ratio in order to evaluate the behavior of the model w.r.t. more or less difficult settings.

* Multiplicative impulse (pepper) noise, denoted \(\mathcal{U}\), where a uniformly drawn random subset (corresponding to a proportion between 0 and \(100\%\)) of weights or activations is set to 0. This corresponds to unstructured (_i.e._ at the weight level) or structured (_i.e._ at the channel or neuron level) pruning, respectively.
* Additive Gaussian noise \(\mathcal{N}(0,\sigma)\), with \(\sigma\in(0,\max(w\in w_{l})]\), applied to either weights or activations. This setting bears similarities with the quantization process, as small, additive perturbations are added to each weight or activation.
* Additive impulse or Dirac \(\mathcal{D}\) noise, where a large perturbation (between 0 and \(\max(w\in w_{l})\)) is applied to a proportion between 0 and \(100\%\) of uniformly drawn random weights or activations. This bears similarity with random bit swaps, as a restricted set of weights or activations undergoes significant, non-zero changes.

In what follows, we evaluate the capacity of each criterion (as a combination of a fine-grained criterion and a reduction method, both of which will be assessed separately) to guess the correct order (as indicated by \(\mathcal{I}(f,\epsilon_{l})\) for all above-mentioned perturbations \(\epsilon_{l}\)): this setting is however very challenging, as each order has to be retrieved exactly. Lastly, we ensured that the dataset is not biased, _i.e._ that there is sufficient diversity in terms of layer ranking for the different architectures and perturbations. These elements can be found in the dataset's online description.

Empirical criteria validation and comparison: first, a comparison between the different weight relevance criteria is summarized in Table 1. On average, it appears that zeroth- and first-order attribution techniques such as gradients (\(\nabla\)) and weight \(\times\) gradient (\(W\times\nabla\)) achieve the highest results, on par with more sophisticated criteria like GradCam++, at a lower computational cost. Furthermore, when looking specifically at more complicated DNN architectures such as networks with skip-connections and transformers, we observe that the more naive techniques do not work well, while more recent and complex techniques such as GradCam++, SmoothGrad, VarGrad and HSIC achieve the best results. However, no method truly achieves satisfactory results on transformers in this challenging setting. Second, Table 2 shows results for the different reduction methods, averaged among all weight relevance criteria. Using the \(l_{1}\) norm achieves the worst results as, contrary to the simple average reduction method, it does not normalize the importance measurement w.r.t.
the layer width. Overall, the best results are obtained using the \(l_{\infty}\) norm, _i.e._ intuitively considering the largest sensitivity (_i.e._ fine-grained weight relevance) across all neurons in the layer. Generally speaking, there is no all-around best fine-grained weight relevance method that works for all architectures and perturbations. However, we can highlight a few key takeaways:

* The \(\nabla\), \(W\times\nabla\), GradCam++ and IDGI methods are, generally speaking, solid candidate criteria for weight relevance estimation.
* Statistical criteria such as Var\(\nabla\) are good choices for residual architectures.
* GradCam++, Smooth\(\nabla\) and Var\(\nabla\) are the best for transformers, though not as reliable as for other architectures.
* The \(l_{\infty}\) norm is the best reduction method in all tested cases.

Bearing this in mind, in what follows, we apply layer-wise importance ranking to DNN compression as well as robust inference.

### Applications to DNN Compression

#### Experiments on pruning

From layer-wise relevance to pruning budgetization: given a target pruning rate \(\gamma\) for a DNN \(f\) with weights \(w\), we ought to remove \(\gamma\sum_{l}\Omega(w_{l})\) weights, where \(\Omega(w_{l})\) denotes the number of weights in \(w_{l}\). We simply assign the per-layer pruning rates \(\gamma_{l}\) based on the weighting given by the importance score: \[\gamma_{l}=\alpha*\gamma*C(f,\mathcal{X})_{l} \tag{12}\] with \(\alpha\) a normalizing constant such that \(\gamma\sum_{l}\Omega(w_{l})=\sum_{l}\gamma_{l}\Omega(w_{l})\).

DNN pruning results: in the frame of the TinyML perf challenge (community 2021), we use the proposed criteria to set the budget and prune a ResNet-8 network, removing \(20\%\) of neurons (_i.e._ structured pruning). Table 3 shows the accuracies of the models pruned using all the criteria introduced in Section 2 for layer-wise relevance estimation and budgeting (in rows, with \(l_{\infty}\) as the reduction method) and the same criteria for selecting neurons to prune within each layer (in columns). First, overall, we observe a wide discrepancy in the accuracies averaged among columns (last line) as well as among rows (last column): this suggests that, while the method for intra-layer neuron pruning, onto which the community has been focusing so far, is very important, the inter-layer relevance and budgetization is also of paramount importance. Second, we observe that \(\nabla\) and Var\(\nabla\) perform best overall for layer-wise relevance in that context. In particular, \(\nabla\) for layer-wise relevance assessment combined with IDGI as the neuron pruning criterion offers the best performance, with GradCam++, Smooth\(\nabla\) and Var\(\nabla\) also working well in tandem with integrated gradient (IG, GIG, IDGI) neuron selection criteria, echoing the results from [22]. This confirms the results obtained in Section 3.1. Overall, these results motivate further research on, and distinction between, inter- and intra-layer importance evaluation criteria.

Figure 1: Mixed precision assignment using IDGI (Yang, Wang, and Bilgic 2023) and PowerQuant (Yvinec et al. 2023).

#### Experiments on DNN quantization

From layer-wise relevance to budgeting layers for quantization: given a ranking of layers \(f_{1},...,f_{L}\), we assign a layer-wise quantization bit-width based on a target average bit-width \(b\). For instance, if \(b=4\), we assign \(3\) bits to the least important third of the layers, \(5\) bits to the most important third and \(4\) bits to the remaining layers.
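As a concrete reference, here is a minimal sketch of both budgeting rules just described (Eq. 12 for pruning rates, the thirds rule for bit-widths). All names are illustrative, and `scores` is assumed to hold one criterion value \(C(f,\mathcal{X})_{l}\) per layer:

```python
import numpy as np

def pruning_rates(scores, gamma, n_weights):
    """Per-layer pruning rates following Eq. (12): rates proportional
    to the per-layer scores, rescaled by alpha so the global pruning
    rate stays gamma (clipping may slightly break the equality)."""
    s, n = np.asarray(scores, float), np.asarray(n_weights, float)
    rates = gamma * s
    rates *= gamma * n.sum() / (rates * n).sum()  # alpha normalization
    return np.clip(rates, 0.0, 1.0)

def bitwidth_budget(scores, b=4):
    """Thirds rule from the text: b-1 bits for the least important
    third of the layers, b+1 for the most important third, b else."""
    order = np.argsort(scores)            # ascending layer importance
    bits = np.full(len(scores), b)
    third = len(scores) // 3
    bits[order[:third]] = b - 1
    bits[order[-third:]] = b + 1
    return bits
```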
Furthermore, as in [1], we quantize activations to 8 bits. Also, contrary to prior work [23, 1], we do not apply any arbitrary quantization bit-width to the first or last layer, as our goal is to show the ability of the proposed criteria to properly rank layers without manual intervention or ad hoc heuristics.

Quantization results: we implement the aforementioned mixed-precision scheme (using the proposed criteria to set the bit-width budget for each layer) on a ResNet-50 pretrained on ImageNet and compare it with a baseline constant-precision quantization in four settings: W6/A8 using two state-of-the-art methods, DFQ [19] and SQuant [10], as well as W4/A4 using PowerQuant [22] and Adaround [19]. Table 4 shows the test accuracies of the quantized networks. Once again, we observe significant discrepancies between the different criteria, with e.g. \(W\) providing results significantly below the baseline and \(\nabla\) and Var\(\nabla\) systematically improving over it. This shows that estimating layer relevance is crucial to the performance of mixed-precision quantization. Overall, as in previous experiments, \(\nabla\), \(W\times\nabla\), GradCam++ as well as Var\(\nabla\) perform well on this benchmark, with the addition of IDGI with PowerQuant. Figure 1 illustrates the budget obtained using IDGI [22] as the layer-wise relevance criterion over PowerQuant [22] as the baseline quantization method in W4/A8. Interestingly, our approach sets a high budget for the first layer of the network (as well as a decreasing bit-width budget towards the end of the network): this confirms the importance of assigning a larger bit-width to the first layer of the model, as already empirically remarked in [19].

To sum it up, these results on DNN pruning and quantization suggest that assessing layer-wise relevance and using it to budgetize layers in a simple, straightforward way is already enough to improve existing pruning and quantization techniques, which is remarkable considering that implementing successful post-training, few-shot mixed-precision schemes is non-trivial in practice [23].

| Archi | Noise | W | \(\nabla\) | \(W\times\nabla\) | GCam++ | IG | GIG | IDGI | Smooth\(\nabla\) | Var\(\nabla\) | HSIC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Vanilla | \(\mathcal{U}\) on W | 25 | 76 | 76 | 76 | 59 | 49 | 71 | 71 | 28 | 38 |
| Vanilla | \(\mathcal{N}\) on W | 42 | 68 | 68 | 68 | 59 | 56 | 66 | 66 | 36 | 47 |
| Vanilla | \(\mathcal{D}\) on W | 32 | 60 | 60 | 33 | 41 | 60 | 48 | 60 | 43 | 60 |
| Vanilla | \(\mathcal{U}\) on act | 82 | 82 | 82 | 33 | 67 | 78 | 78 | 53 | 29 | 77 |
| Vanilla | \(\mathcal{N}\) on act | 82 | 82 | 82 | 33 | 67 | 78 | 78 | 53 | 29 | 63 |
| Vanilla | \(\mathcal{D}\) on act | 82 | 82 | 82 | 33 | 67 | 77 | 77 | 55 | 28 | 49 |
| skip | \(\mathcal{U}\) on W | 46 | 47 | 43 | 64 | 44 | 43 | 47 | 52 | 77 | 28 |
| skip | \(\mathcal{N}\) on W | 23 | 31 | 32 | 71 | 27 | 33 | 28 | 28 | 39 | 34 |
| skip | \(\mathcal{D}\) on W | 30 | 30 | 32 | 72 | 28 | 31 | 27 | 31 | 47 | 16 |
| skip+SD | \(\mathcal{U}\) on W | 42 | 42 | 42 | 25 | 41 | 42 | 42 | 17 | 10 | 22 |
| skip+SD | \(\mathcal{N}\) on W | 21 | 24 | 24 | 53 | 23 | 23 | 22 | 22 | 32 | 40 |
| skip+SD | \(\mathcal{D}\) on W | 20 | 21 | 17 | 47 | 19 | 19 | 20 | 20 | 29 | 12 |
| transfo | \(\mathcal{U}\) on W | 0 | 0 | 0 | 22 | 0 | 0 | 0 | 6 | 12 | 0 |
| transfo | \(\mathcal{N}\) on W | 1 | 1 | 1 | 10 | 0 | 1 | 1 | 18 | 12 | 7 |
| transfo | \(\mathcal{D}\) on W | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 5 | 14 | 0 |
| avg | | 35 | 43 | 43 | 43 | 36 | 39 | 40 | 37 | 31 | 33 |

Table 1: Score (number of correct complete layer orderings) of each fine-grained weight relevance criterion, per architecture and perturbation type.
| Reduction Method | Avg Score |
|---|---|
| average | 37.086 |
| best percentile | 38.636 |
| \(l_{1}\) | 36.109 |
| \(l_{2}\) | 38.234 |
| \(l_{\infty}\) | **39.809** |

Table 2: Average score (number of correct complete orderings) for each reduction method.

| Method | DFQ - ICCV '19 (W6/A8) | SQuant - ICLR '22 (W6/A8) |
|---|---|---|
| W | 73.282 | 74.462 |
| \(\nabla\) | 74.636 | 74.776 |
| \(W\times\nabla\) | 74.724 | 74.864 |
| GCam++ | 74.546 | - |
| IG | 74.710 | 74.212 |
| IDGI | **74.816** | 74.496 |
| Smooth\(\nabla\) | 65.284 | 74.616 |
| Var\(\nabla\) | 74.284 | **74.848** |
| HSIC | 73.902 | 74.600 |
| baseline (W6/A8) | 73.904 | 74.596 |
| full-precision | 75.000 | 75.000 |

Table 4: Test accuracy of the quantized ResNet-50 on ImageNet in the W6/A8 setting (DFQ and SQuant), using each criterion to budgetize layer bit-widths.

In what follows, we show that layer-wise ranking also finds applications for robust inference.

### Robustness to Hardware Failure

Layer ranking and robust inference: in this section, we consider the problem of ensuring robustness w.r.t. random bit-swaps caused by hardware failures that can occur at inference time, e.g. during memory transfers or weight loading. A common solution to overcome this is to verify the computations performed by a layer, _i.e._ by performing these computations twice and comparing the results. To limit the computation overhead induced by redundant computations, we verify only the most important layers, as budgeted by one of the aforementioned criteria.
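A minimal sketch of this selective verification follows, assuming a deterministic model in eval mode with plain tensor outputs (all names are illustrative; `checked_layers` holds the names of the top-ranked layers):

```python
import torch

def add_verification(model, checked_layers):
    """Wrap the most important layers so their forward pass runs twice;
    a mismatch between the two runs signals a bit swap during that
    computation. Unchecked layers keep a single, cheaper forward pass."""
    def hook(mod, inputs, output):
        if getattr(mod, "_verifying", False):
            return None                   # we are inside the second run
        mod._verifying = True
        with torch.no_grad():
            redo = mod(*inputs)           # duplicated computation
        mod._verifying = False
        if not torch.equal(output, redo):
            raise RuntimeError(f"suspected fault in {type(mod).__name__}")
        return None
    handles = [mod.register_forward_hook(hook)
               for name, mod in model.named_modules()
               if name in checked_layers]
    return handles                        # call h.remove() to disable checks
```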
Intuitively, if we rank the layers by importance and gradually increase the number of layers left unchecked under random bit swaps, the accuracy will decrease: in such a case, the slower it decreases, the better the ranking criterion for robust inference.

Robust inference results: in Figure 2, we report the evolution of the accuracy of the model under random bit-swaps with respect to the number of layers that remain unchecked (starting from 1). Similarly to what precedes, we observe the good performance of the \(W\times\nabla\) and GradCam++ criteria. However, in this case, \(\nabla\) is the least performing criterion, which can be attributed to the fact that gradients only measure local changes and specifically target the weights, while the bit-swaps are randomly applied to both weights and activations and can induce huge changes that break this locality principle. Conversely, other methods such as \(W\times\nabla\), GradCam++ and IDGI explicitly use information on both weights and gradients for a more relevant sensitivity assessment in this context. Nonetheless, we show that it is possible to preserve the accuracy with no more than a 1% loss while only checking 17 layers out of 52, which significantly reduces the redundancy overhead.

## 4 Conclusion

In this work, we study and pave the way for future research on the layer-wise sensitivity of deep neural networks. In particular, we start from DNN attribution techniques for understanding input sensitivity, and adapt them to derive candidates for fine-grained sensitivity assessment of the whole predictive function w.r.t. particular weights in a layer. We also list a number of candidate reduction methods to integrate this fine-grained information into a layer-wise measurement. In order to evaluate these methods and future works, we designed a synthetic dataset of neural network architectures, from sequential to more complex designs, with an exhaustive study of the sensitivity of the predictive function w.r.t. perturbations applied to each layer as the ground truth. We experimentally demonstrated that it is possible to retrieve the correct layer ranking in this setting, as well as derive best practices for layer-wise sensitivity assessment. We then applied this framework to two practical applications. First, DNN compression _via_ pruning and quantization (mixed precision): in this setup, we show that, with little effort, we can improve the performance of these methods by straightforwardly translating cross-layer relevance measurements into budgets for compression. Second, for robust inference, we can apply our layer-wise sensitivity assessment to check only the most relevant layers and avoid random bit-swaps caused by hardware failures.

Limitations and future work: first, one limitation of the proposed work is that all the proposed candidate criteria fail to achieve satisfactory performance on transformers, which are currently taking over the NLP and computer vision domains. More complex and specific criteria may need to be considered to solve this issue. Second, the ideas proposed in this paper could offer strong benefits towards more efficient DNN design: for instance, balancing per-layer importance and runtime cost (e.g. in a framework similar to [2]) could lead to more practical architectures, as well as pruning and quantization schemes. Furthermore, the study of intermediate levels of granularity (e.g.
neurons, groups of neurons or computational blocks) could lead to even more efficient inference as well as less costly monitoring of hardware failures.
2305.04963
From Relational Pooling to Subgraph GNNs: A Universal Framework for More Expressive Graph Neural Networks
Relational pooling (RP) is a framework for building more expressive and permutation-invariant graph neural networks. However, there is limited understanding of the exact enhancement in the expressivity of RP and its connection with the Weisfeiler-Lehman hierarchy. Starting from RP, we propose to explicitly assign labels to nodes as additional features to improve the expressive power of message passing neural networks. The method is then extended to higher-dimensional WL, leading to a novel $k,l$-WL algorithm, a more general framework than $k$-WL. Theoretically, we analyze the expressivity of $k,l$-WL with respect to $k$ and $l$ and unify it with a great number of subgraph GNNs. Complexity reduction methods are also systematically discussed to build powerful and practical $k,l$-GNN instances. We theoretically and experimentally prove that our method is universally compatible and capable of improving the expressivity of any base GNN model. Our $k,l$-GNNs achieve superior performance on many synthetic and real-world datasets, which verifies the effectiveness of our framework.
Cai Zhou, Xiyuan Wang, Muhan Zhang
2023-05-08T18:00:50Z
http://arxiv.org/abs/2305.04963v1
# From Relational Pooling to Subgraph GNNs: A Universal Framework for More Expressive Graph Neural Networks

###### Abstract

Relational pooling (RP) is a framework for building more expressive and permutation-invariant graph neural networks (GNN). However, there is limited understanding of the exact enhancement in the expressivity of RP and its connection with the Weisfeiler-Lehman (WL) hierarchy. Starting from RP, we propose to explicitly assign labels to nodes as additional features to improve the graph isomorphism distinguishing power of message passing neural networks. The method is then extended to higher-dimensional WL, leading to a novel \(k,l\)-WL algorithm, a more general framework than \(k\)-WL. We further introduce the subgraph concept into our hierarchy and propose a localized \(k,l\)-WL framework, incorporating a wide range of existing work, including many subgraph GNNs. Theoretically, we analyze the expressivity of \(k,l\)-WL w.r.t. \(k\) and \(l\) and compare it with the traditional \(k\)-WL. Complexity reduction methods are also systematically discussed to build powerful and practical \(k,l\)-GNN instances. We theoretically and experimentally prove that our method is universally compatible and capable of improving the expressivity of any base GNN model. Our \(k,l\)-GNNs achieve superior performance on many synthetic and real-world datasets, which verifies the effectiveness of our framework.

Machine Learning, Graph Neural Networks

## 1 Introduction

Graph-structured data has recently revealed significant importance in many fields, including bio-informatics, combinatorial optimization and social-network analysis, in which graph neural networks (GNNs) achieve great successes (Bronstein et al., 2016; Klicpera et al., 2020; Dai et al., 2017). The message passing neural network (MPNN) is one of the simplest and most commonly used GNNs (Zhou et al., 2018), but its expressivity in distinguishing non-isomorphic graphs is bounded by the one-dimensional Weisfeiler-Lehman test (1-WL) (Xu et al., 2018; Morris et al., 2018). Therefore, designing GNNs with stronger expressivity has aroused increasing attention. Numerous approaches have been proposed to enhance GNN expressivity. Relational Pooling (RP) (Murphy et al., 2019; Chen et al., 2020) is a framework to build powerful permutation-invariant models by symmetrizing expressive permutation-sensitive base models. Concretely, RP first feeds the adjacency matrix to a powerful permutation-sensitive model, like a Multi-Layer Perceptron (MLP), to achieve strong expressivity. Then permutation invariance is guaranteed by averaging or summing over the representations under all permutations of node IDs (hence all permutations of the adjacency matrix). However, RP is impractical for most real-world graphs due to the \(O(n!)\) complexity, where \(n\) is the number of nodes. Based on RP, Chen et al. (2020) further introduce a local version called Local Relational Pooling (LRP), which performs permutation and averaging within an induced subgraph. LRP's time complexity is reduced to the number of subgraphs, \(O(n^{l})\), where \(l\) is the subgraph size. But so far, there is a lack of theoretical analysis on the expressivity of LRP.

Figure 1: The expressivity hierarchy of \(k,l\)-WL. The blue arrows indicate Theorem 5.5 and Theorem 5.6, showing that increasing \(k\) and \(l\) will strictly increase expressivity. The yellow arrows imply Theorem 5.7, which states that \(k+1,l\)-WL is strictly more powerful than \(k,l+1\)-WL when \(k\geq 2\).
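For intuition, a minimal sketch of exact RP on a tiny graph follows (illustrative only: `encoder` stands for any permutation-sensitive module, here assumed to accept a flattened \(n^{2}\) adjacency vector):

```python
import itertools
import torch

def relational_pooling(adj, encoder):
    """Exact RP: feed every node permutation of the adjacency matrix to
    a permutation-sensitive encoder and average the outputs, which makes
    the result permutation-invariant by construction. The O(n!) loop is
    only feasible for very small n, which is what motivates LRP."""
    n = adj.size(0)
    outs = []
    for perm in itertools.permutations(range(n)):
        idx = torch.tensor(perm)
        adj_pi = adj[idx][:, idx]                 # permuted adjacency
        outs.append(encoder(adj_pi.reshape(-1)))  # flattened n*n input
    return torch.stack(outs).mean(dim=0)          # symmetrized representation
```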
In this paper, we propose ID-MPNN, a variation of LRP that avoids the prohibitive time complexity of RP. Instead of using an MLP as the base encoder, ID-MPNN runs an MPNN on the whole graph. Meanwhile, to improve expressivity, \(l\) nodes in the whole graph are labeled with \(1,2,...,l\). Through the lens of ID-MPNN, we establish a connection between (local) Relational Pooling and subgraph GNNs. Furthermore, by replacing MPNN with more powerful \(k\)-WL base encoders, we propose \(k,l\)-WL, a universal framework for many expressive GNNs (shown in Figure 2). Intuitively, \(k,l\)-WL can be viewed as running \(k\)-WL on a graph with \(l\) nodes labeled and symmetrizing over the labelings. We theoretically analyze the expressivity of \(k,l\)-WL and build a complete expressivity hierarchy of the algorithms with different \(k,l\), as shown in Figure 1. As a universal framework, \(k,l\)-WL incorporates a wide range of existing algorithms and GNN models, including relational pooling, the original \(k\)-WL, many subgraph GNNs, and some other GNN extensions. In summary, the organization of this paper and our main contributions are as follows.

1. Section 4 proposes ID-MPNN to improve LRP. ID-MPNN is further extended to a general framework, \(k,l\)-WL, which incorporates a majority of existing WL and GNN variations, including RP and many subgraph GNNs.
2. Section 5 theoretically analyzes the algorithm's expressivity and builds a strict \(k,l\)-WL expressivity hierarchy, which is more general than the \(k\)-WL hierarchy.
3. Section 6 discusses practical issues in our \(k,l\)-WL framework and proposes techniques to improve scalability.
4. Section 7 evaluates \(k,l\)-WL with extensive experiments on both synthetic and real-world datasets. Our models achieve state-of-the-art results on several tasks and significantly outperform previous works based on RP.

## 2 Related Work

Graph Neural Networks and the Weisfeiler-Lehman test: Weisfeiler-Lehman tests are a classical family of algorithms to distinguish non-isomorphic graphs. Previous works have built connections between the expressivity of GNNs and the WL hierarchy (Xu et al., 2018; Frasca et al., 2022; Morris et al., 2019, 2020). We propose the \(k,l\)-WL hierarchy, which is finer than \(k\)-WL and covers a wide range of existing models.

Subgraph GNNs: subgraph GNNs encode a set of subgraphs instead of the original graph for graph representation learning. Through careful designs, they can have both strong expressivity and good scalability. Many subgraph GNNs sample a subgraph for each node (You et al., 2021; Zhang et al., 2021; Bouritsas et al., 2023; Sun et al., 2021; Zhao et al., 2021). Frasca et al. (2022); Zhang et al. (2023) upper bound the expressivity of these subgraph GNNs by 3-WL. \(I^{2}\)-GNN extracts a subgraph for each connected node pair and boosts the cycle counting power (Huang et al., 2022). Qian et al. (2022) further extract a subgraph for each \(l\)-tuple of nodes and propose \(l\)-OSAN, which is equivalent to our ID-MPNN and \(2,l\)-WL. We propose a more general framework, \(k,l\)-WL, to incorporate most existing subgraph GNNs.

## 3 Preliminary

Given an undirected graph \(G=(V,E,X)\), where \(V,E\) are the node set and edge set respectively, and \(X_{i}\) is the node feature of node \(i\), let \(N(v,G)=\{u\in V|(u,v)\in E\}\) denote the set of neighbors of node \(v\) in graph \(G\). Let \([n]\) denote the set \(\{1,2,...,n\}\).
Given a \(k\)-tuple \(\mathbf{a}\in V^{k}\) and an \(l\)-tuple \(\mathbf{b}\in V^{l}\), let \(\mathbf{a}|\mathbf{b}\) denote the \((k+l)\)-tuple formed by the concatenation of \(\mathbf{a}\) and \(\mathbf{b}\). Let \(\mathbf{a}_{i}\) denote the \(i\)-th element in tuple \(\mathbf{a}\), and \(\psi_{i}(\mathbf{a},u)\) denote the tuple produced by replacing \(\mathbf{a}_{i}\) with \(u\). Let \(\mathbf{a}_{a:b}\) denote the slice of tuple \(\mathbf{a}\) containing the \(a,a+1,...,b-1\)-th elements, where \(a\) is omitted if \(a=1\), \(b\) is omitted if \(b=|\mathbf{a}|+1\), and \(|\mathbf{a}|\) is the length of \(\mathbf{a}\).

The Weisfeiler-Lehman test (1-WL) is a common graph isomorphism test, which also bounds the expressivity of message passing neural networks (MPNNs) (Xu et al., 2018). It initially assigns a color \(c_{1}^{0}(v,G)\) to each node \(v\) in graph \(G\) according to \(X_{v}\). If the graph has no node features, the colors of all nodes are the same. Then, 1-WL iteratively updates the node colors. The \(t\)-th iteration is as follows. \[c_{1}^{t}(v,G)=\text{Hash}\left(c_{1}^{t-1}(v,G),\{\{c_{1}^{t-1}(u,G)\mid u\in N(v,G)\}\}\right) \tag{1}\] where \(c_{1}^{t}(v,G)\) is the color of node \(v\) at the \(t\)-th iteration. The color of \(v\) is updated by its original color and the colors of its neighbors. The color of the whole graph is the multiset of the node colors \[c_{1}^{t}(G)=\text{Hash}(\{\{c_{1}^{t}(v,G)|v\in V(G)\}\}). \tag{2}\] There still exist non-isomorphic graphs that 1-WL cannot differentiate. The \(k\)-dimensional Weisfeiler-Lehman test has stronger expressivity. It assigns colors to all \(k\)-tuples and iteratively updates them. The initial color \(c_{k}^{0}(\mathbf{v},G)\) of tuple \(\mathbf{v}\in V(G)^{k}\) is determined by the isomorphism type of tuple \(\mathbf{v}\) (Maron et al., 2019) (see Appendix E). At the \(t\)-th iteration, the color updating scheme is \[c_{k}^{t}(\mathbf{v},G)=\text{Hash}\left(c_{k}^{t-1}(\mathbf{v},G),\left(\{\{c_{k}^{t-1}(\psi_{i}(\mathbf{v},u),G)|u\in V(G)\}\}\,|\,i\in[k]\right)\right), \tag{3}\] where \(\psi_{i}(\mathbf{v},u)\) means replacing the \(i\)-th element in \(\mathbf{v}\) with \(u\). The color of \(\mathbf{v}\) is updated by its original color and the colors of its high-order neighbors \(\psi_{i}(\mathbf{v},u)\).
The color of the whole graph is the multiset of all tuple colors, \[c_{k}^{t}(G)=\text{Hash}(\{\{c_{k}^{t}(\mathbf{v},G)|\mathbf{v}\in V(G)^{k}\}\}). \tag{4}\] Note that \(k\)-WL (\(k\geq 2\)) takes a different form from \(1\)-WL. Our discussion mainly focuses on the \(k\geq 2\) cases. Since \(1\)-WL has the same expressivity as \(2\)-WL, we can directly apply the conclusions for \(2\)-WL to \(1\)-WL.

## 4 \(k,l\)-WL: A Universal Framework

### Message passing with labels: enhancement by asymmetry

The expressivity of models built by Relational Pooling (RP) depends on the power of the base encoder before symmetrization. Some previous works use an MLP (Chen et al., 2020) or an RNN (Huang et al., 2022) to capture relations between nodes. They have high expressivity but little inductive bias for graph data. Moreover, in practical settings, where Local Relational Pooling (LRP) is used, they can only encode induced subgraphs and lose the global information of the graph. To solve these problems, we introduce asymmetry to MPNN by assigning nodes unique labels (which are additional features, as opposed to the node indices that merely name different nodes) and use _MPNN with labels_ as the base encoder. Given an input graph \(G\), MPNN with labels first assigns label \(i\) (node ID) to each node \(i\) as an additional feature and then runs standard message passing on the labeled graph. MPNN with full labels is expressive enough to encode the full graph information: MPNN can encode the multiset of neighbors into node representations. With node ID labels, each node's representation can identify the neighboring nodes connected to it. Therefore, the representation of the whole graph can identify the connectivity between nodes in the whole graph and thus enables distinguishing non-isomorphic graphs. Moreover, the standard message passing introduces an inductive bias for graph data. MPNN with labels can also be easily adapted to the LRP setting. Instead of assigning all nodes unique labels, MPNN with labels can assign \(1,2,...,l\) to only \(l\) nodes and run message passing on the whole graph. Therefore, MPNN with labels can still capture global graph features, unlike standard LRP, which only takes induced subgraphs as input. Our ID-MPNN combines MPNN with labels with LRP. An ID-MPNN parameterized by \(l\) (called \(l\)-IDMPNN) explicitly assigns \(l\) **unique** labels (IDs) to \(l\) nodes (which can be duplicated, hence \(n^{l}\) labeled graphs in total) as additional features. Then, standard message passing is performed on each labeled graph. Finally, the representations of these labeled graphs are aggregated to produce the original graph representation. A contemporary work by Qian et al. (2022) also proposes similar models. However, they neither connect ID-MPNN with LRP nor extend ID-MPNN to a more general framework, \(k,l\)-WL, as in the following section.

### \(k,l\)-WL: enhancement by higher dimension

So far, it is natural to ask: what if we replace the MPNN with other, more powerful GNNs? Equivalently, can 1-WL be replaced by higher-dimensional WL tests on the labeled graphs? When all nodes are assigned unique labels, even MPNN can distinguish all non-isomorphic graphs, and thus using more powerful models is meaningless. However, when the number of labeled nodes is fixed, we give a positive answer: if we run \(k\)-WL (\(k\geq 3\)) with \(l\) labels, it will be more powerful than \(1\)-WL (with \(l\) labels). We name running \(k\)-WL on labeled graphs with \(l\) IDs as \(k,l\)-WL, which is formally defined as follows.
1. Given an \(l\)-tuple of nodes \(\mathbf{v}\) in graph \(G=(V,E,X)\), the labeled graph is \(G^{\mathbf{v}}=(V,E,X^{\mathbf{v}})\), where \(\forall u\in V,X_{u}^{\mathbf{v}}=\text{Hash}\left(X_{u},\{i\mid\mathbf{v}_{i}=u,i\in[l]\}\right)\). In other words, node \(\mathbf{v}_{i}\) will have an extra label \(i\).
2. \(k,l\)-WL then runs \(k\)-WL on each labeled graph \(G^{\mathbf{v}}\).
   * \(c_{k}^{0}(\mathbf{u},G^{\mathbf{v}})\), the color of \(k\)-tuple \(\mathbf{u}\) in graph \(G^{\mathbf{v}}\), is initialized by the isomorphism type of \(\mathbf{u}\) in \(G^{\mathbf{v}}\).
   * The tuple color at the \(t\)-th iteration: \[c_{k}^{t}(\mathbf{u},G^{\mathbf{v}})=\text{Hash}(c_{k}^{t-1}(\mathbf{u},G^{\mathbf{v}}),(\{\{c_{k}^{t-1}(\psi_{i}(\mathbf{u},w),G^{\mathbf{v}})|w\in V\}\}|i\in[k])).\] (5)
   * The full graph color at the \(t\)-th iteration: \[c_{k}^{t}(G^{\mathbf{v}})=\text{Hash}(\{\{c_{k}^{t}(\mathbf{u},G^{\mathbf{v}})|\mathbf{u}\in V^{k}\}\}).\] (6)
3. The color of the whole graph is produced by aggregating the colors of all labeled graphs. \[c_{k,l}^{t}(G)=\text{Hash}\big(\{\{c_{k}^{t}(G^{\mathbf{v}})|\mathbf{v}\in V^{l}\}\}\big).\] (7)

Here, we briefly explain the \(k,l\)-WL algorithm, a more general and powerful form of \(k\)-WL. The key difference between \(k,l\)-WL and the traditional \(k\)-WL lies in the initialization process. In \(k\)-WL, by applying a hash function, two \(k\)-tuples will get the same initial color if and only if they are from the same isomorphism class. However, this initialization results in limited initial colors due to the limited number of isomorphism classes within \(k\)-tuples. For example, \(1\)-WL assigns all nodes the same color, since there is only one isomorphism type of \(1\)-tuple, which further restricts the expressivity of the following steps in the algorithm. In comparison, at the initialization of \(k,l\)-WL, unique labels are assigned to \(l\) nodes. We then assign colors to the \(k\)-tuples according to their isomorphism types with respect to the labeled tuples. See Appendix E for details and the mathematical forms. We will show that this initialization makes \(k,l\)-WL more expressive than \(k\)-WL. The update scheme in \(k,l\)-WL is then the same as that of \(k\)-WL. The method of explicitly assigning IDs to certain nodes aligns well with the original \(k\)-WL hierarchy, and we will show in Section 5 that \(k,l\)-WL is **strictly** more powerful than \(k\)-WL when \(l>0\). We refer our readers to Appendix E for detailed proofs, Appendix G for an example which helps to better understand the effect of explicitly introducing IDs, and Appendix H for more insights. Additionally, note that any \(k,l\)-WL algorithm has a corresponding GNN implementation, which we name \(k,l\)-GNN. Theoretically, if the instance contains a base GNN encoder equivalent to \(k\)-WL, explicitly embeds the IDs of \(l\)-tuples, and uses an injective pooling function, \(k,l\)-GNN is as powerful as \(k,l\)-WL. To align with existing methods, we design two practical network architectures to implement \(k,l\)-GNN, as shown in Figure 3.

### Unifying existing hierarchies

Here we briefly discuss the connection between our framework and previous work, including subgraph GNNs, relational pooling, the GNN extensions mentioned in (Papp and Wattenhofer, 2022), and other methods. We refer our readers to Appendix F for more details.
Firstly, \(k,l\)-WL can incorporate all Relational Pooling (RP) and Local Relational Pooling methods, since node marking is the most general and expressive form and can simulate all other extensions (Papp and Wattenhofer, 2022). Secondly, \(k,l\)-WL incorporates a wide range of subgraph GNNs. Zhang et al. (2023) show that all node-based subgraph GNNs fall into one of 6 equivalent classes of Subgraph Weisfeiler-Lehman tests (SWL). Remarkably, SWL is exactly \(1,1\)-WL (and equivalently, \(2,1\)-WL) in our framework, which reveals the connection between our work and many other subgraph GNNs unified by Zhang et al. (2023). Moreover, \(k,l\)-WL also incorporates some subgraph GNNs out of the scope of SWL (Zhang et al., 2023), such as \(I^{2}\)-GNN (Huang et al., 2022). Our \(1,2\)-WL is slightly more powerful than \(I^{2}\)-GNN, since we consider all \(2\)-tuples to label, while \(I^{2}\)-GNN only considers connected \(2\)-tuples. \(1,2\)-WL can distinguish some non-isomorphic graph pairs that SWL and 3-WL fail to discriminate, and the algorithm becomes even more powerful as we increase \(k\) or \(l\). Thirdly, while a number of works such as OSAN (Qian et al., 2022) are a strict subclass of our framework, there are still some works that cannot be incorporated directly. For example, \(k,l\)-WL operates independently on different labeled graphs and does not include interactions between labeled (sub)graphs, as in Zhao et al. (2021) and the more expressive SWL variants in Zhang et al. (2023). Introducing inter-labeled-graph message passing would increase the expressivity of \(k,l\)-WL, but at the price of additional computation cost. It would also be complicated to analyze its theoretical expressivity if we introduced labeled-graph interactions, which we leave for future work. Finally, our framework incorporates all four kinds of GNN extensions in Papp and Wattenhofer (2022): higher-order WL, counting substructures, injecting local information, and marking nodes. Due to the limited space, the detailed discussion is in Appendix F.

## 5 The Expressivity Hierarchy of \(k,l\)-WL

In this section, we theoretically analyze the expressivity of \(k,l\)-WL. We first define the comparison between algorithms' expressivity: for any algorithms \(A\) and \(B\), denoting the final colors of graph \(G\) computed by them as \(c_{A}(G)\) and \(c_{B}(G)\) respectively, we say:

* \(A\) is **more powerful** than \(B\) (\(B\preceq A\)) if for any pair of graphs \(G\) and \(H\), \(c_{A}(G)=c_{A}(H)\Rightarrow c_{B}(G)=c_{B}(H)\). Otherwise, there exists a pair of graphs that \(B\) can differentiate while \(A\) cannot, denoted as \(B\not\preceq A\).
* \(A\) is **as powerful as** \(B\) (\(A\cong B\)) if \(B\preceq A\wedge A\preceq B\).
* \(A\) is **strictly more powerful** than \(B\) (\(B\prec A\)) if \(B\preceq A\ \wedge\ A\not\cong B\), i.e., for any pair of graphs \(G\) and \(H\), \(c_{A}(G)=c_{A}(H)\Rightarrow c_{B}(G)=c_{B}(H)\), and there exists at least one pair of graphs \(G,H\) s.t. \(c_{B}(G)=c_{B}(H),c_{A}(G)\neq c_{A}(H)\).
* \(A\) and \(B\) are **incomparable** (\(A\nsim B\)) if \(A\not\preceq B\ \wedge\ B\not\preceq A\). In this case, \(A\) can distinguish a pair of non-isomorphic graphs that cannot be distinguished by \(B\), and vice versa.

### Connection with existing hierarchies

A special case of \(k,l\)-WL is \(l=0\), where no extra labels are attached to the graph. We have

**Theorem 5.1**.: \(\forall k\geq 2\), \(k,0\)-WL \(\cong k\)-WL.

Another special case of \(k,l\)-WL is the \(k=1\) case.
With no label, \(2\)-WL has the same expressivity as \(1\)-WL. We find that with \(l\) labels, the equality still holds.

**Theorem 5.2**.: \(\forall l\geq 0\), \(1,l\)-WL \(\cong 2,l\)-WL,

where \(1,l\)-WL is just \(l\)-OSAN (Qian et al., 2022). A variant of the \(k\)-WL test is the \(k\)-Folklore Weisfeiler-Lehman (\(k\)-FWL) test. It is known that \(\forall k\geq 1\), \(k\)-FWL \(\cong k+1\)-WL. With labels, the equality still holds:

**Theorem 5.3**.: \(\forall k\geq 1,l\geq 0\), \(k,l\)-FWL \(\cong k+1,l\)-WL,

where \(k,l\)-FWL runs \(k\)-FWL on \(l\)-labeled graphs. See Appendix A for more details. \(1,1\)-WL is equivalent to the vanilla subgraph Weisfeiler-Lehman test (SWL(VS)) proposed by Zhang et al. (2023).

**Theorem 5.4**.: \(1,1\)-WL \(\cong\) SWL(VS).

SWL(VS) further unifies various subgraph GNNs, like Nested GNN (Zhang and Li, 2021) and ID-GNN (You et al., 2021).

### Expressivity hierarchy of \(k,l\)-WL

Similar to WL tests, we can establish a hierarchy for \(k,l\)-WL in terms of distinguishing non-isomorphic graphs. The full hierarchy is shown in Figure 1. \(\forall k\geq 2,l\geq 0\), \(k,l\)-WL essentially produces colors for \(|V|^{k+l}\) tuples. Intuitively, increasing \(k\) and \(l\) will boost expressivity, as more tuple colors will be computed. We show that increasing \(k\) and \(l\) both **strictly** increases the expressivity.

**Theorem 5.5**.: \(\forall k\geq 2,l\geq 0\), \(k,l\)-WL \(\prec k+1,l\)-WL.

**Theorem 5.6**.: \(\forall k\geq 1,l\geq 0\), \(k,l\)-WL \(\prec k,l+1\)-WL.

However, with fixed \(k+l\) and number of colors, a larger \(k\) will lead to more message passing processes between tuples and stronger expressivity, as shown in the following.

**Theorem 5.7**.: \(\forall k\geq 2,l\geq 0\), \(k,l+1\)-WL \(\prec k+1,l\)-WL.

With these theorems, we can prove many useful corollaries. For example, \(k,l\)-WL is less expressive than \(k+l\)-WL:

**Corollary 5.8**.: \(\forall k\geq 2,l\geq 1\), \(k,l\)-WL \(\prec k+l\)-WL.

Moreover, \(l+1\)-WL is not more powerful than \(2,l\)-WL (Qian et al., 2022).

**Theorem 5.9**.: \(\forall l\geq 1\), \(2,l\)-WL \(\not\preceq l+1\)-WL.

When \(l=2\), the above result recovers the known result of \(I^{2}\)-GNN (Huang et al., 2022). Besides graph isomorphism power, counting power is also an important measure of the representation capability of GNNs. We conclude that \(k,l\)-WL is able to count all connected substructures within \(k+1\) nodes; see also Yan et al. (2023). This is also verified by our experiments on substructure counting.

## 6 Practical \(k,l\)-GNN: Improving Scalability and Compatibility

In this section, we discuss \(k,l\)-GNN, the neural implementation of \(k,l\)-WL. \(k,l\)-GNN runs GNNs with the same expressivity as \(k\)-WL (Xu et al., 2018; Maron et al., 2019) on \(l\)-labeled graphs. We will discuss practical issues affecting performance and scalability, as well as our solutions. Our framework can also be applied to any other expressive GNNs to improve expressivity.

### Implementation of \(k,l\)-GNN

Our \(k,l\)-WL runs \(k\)-WL on \(l\)-labeled graphs. Therefore, in the implementation, we run a base encoder on all labeled graphs and pool their representations. Generally speaking, we can adopt any architecture as our base encoder and improve its expressive power via labeling. Specifically, when the base encoder has the same expressivity as \(k\)-WL, it reduces to our standard \(k,l\)-GNN.
The base encoder, however, does not need to have exactly the same expressive power as a particular \(k\)-WL in practice, since we can always upper bound the expressive power of the lifted model by some \(k,l\)-WL. In other words, our method can lift the expressivity of many existing architectures through labeling. To make our framework applicable to any model, we propose two architectures that lift the expressive power of a base model. For convenience, we first suppose that the base encoder has \(k\)-WL-equivalent expressivity, and we lift its expressivity to \(k,l\)-WL. _Architecture (a)_ in Figure 3 (a) exactly simulates the original form of \(k,l\)-WL: the input node feature matrix (represented by blue rectangles) is replicated once per labeled graph and concatenated (or combined by other methods, such as addition) with the corresponding label features. The original node features and label features are then jointly learned by the same \(k\)-WL-equivalent GNN (base encoder). If the pooling function is injective (like DeepSets (Zaheer et al., 2017)), architecture (a) fully preserves the expressivity of \(k,l\)-WL. _Architecture (b)_ in Figure 3 (b) differs slightly from the original \(k,l\)-WL: it learns the original node features and the new label features with two separate models. The \(k\)-WL-equivalent base encoder learns the original node features only once. An extra \(k^{\prime}\)-WL-equivalent _structure encoder_ (\(k^{\prime}\neq k\) is permitted) learns the label features, without node features, in each \(l\)-labeled graph. The \(l\)-labeled graph representations are then aggregated by an aggregator and concatenated to the representation of the original graph. We call it a structure encoder since the label features introduce no information beyond the graph structure. Theoretical analysis and a comparison of these two architectures are presented in Appendix I. In short, architecture (b) is less expressive but more scalable. Since the number of labeled graphs is \(n^{l}\) and the complexity of a \(k\)-WL-equivalent GNN can be \(n^{k}\), architecture (a) can preserve the expressivity of \(k,l\)-WL at a complexity of \(O(n^{k+l})\). Architecture (b) can use different structure and base encoders, i.e., \(k^{\prime}<k\); the complexity of architecture (b) can then be reduced to \(O(n^{k}+n^{k^{\prime}+l})\). Moreover, we experimentally find that architecture (b) tends to outperform (a). One intuition is that label features and node features may be best learned with two different sets of parameters. In summary, architecture (b) is designed to reduce the complexity at the cost of losing part of the expressive power of architecture (a), the original GNN implementation of \(k,l\)-WL. However, the decoupled architecture tends to perform better in real-world tasks. Despite their differences, we emphasize that both architectures can boost expressivity. Below, we propose several \(k,l\)-GNN instances parameterized by different \(k\) and \(l\), including ID-MPNN and ID-PPGN. We also lift some other base models through our framework, such as ID-Transformer. Note that the following instances can all adopt either architecture (a) or (b), depending on the application scenario. **ID-MPNN.** An \(l\)-IDMPNN is an instance of \(1,l\)-WL, since the Message Passing Neural Network (MPNN) is equivalent to \(1\)-WL (Xu et al., 2018). The model is easy to implement but exhibits strong expressivity as \(l\) increases.
One can easily verify that ID-MPNN incorporates Identity-aware GNN (\(l=1\)), \(I^{2}\)-GNN (\(l=2\)), and \(l\)-OSAN. With our implementation, ID-MPNN outperforms the above models experimentally. **ID-PPGN.** For a \(3\)-WL-equivalent base encoder, we select PPGN (Maron et al., 2019). An ID-PPGN with \(l\) labels is as powerful as \(3,l\)-WL, which is strictly more powerful than \(3\)-WL when \(l>0\). **ID-Transformer.** We also apply our framework to graph transformers. Strictly speaking, the result is not a \(k,l\)-GNN, as the graph transformer is not \(k\)-WL-equivalent; however, our framework is general enough to take any base graph learning model. Additionally, many techniques can be applied in the implementation of \(k,l\)-GNN, such as positional encodings (PE) and structural encodings (SE) (Rampasek et al., 2022), GNN-as-kernel techniques (Zhao et al., 2021), etc. In conclusion, our method is a universal framework for improving the expressivity of base models while remaining compatible with many other methods and techniques. Please refer to Appendix J for more implementation details. In the next subsection, we discuss how to reduce the complexity when \(l\) is large. Figure 3: Two architectures of \(k,l\)-GNN. (a) The input graph feature (represented by the blue rectangle) is duplicated for each labeled subgraph, and one base encoder jointly learns ID features (represented by squares of different colors) and graph features. (b) The graph features and ID features are learned in parallel by the base encoder and a structure encoder, respectively, and are then aggregated together and passed to downstream architectures. ### Labeled graph selection As \(k,l\)-GNN runs a base encoder on each labeled graph, the total complexity is proportional to the number of labeled graphs. This section focuses on how to select a subset of labeled graphs. In Appendix B, we also discuss how to segregate a subgraph from the labeled graph and thus reduce the size of the labeled graphs, as subgraph size also affects scalability, and a reduced subgraph size is important for the success of many subgraph GNNs (Zhang and Li, 2021). We list some labeled graph selection strategies as follows. Among them, random sampling and node-based policies are commonly used. We propose two new sampling policies: the constraint-based policy and the hierarchical policy. * Random sampling. This is one of the most common methods in existing subgraph GNNs. Statistically, the graph representation is permutation invariant and unbiased, but with a variance across different samplings. We can conduct parallel samplings for variance reduction and use the mean/median/voting as the final representation. The complexity of randomly sampling \(l\)-tuples is \(O(\alpha n^{l})\), where \(\alpha\) is the sampling rate. * Node-based policies. This is another family of invariant sampling methods. For example, we can extract a K-hop ego-net for each root node and select all \(l\)-tuples in the ego-net, with the root node always being the first node in each \(l\)-tuple. The complexity is \(O(nm^{l-1})\), where \(m\) is the average size of the ego-nets. * Constraint-based policies. These methods search within all possible \(l\)-tuples and filter out those failing to meet certain constraints. For example, if we upper-bound the shortest-path distance between any pair of nodes in a \(6\)-tuple by \(3\), we can sample many \(6\)-rings while excluding all \(6\)-paths.
Compared with node-based policies, constraint-based methods do not sample the same induced subgraph repetitively and enjoy a higher design freedom. They can be implemented efficiently with dynamic programming, e.g., the Floyd-Warshall algorithm. The number of subgraphs depends on the graph and the constraints. * Hierarchical policies. These methods select subgraphs hierarchically. We can use algorithms like min-cut or node clustering (e.g., spectral clustering) to divide the graph into clusters with an average size \(m\). Label nodes are then selected only within each cluster, resulting in an average complexity of \(O(m^{l}\cdot\frac{n}{m})\). Since \(m\ll n\), the hierarchical policy can significantly reduce the number of subgraphs while still encoding sufficient local structure information. In most real-world task experiments, we use the constraint-based and hierarchical policies, achieving impressive experimental results at a low computational complexity. See Appendix L for an ablation study of the different sampling policies. **Learning to sample and learning to label.** Aside from traversing or sampling according to fixed rules, Qian et al. (2022) use the Implicit-MLE framework (Niepert et al., 2021), which allows back-propagation through continuous-discrete architectures, to sample subgraphs in a data-driven fashion. With this method, \(k,l\)-GNN can also learn to label tuples and sample subgraphs so as to minimize the target loss function in a data-driven manner. See Appendix L for more details and experimental results. ## 7 Experiments In this section, we conduct experiments on both synthetic and real-world tasks to verify our models' expressivity and real-world performance. ### Graph isomorphism task **Dataset.** We select two synthetic datasets, EXP and SR25, to empirically verify our models' expressivity on graph isomorphism tasks. EXP (Abboud et al., 2020) contains 600 pairs of non-isomorphic graphs that 1-WL and 2-WL fail to distinguish. SR25 (Balcilar et al., 2021) contains 15 non-isomorphic strongly regular graphs (i.e., 105 non-isomorphic pairs) that 3-WL fails to distinguish. An accuracy of \(50\%\) on EXP or \(6.67\%\) on SR25 indicates that the model fails to distinguish any non-isomorphic graphs in the respective dataset. **Models.** As baseline models, we choose GIN, PNA (Corso et al., 2020), Identity-aware GNN (You et al., 2021), GIN-AK+ (Zhao et al., 2021), and PPGN (Maron et al., 2019). We compare them with our ID-MPNN and ID-PPGN to better understand the expressivity hierarchy. **Results.** The results are shown in Table 2. When the number of IDs \(l\geq 2\), ID-MPNN and ID-PPGN achieve perfect performance on the two datasets. In comparison, all other models fail on the SR25 dataset. By comparing the results of GIN and \(l\)-IDMPNN, as well as PPGN and \(l\)-IDPPGN, we verify that our framework can improve expressivity. Concretely, we draw the following conclusions: * \(1,1\)-WL is more powerful than \(1\)-WL and \(2\)-WL. * \(1,2\)-WL and \(3,2\)-WL can distinguish some non-isomorphic graph pairs that are indistinguishable by \(3\)-WL. These results are consistent with our theoretical analysis in Section 5. ### Substructure counting **Dataset.** To verify our model's ability to count substructures, we evaluate it on the random regular graph dataset (Chen et al., 2020). There are four target substructures: triangle, tailed triangle, star, and chordal cycle. Results are measured by test MAE.
**Models.** We choose GCN, KP-GIN+ (Feng et al., 2022), GIN-AK+ (Zhao et al., 2021), and DeepLRP (Chen et al., 2020) as the baseline models. We use \(l\)-IDMPNN, where \(l\) is the size of the target substructure, and additionally restrict message passing to the labeled tuples only. **Results.** The results are shown in Table 3. Our model achieves state-of-the-art performance on all tasks, and the test MAE is nearly \(0\). This verifies our theoretical result that \(1,l\)-GNN can completely count substructures within \(l\) nodes. ### Real-world tasks **Dataset.** We evaluate on three molecular datasets: QM9, ZINC12k, and ogbg-molhiv. For QM9, we follow a commonly used training/validation/test split ratio of 0.8/0.1/0.1, and the results of the first 12 targets are reported. ZINC12k is a subset of ZINC250k containing 12k molecules; the task is also molecular property (constrained solubility) regression. ogbg-molhiv contains 41k molecules for graph binary classification (whether a molecule inhibits HIV virus replication or not). We use the official splits for ZINC and ogbg-molhiv. **Models.** For the QM9 dataset, MPNN, DTNN (Wu et al., 2017), DeepLRP (Chen et al., 2020), PPGN (Maron et al., 2019), and Nested GNN (Zhang and Li, 2021) are chosen as baseline models. For ZINC12k, we choose GIN (Xu et al., 2018), PNA (Corso et al., 2020), DeepLRP (Chen et al., 2020), OSAN (Qian et al., 2022), KP-GIN+ (Feng et al., 2022), GNN-AK+ (Zhao et al., 2021), CIN (Bodnar et al., 2021), and GPS (Rampasek et al., 2022) for comparison. For ogbg-molhiv, PNA (Corso et al., 2020), DeepLRP (Chen et al., 2020), NGNN (Zhang and Li, 2021), KP-GIN (Feng et al., 2022), \(I^{2}\)-GNN (Huang et al., 2022), CIN (Bodnar et al., 2021), and SUN(EGO) (Frasca et al., 2022) are selected. **Results.** Our \(4\)-IDMPNN achieves superior performance on 7 of the 12 QM9 targets (Table 1), while the results for the remaining targets are also highly competitive. On ZINC12k (Table 4) and ogbg-molhiv (Table 5), although IDMPNN does not achieve the best results, it is still comparable to the state-of-the-art models. Moreover, we do not use any additional features or pretraining on any dataset, reflecting the power of our model. This suggests that our method can effectively enhance the performance of base encoders on real-world tasks in addition to increasing expressivity. While our \(k,l\)-GNN framework captures DeepLRP, GSN, and OSAN, we find instances such as \(4\)-IDMPNN that greatly surpass these works on real-world tasks. ## 8 Conclusions In this work, we establish a novel \(k,l\)-WL framework that explicitly assigns labels to \(l\) nodes while running a \(k\)-WL algorithm. We theoretically analyze the expressivity hierarchy of \(k,l\)-WL, which incorporates many existing relational pooling methods and subgraph GNNs. Due to its strong compatibility, our framework can improve the expressivity of any base model by simply augmenting ID features on (sampled) subgraphs. Various acceleration methods are also discussed to build practical, effective models. Some of our \(k,l\)-GNN instances achieve state-of-the-art performance on several synthetic and real-world tasks, verifying the power of our framework. ## Acknowledgements This project is supported in part by the National Key Research and Development Program of China (No. 2021ZD0114702).
2303.15538
GlassNet: a multitask deep neural network for predicting many glass properties
A multitask deep neural network model was trained on more than 218k different glass compositions. This model, called GlassNet, can predict 85 different properties (such as optical, electrical, dielectric, mechanical, and thermal properties, as well as density, viscosity/relaxation, crystallization, surface tension, and liquidus temperature) of glasses and glass-forming liquids of different chemistries (such as oxides, chalcogenides, halides, and others). The model and the data used to train it are available in the GlassPy Python module as free and open source software for the community to use and build upon. As a proof of concept, GlassNet was used with the MYEGA viscosity equation to predict the temperature dependence of viscosity and outperformed another general purpose viscosity model available in the literature (ViscNet) on unseen data. An explainable AI algorithm (SHAP) was used to extract knowledge correlating the input (physicochemical information) and output (glass properties) of the model, providing valuable insights for glass manufacturing and design. It is hoped that GlassNet, with its free and open source nature, can be used to enable faster and better computer-aided design of new technological glasses.
Daniel R. Cassar
2023-03-27T18:36:41Z
http://arxiv.org/abs/2303.15538v3
# GlassNet: a multitask deep neural network for predicting many glass properties ###### Abstract A multitask deep neural network model was trained on more than 218k different glass compositions. This model, called GlassNet, can predict 85 different properties (such as optical, electrical, dielectric, mechanical, and thermal properties, as well as density, viscosity/relaxation, crystallization, surface tension, and liquidus temperature) of glasses and glass-forming liquids of different chemistries (such as oxides, chalcogenides, halides, and others). The model and the data used to train it are available in the GlassPy Python module as free and open source software for the community to use and build upon. As a proof of concept, GlassNet was used with the MYEGA viscosity equation to predict the temperature dependence of viscosity and outperformed another general purpose viscosity model available in the literature (ViscNet) on unseen data. An explainable AI algorithm (SHAP) was used to extract knowledge correlating the input (physicochemical information) and output (glass properties) of the model, providing valuable insights for glass manufacturing and design. It is hoped that GlassNet, with its free and open source nature, can be used to enable faster and better computer-aided design of new technological glasses. Keywords: non-metallic glasses, artificial neural networks, property prediction ## 1 Introduction Glasses are particularly interesting materials for data-driven modeling. First, there is no need for crystal structure descriptors, because these materials are noncrystalline. Second, commercial glass properties are highly dependent on the glass chemical composition, because the vast majority of these glasses are manufactured by the same process, the melt and quench technique [1]. These facts are often exploited in the literature with good to great results [2]-[9]. Recently, Le Losq et al. [10] reported a new model called i-Melt, which is a multitask deep neural network capable of predicting 18 different properties of melts and glasses in the K\({}_{2}\)O--Na\({}_{2}\)O--Al\({}_{2}\)O\({}_{3}\)--SiO\({}_{2}\) system. A multitask model is a single model that has more than one output [11]. The expectation is that learning how to predict one output can help in predicting another related output. This work is inspired by i-Melt. The goal is to test whether multitask learning can be beneficial to glass modeling when the scope of the model is increased in both the input (more glass chemistries) and the output (more properties). In the literature, single-output neural networks are viable algorithms for glass modeling [12]-[22]. However, multitask models have not been explored, except for the work of Le Losq et al. [10]. The main hypothesis investigated in this work was \(\mathbf{H}_{1}\): multitask learning improves the performance of predictive models of glass properties. ## 2 Materials and methods ### Data acquisition and processing The SciGlass database, which is openly available to the community as a GitHub repository [23], provides the data used in this paper. This database consists of several dataframes. For this work, all the data in the main dataframe were collected. The processing steps for the raw data were as follows: 1. Only glasses with a sum of the molar fractions of the compounds between 0.99 and 1.01 were taken into account. Sums of molar fractions that are too far from unity are probably due to typing errors. 2.
Glasses with non-zero amounts of any compound in {"Al2O3+Fe2O3", "MoO3+WO3", "CaO+MgO", "FeO+Fe2O3", "Li2O+Na2O+K2O", "Na2O+K2O", "F2O-1", "FemOn", "HF+H2O", "R2O", "R2O3", "R2O3", "RO", "RmOn"} were removed. These compounds cannot be unambiguously converted to atomic fractions (see the next step). 3. The chemical features were converted from compound molar fractions to atomic molar fractions. The atomic fractions were then rebalanced so that their sum is 1 for all examples. 4. Only the elements between atomic numbers 1 and 83 (hydrogen and bismuth included) were considered, excluding promethium and the noble gases. Glasses containing non-zero amounts of excluded elements were removed. This strategy will become clear in the next section, when discussing feature extraction. 5. A total of 85 target properties were considered. Glasses with no value for any of these properties were removed. These properties and related information will be presented later. 6. Some of the properties were processed by setting an acceptable minimum or maximum value. These limits are shown in the Supplementary Material. After these steps, the processed dataset had 281,093 examples with 919,164 filled target cells. Many target cells are unfilled, as expected. This is not an issue for inducing a neural network model, because the unfilled cells do not contribute to the backpropagation step [24]. The next step was deduplication, the removal of duplicate entries to avoid data leakage [25]. For this step, the atomic fractions were first rounded to the third decimal place, and then glasses with the same composition were grouped together and collapsed into a single entry. The resulting target value was the median of the targets within each duplicate group, ignoring the unfilled cells. If all cells for a given target were unfilled, then the final collapsed cell was also unfilled. The median was chosen instead of the mean because it is robust to outliers in small datasets. After deduplication, the final dataset had 218,533 examples of different glasses, with 795,298 filled target cells. Finally, the data were shuffled and 10% were randomly selected to be part of the holdout dataset. These data were not used for anything else, except at the very end to test the predictive ability of the selected models (simulating what happens when the model sees new data). Hereafter, any reference to the dataset refers to the 90% of the data that was _not_ selected for the holdout dataset, unless otherwise stated. ### Feature extraction, feature selection, and data scaling Inspired by the works of Ward et al. [26], Hu et al. [27], and Nakamura et al. [28], [29], we extracted physicochemical features from the chemical information of the glasses, following a similar procedure reported in a previous communication [19]. This procedure involved three steps. The first step was to collect the physicochemical properties of the elements considered in this work (see step 4 of data processing in the previous section). A total of 55 elemental properties were collected using the Python modules mendeleev [30] and matminer [31]. The only restriction at this stage was that the property had to be available for all elements studied (justifying why promethium and the noble gases were not considered in the previous section). See Table 1 for the 25 of those 55 properties that were selected. The Supplementary Material lists the physicochemical properties that were considered but not selected. The second step was using the physicochemical properties to compute new features.
Let us consider one glass as an example. Let \(\mathbf{C}=[x_{\mathrm{H}},x_{\mathrm{Li}},\cdots,x_{\mathrm{Bi}}]\) be a vector of the atomic mole fractions of the chemical elements that make up this glass. Let \(\mathbf{S}=[s_{\mathrm{H}},s_{\mathrm{Li}},\cdots,s_{\mathrm{Bi}}]\) be a vector of a certain physicochemical property \(s\) (atomic radius, for example). Each element of \(\mathbf{C}\) and \(\mathbf{S}\) corresponds to the atomic mole fraction or physicochemical property of a chemical element. Having these vectors, we can compute the weighted features with \[w=f(\mathbf{C}\circ\mathbf{S}) \tag{1}\] and the absolute features with \[a=f(\lceil\mathbf{C}\rceil\circ\mathbf{S}). \tag{2}\] In the equations above, \(f\) is an aggregator function, and \(\circ\) is the Hadamard product, also known as the element-wise product. The aggregator functions considered in this work are \(\{\mathrm{sum},\min,\max,\mathrm{mean},\mathrm{std}\}\). The third step was feature selection. After the previous step, the dataset had 627 features: 77 representing the atomic mole fractions of the chemical elements, 275 representing the weighted physicochemical features (55 physicochemical properties times 5 aggregator functions), and 275 representing the absolute physicochemical features. The first procedure was to remove features with extremely low variance, defined as those with a standard deviation lower than \(10^{-3}\). The second procedure was to remove features with high multicollinearity using the Variance Inflation Factor (VIF) [32]. The steps were: 1. Compute the VIF for all remaining features; 2. If all values of VIF are below 5, then stop; 3. Otherwise, remove the feature with the highest VIF and return to step 1. The rationale of feature extraction is to add more relevant information to the problem, to be leveraged by the algorithm when inducing the predictive model. The benefit of feature selection is twofold: it reduces the computational cost of inducing the model (more features require more computation) [33], and it reduces multicollinearity, which improves the convergence of the model. Finally, the dataset was scaled using a min-max scaler. This is a linear transformation that converts all features and targets to values between zero and one. This strategy is often used when training neural networks to improve convergence and reduce the difference in magnitude between features and targets.
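A short numpy sketch may help make Eqs. (1) and (2) concrete; the three-element composition and property values below are toy numbers, not entries from the actual dataset.

```python
import numpy as np

C = np.array([0.60, 0.25, 0.15])   # atomic fractions of a toy 3-element glass
S = np.array([0.66, 1.11, 1.86])   # one physicochemical property of those elements

aggregators = {"sum": np.sum, "min": np.min, "max": np.max,
               "mean": np.mean, "std": np.std}

weighted = {name: f(C * S) for name, f in aggregators.items()}           # Eq. (1)
absolute = {name: f(np.ceil(C) * S) for name, f in aggregators.items()}  # Eq. (2)
# np.ceil maps every nonzero fraction to 1, so the "absolute" features depend
# only on which elements are present, not on how much of each is in the glass.
```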
### Designing and training a multitask neural network An artificial neural network (NN) algorithm was used to induce the predictive models of this work. More specifically, it was a multitask feedforward neural network (also known as a multilayer perceptron, MLP). This is a well-known algorithm in the machine learning field, and the formalism can be found elsewhere [34]. All NN models in this work were trained with pytorch [35] using lightning [36]. Backpropagation to adjust the NN weights and biases was performed using the weighted Adam (AdamW) optimizer. This optimizer was chosen based on the results of a previous publication [19]. At the start of training, 10% of the training data was randomly selected and reserved as the validation dataset. These data were not used to change the parameters (weights and biases) of the NN, but instead were used for the early stopping routine. This routine reduces overfitting of the NN by stopping its training after a certain number of epochs without improvement in the validation loss. This number of epochs without improvement is a tunable hyperparameter called _patience_. An epoch is one cycle of the training process, in which all the training data "passes" through the neural network (forward pass) and then through backpropagation. Some of the hyperparameters (HP) of the algorithm were not part of the HP tuning; the loss function is one of them. We used the multitask loss function of Liebel and Körner [37], which combines the single-task losses (mean squared error) of all predicted properties into a single number. This multitask loss function includes a regularization term with individual weights for each target. These weights are learnable parameters, and their purpose is to avoid giving too much weight to those properties with much more data than others (appropriate for the unbalanced database studied). Other hyperparameters of the algorithm were tuned, and a total of 1000 different HP sets were tested. Table 2 shows the search space. The search was performed with the ray[tune] Python module [38], using an Asynchronous Successive Halving Algorithm (ASHA) scheduler [39] with a grace period of 20 and a reduction factor of 4. Search space navigation was performed with suggestions from a Tree-structured Parzen Estimator algorithm [40]. Each layer of the NN was allowed to have its own activation function (that is, different hidden layers could use different activation functions). The activation functions considered were hyperbolic tangent, sigmoid, ReLU, Leaky ReLU, Softplus, GELU, ELU, PReLU, SiLU, SELU, and Mish. The maximum number of epochs for HP tuning was set to 1000. After this initial search, 10 HP sets were selected for the next step, which was a 10-fold cross-validation. These 10 HP sets were manually selected from the best scoring sets (using the multitask loss as the score), taking care to select _sufficiently different_ network architectures. The rationale is that the top positions in these experiments commonly consist of neural networks that are far too similar; manually selecting different architectures therefore increases the diversity of the architectures tested. After cross-validation, the selected HP set was the one with the lowest mean loss over the 10 local test datasets. This selected architecture will be discussed in the results section, but the reader can already check its hyperparameters in the far right column of Table 2. The NN obtained is a multilayer perceptron. However, the network that inspired this work (i-Melt) is a multi-headed feedforward NN. Fortunately, one can easily be converted into the other within the pytorch framework. This gives us the opportunity to test a secondary hypothesis of this work, \(\mathbf{H}_{2}\): for the induction of multitask predictive models of glass properties, multi-headed feedforward neural networks perform better than multilayer perceptrons. To test H\({}_{2}\), we used a trained NN with the selected HP set and converted it into a multi-headed NN. We did so by replacing the last layer (output layer) with 85 new layers of 10 neurons each (ReLU activation function) in parallel, one layer for each property. These new layers are relatively small and highly specialized for the prediction of their respective property. The performance of the models was tested against the holdout dataset to see how well they predicted new data. To test H\({}_{1}\), we compared these metrics with those of a comparative model: a random forest trained using the scikit-learn Python module with the default set of hyperparameters.
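A minimal sketch of this single-property baseline follows; the feature matrix and property values are random placeholders standing in for the real data, and one such regressor is trained per property, in contrast to the single multitask NN.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 98))   # placeholder for the 98 selected features
y = rng.random(500)         # placeholder for one glass property

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1,
                                                    random_state=0)
model = RandomForestRegressor(random_state=0)  # default hyperparameters, as in the text
model.fit(X_train, y_train)
rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
```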
The models induced by this algorithm had great performance in previous publications [4], [8]. After observing a reasonable performance of the NNs (discussed in the Results section), we proceeded to train the final model using the chosen architecture and considering _all_ of the data (training _and_ holdout). This final model was given the name GlassNet. Finally, relevant information for the glass community was obtained by computing the SHapley Additive exPlanation (SHAP) values using the shap Python module [41]-[43]. This analysis allows the trained model to be interpreted in search of patterns that can then be used by glass scientists and engineers when designing new glasses. Unfortunately, it was not possible to obtain the interaction SHAP values recently used by the community [9], because this calculation cannot be performed for neural network-based models. ### Modeling the temperature-dependence of viscosity Knowledge of the temperature dependence of shear viscosity is essential for glass manufacturing, because it is used to adjust process variables such as melting, working, and annealing temperatures. Recent publications have already reported data-driven models for predicting this property, ViscNet [19] and i-Melt [10] being two examples of free and openly available data-driven viscosity models. Many of the properties investigated in this work are directly or indirectly related to shear viscosity. Furthermore, many of these viscosity-related properties were better predicted by the multitask NNs than by the baseline model (discussed in detail in the Results section). Here, we exploited this fact by using the trained models to generate (temperature, viscosity) data tuples, and then using these data points to perform a non-linear regression of the MYEGA equation (Eq. 3), which is a physical model of viscosity [44]. \[\log_{10}(\eta(T))=\log_{10}(\eta_{\infty})+\frac{T_{12,\mathrm{M}}}{T}[12-\log_{10}(\eta_{\infty})]\exp\left(\left[\frac{m}{12-\log_{10}(\eta_{\infty})}-1\right]\left[\frac{T_{12,\mathrm{M}}}{T}-1\right]\right) \tag{3}\] In the previous equation, \(\eta\) is the equilibrium shear viscosity, \(\eta_{\infty}\) is the asymptotic viscosity (\(\eta_{\infty}\equiv\lim_{T\rightarrow\infty}\eta(T)\)), \(m\) is the liquid fragility index [45], and \(T_{12,\mathrm{M}}\) has the same definition as \(T_{12}\) (\(\eta(T_{12})\equiv 10^{12}\,\mathrm{Pa\cdot s}\)), but is written with a different notation to indicate that it comes from a non-linear regression of the MYEGA equation. Using this approach, GlassNet can predict three additional properties. Because the prediction of material properties by machine learning models is much more susceptible to noise (compared with prediction from experimental data), using a robust non-linear regression of Eq. (3) is a good strategy to compensate for this disadvantage. In this work, we used a Cauchy loss for the least squares regression. The mathematical equation for this loss is \(\rho(z)=\ln(1+z)\), where \(z\) is the standard least squares loss.
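A hedged sketch of this robust regression with scipy is given below, assuming synthetic (temperature, log-viscosity) points in place of GlassNet's predictions; the initial guesses are illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares

def myega(T, log_eta_inf, T12, m):
    """Eq. (3): base-10 log of the equilibrium shear viscosity (Pa.s)."""
    a = 12 - log_eta_inf
    return log_eta_inf + (T12 / T) * a * np.exp((m / a - 1) * (T12 / T - 1))

def fit_myega(T, log_eta):
    """Robust least squares with the Cauchy loss rho(z) = ln(1 + z)."""
    fit = least_squares(lambda p: myega(T, *p) - log_eta,
                        x0=[-3.0, 800.0, 40.0],  # log10(eta_inf), T12 in K, m
                        loss="cauchy")
    return fit.x

# toy usage: noisy synthetic data generated from known parameters
T = np.linspace(700.0, 1500.0, 30)
log_eta = myega(T, -2.9, 810.0, 35.0) \
    + np.random.default_rng(0).normal(0.0, 0.1, T.size)
log_eta_inf, T12, m = fit_myega(T, log_eta)
```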
To test the viscosity prediction capabilities of GlassNet, a set of 143,219 data points of composition, temperature, and measured viscosity was collected from the SciGlass database. Entries containing thorium or uranium were removed, because GlassNet cannot predict glasses with either of these elements. Entries with \(\log_{10}(\eta)\) outside the [-5, 12] range, measured at temperatures above 3000 K, or measured at temperatures below the glass transition (predicted by GlassNet) were also removed, because these data are prone to higher measurement errors. The composition (in atomic fraction) was rounded to the third decimal place, and the temperature (in Kelvin) was rounded to the second decimal place, before duplicate entries were merged into one with the median value of \(\log_{10}(\eta)\). Finally, entries with the same chemical composition as one of the glasses used to train GlassNet were removed, making this experiment another test of GlassNet's predictive power for unseen compositions. ## 3 Results and discussion ### Data analysis and selected physicochemical features Table 1 lists the symbols used in this work, their meaning, and their units. Table 3 shows the descriptive statistics (count, minimum, mean, and maximum) of the 85 targets of GlassNet. These numbers reflect the whole dataset after data processing, just before the holdout split. Of the 85 properties, 26 had more than 10k instances when all data were considered. In this group we find the glass transition temperature, density at ambient temperature, refractive index, Abbe number, Young's modulus, microhardness, linear coefficient of thermal expansion below \(T_{g}\), crystallization peak, crystallization onset, and others. In contrast, 21 of the 85 properties had fewer than 1k examples. This group includes the maximum crystal growth velocity, density and surface tension at temperatures above ambient, heat capacity at constant pressure, and others. This imbalance in the number of examples per target can be an issue if left unattended. For example, the NN may choose to ignore the properties with less representation in favor of those with more data, defeating the purpose of having a multitask model. This problem is minimized by using the multitask loss function of Liebel and Körner [37], as discussed in the Materials and Methods section. Table 4 shows the descriptive statistics (count, mean, and standard deviation) and elemental mole fraction information (first quartile, median, third quartile, and maximum) for the chemical elements in the glasses used to train GlassNet. These values reflect the full dataset just before the holdout split, similar to the data shown in Table 3. Considering all the data, 25 of the 72 elements were present in more than 10k examples. In this group, we find the most common chemical elements of inorganic glasses, such as oxygen, silicon, boron, sodium, aluminum, calcium, potassium, and others. Similarly, 14 of the 72 elements were present in fewer than 1k examples. In this group we find dysprosium, hafnium, europium, terbium, and others. Remarkably, some elements in Table 4 have a maximum elemental mole fraction of 1; in other words, monoatomic examples are present in the dataset, and it is known that most of these substances are quite resistant to glass formation [1]. While the original authors' rationale for including these particular examples in the SciGlass database is not clear, the information in these entries is typically related to the liquidus temperature, which (by definition) is a property of crystalline materials and not glasses. Given the small number of these data points and the importance of the liquidus temperature for glass making [1], these data points were not excluded.
The induced models will inevitably be better at predicting the properties of glasses with chemistries that are better represented in the training dataset. The expectation is that the strategy of extracting physicochemical features from the data (see methods) will reduce this problem. Finally, Table 5 shows the 98 selected features out of the 627 considered. Of these 98 features, 64 are elemental mole fractions, 12 are weighted physicochemical features (see Eq. 1), and 22 are absolute physicochemical features (see Eq. 2). Some remarks on these selected features: the aggregator function "mean" is not present; of the 77 elemental mole fraction features, 13 were not selected (e.g., silicon and oxygen are in this group of 13); of the 55 physicochemical properties considered, only 25 are part of the selected features. ### Neural network The last column of Table 2 shows the hyperparameters selected after HP tuning. This is a deep neural network with four hidden layers. Each hidden layer has a different activation function. All hidden layers have a dropout probability, which decreases from the first to the second layer and then keeps increasing until the last layer. Batch normalization is present only in the first two hidden layers. The root mean squared error (RMSE) metrics of the NN models trained with the selected set of hyperparameters are shown in Table 3, one metric for each property. The values are the prediction metrics for the holdout dataset (_not_ used to train the model), and the standard deviation is that obtained in the 10-fold cross-validation experiment. The RMSE gives an estimate of the prediction error in the same units and magnitude as the target. The same table also shows the RMSE of the random forest models (the baseline models used for comparison). A t-test (95% confidence) was used to compare how the three models performed; these results are also reported in Table 3. The multi-headed NN model outperforms the multilayer perceptron model for 13 properties, 12 of which are properties with more than 10k examples. There was only one property for which the multi-headed model had a worse prediction, and that was \(D(1073\,\mathrm{K})\). This result supports H\({}_{2}\): a multi-headed NN improves performance for targets with a large number of examples, without losing generalization power for targets with fewer examples. Comparing the NN models with the random forest models produced mixed results: for 37 properties, there was no statistical difference between the performance of the random forest models and the NN models. The multilayer perceptron outperformed the random forest for 27 properties and was outperformed by the random forest for 21 properties. The multi-headed NN produced slightly better results: it outperformed the random forest for 30 properties and was outperformed by the random forest for 18 properties. The properties that the NN predicted better than the random forest were mostly targets with more than one measurement at different temperatures, such as the density measured at 1273 and 1673 K. Most of these properties are targets related to viscosity. Properties predicted better by the random forest than by the NN were mostly targets with more than 10k examples. Overall, it is not possible to either support or reject hypothesis H\({}_{1}\). On the one hand, a multitask NN improves the prediction of glass properties for targets with measurements under different experimental conditions (e.g., different temperatures), compared to specialized random forest models.
Such improved prediction occurs because the shared hidden layers of a multitask NN can exploit the connections between the targets better than specialized models that "know" only one target. On the other hand, the specialized random forest models perform better when more data are available. A computational advantage of the multitask NN over the random forest models is that it requires less memory to store the trained parameters. The parameters of the NN can be stored in an uncompressed binary file of less than 4 MiB, while the compressed files of all the random forest models require more than 1 GiB of storage. Inference is also slower for the random forest models. One question that arises from these results is whether the performance of the multitask neural network can be improved by target selection. For example, if one of the properties for which the NN underperformed the random forest were removed, would the prediction of the remaining targets improve? A conservative hypothesis is that it would not, because the removed data are probably relevant to the overall learning. However, testing this hypothesis is not the focus of this communication. ### Interpreting the trained model Figure 1 shows the SHAP value violin plots for three properties: \(C_{p}(1073\,\mathrm{K})\), \(T_{\max(U)}\), and \(\log_{10}(\rho(1073\,\mathrm{K}))\). These plots show the 10 most important features (those with the highest mean absolute SHAP value) on the \(y\)-axis, with the SHAP values on the \(x\)-axis. The base value (marked as a vertical gray line) is the mean value of the property (considering the entire dataset) and has a SHAP value of zero. The width of the violins represents the number of examples with the same SHAP value, while the color within the violins represents the value of the feature. See the Supplementary Material for the violin plots of the other properties. As shown in Figure 1a, \(C_{p}(1073\,\mathrm{K})\) increases with increasing amounts of sodium, boron, magnesium, or calcium. Sodium is the element with the greatest impact on this property. Increasing the standard deviation of the boiling points of the elements present in the glass also increases this property. One can decrease this property by increasing the standard deviation of the FCC lattice parameter, the maximum atomic radius, the standard deviation of the effective nuclear charge, and the sum of the number of filled f valence orbitals. The heat capacity is a property related to the fluctuations in glasses [46]. Figure 1b shows that sodium and boron are the most important features for modeling \(T_{\max(U)}\). The modeling of this rather complex property involves many physicochemical features. With the exception of the melting enthalpy, the physicochemical features shown in Figure 1b are not often discussed in the crystal growth literature. The temperature of the maximum crystal growth velocity is related to the glass-forming ability [47] and is one of the key properties for glass-ceramics design, together with the glass transition temperature and the maximum crystal growth velocity [48]. Finally, Figure 1c shows sodium once again as the most important feature, this time for \(\log_{10}(\rho(1073\,\mathrm{K}))\). Notably, 8 of the 10 most important features for this property are elemental features, the opposite of what was observed for \(C_{p}(1073\,\mathrm{K})\) and \(T_{\max(U)}\). This list includes elements that are well known in the electrical property community, like potassium, lithium, and vanadium.
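As an illustration of how SHAP values like those in Figure 1 can be produced, here is a minimal sketch with the shap module, assuming a toy stand-in network and random data rather than the actual trained GlassNet; the explainer choice and plot call are illustrative, not the exact pipeline of this paper.

```python
import numpy as np
import shap
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
X = rng.random((300, 98)).astype(np.float32)  # toy stand-in for the 98 features
net = nn.Sequential(nn.Linear(98, 32), nn.ReLU(), nn.Linear(32, 85))  # toy multitask NN

explainer = shap.DeepExplainer(net, torch.from_numpy(X[:100]))  # background samples
shap_values = explainer.shap_values(torch.from_numpy(X[100:200]))
# Depending on the shap version, the result is a list with one (samples, features)
# array per output property or a single stacked array with a trailing output axis.
sv = shap_values[0] if isinstance(shap_values, list) else shap_values[..., 0]
shap.summary_plot(sv, X[100:200], plot_type="violin")  # violin plot for one property
```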
The SHAP analysis allows us to answer other questions: What are the most relevant features overall (considering all the properties studied here)? And what are the most relevant features for each group of properties? Examining the frequency of the 10 most important features for each property is one strategy for answering these questions. Table 6 shows the most frequent features overall and for each property group. Overall, we can see that some of the chemical elements most commonly used in glass making are also the most important ones in our analysis. Sodium, boron, lead, lithium, aluminum, potassium, and calcium are on this list. One might think that this is just a list of the most abundant chemical elements in the dataset, but this is not the case, as barium, magnesium, phosphorus, zinc, titanium, and fluorine are not on this list (and they are all more abundant than lead in the dataset). The melting enthalpy, the number of unfilled valence orbitals, and the number of filled d orbitals are physicochemical features that are also collectively relevant to the glass properties. The most relevant features for viscosity and relaxation, optical properties, electrical and dielectric properties, mechanical properties, density, thermal properties, crystallization, and surface tension are also listed in Table 6. Again, sodium and boron are relevant features for most of these property groups. Lithium and lead are also relevant features, appearing many times in this table. Interestingly, the model understood that the optical properties are highly dependent on the filled valence orbitals (where the absorption process takes place) and on lead, bismuth, titanium, niobium, lanthanum, and germanium, elements known to be used in optical glasses. These violin plots and the table of relevant features provide valuable insights for glass manufacturers and designers to help them fine-tune the properties of their products. ### Viscosity modeling The final viscosity dataset (after all the procedures mentioned in the Materials and Methods section) consisted of 134,976 examples. For each of these examples, the viscosity at the measurement temperature was predicted using GlassNet. To evaluate this prediction, the RMSE between the reported and predicted values of \(\log_{10}(\eta)\) was calculated (with \(\eta\) in units of Pa·s). GlassNet had an RMSE of 0.65, performing better than ViscNet [19], another deep learning model used to predict viscosity, which had an RMSE of 1.1 on unseen data. However, GlassNet performed worse than i-Melt (RMSE: 0.4) [10] and the unnamed model reported by Tandia et al. (RMSE: 0.04) [3], two highly specialized models trained and tested on liquids with much more restricted chemistries than GlassNet. The increase in RMSE can be considered the price paid for increasing the scope of GlassNet. ### Model and data availability The model reported here is called GlassNet and is available to the community, along with the training data, in the GlassPy Python module, a free and open source module for researchers working with glass materials. See the official repository at [https://github.com/drcassar/glasspy](https://github.com/drcassar/glasspy) for instructions on installing and using this module. In GlassPy, the user has the option to load the MLP or the multi-headed version of GlassNet. GlassPy can also use the random forest models for those properties where the random forest models outperformed the NNs (although, when doing this, the inference is slower).
GlassNet can also predict the temperature dependence of viscosity and the MYEGA parameters, as discussed in this paper. GlassPy also provides an easy way to load SciGlass data into a pandas DataFrame, including (but not limited to) the GlassNet training dataset. pandas DataFrames are the de facto standard Python objects for data analysis. Two advantages of using GlassPy as a frontend for exploring SciGlass data are: 1. GlassPy already translates the SciGlass data into a ready-to-use DataFrame with an intuitive naming scheme; 2. GlassPy does not require installing any legacy non-free software (the official SciGlass repository only provides the database files in a legacy proprietary Microsoft Access format). ## 4 Summary and conclusion In this work, we collected more than 218k different glass compositions with more than 795k data points on 85 properties. A new multitask neural network was designed and trained with this rich dataset. We observed that the trained multitask model outperformed specialized models (i.e., random forests) in predicting targets of the same property measured under different conditions (e.g., different temperatures). However, specialized models outperformed the multitask NN for some properties with many data points (more than 10k). An advantage of the proposed model, compared to other models reported in the literature, is that it is not limited to a specific inorganic glass chemistry and can be used to predict properties of oxide, chalcogenide, halide, and other types of glasses. We show how the purely data-driven predictions of the proposed model can be used together with a physical model to predict the temperature dependence of viscosity. This approach was tested with about 135k viscosity data points and yielded better results than another general viscosity model (ViscNet [19]), but worse results than models designed for specific chemistries. Additionally, useful insights for glass manufacturing and design were obtained from the trained model using SHAP analysis. The obtained model was named GlassNet and is free software, available to the community (together with the training data) in the Python module GlassPy. This free and open source suite for inorganic glass data and property prediction is expected to benefit the community, improve the reproducibility of data-driven publications, and accelerate the development of new and exciting glasses and glass-ceramics (e.g., by using GlassNet with inverse design tools such as GLAS [20]). In addition, the open nature of GlassNet allows the community to build on it to solve other property prediction problems using transfer learning [34], [49], [50]. ## Acknowledgements The author acknowledges funding from the CNPq / INCT - Materials Informatics project. The author also thanks Carolina B. Zanelli for the English revision. ## Declaration of Generative AI and AI-assisted technologies in the writing process During the preparation of this work the author used Grammarly and DeepL in order to improve readability and grammar. After using these tools, the author reviewed and edited the content as needed and takes full responsibility for the content of the publication.
2310.17571
Inside the black box: Neural network-based real-time prediction of US recessions
Long short-term memory (LSTM) and gated recurrent unit (GRU) are used to model US recessions from 1967 to 2021. Their predictive performances are compared to those of the traditional linear models. The out-of-sample performance suggests the application of LSTM and GRU in recession forecasting, especially for longer-term forecasts. The Shapley additive explanations (SHAP) method is applied to both groups of models. The SHAP-based different weight assignments imply the capability of these types of neural networks to capture the business cycle asymmetries and nonlinearities. The SHAP method delivers key recession indicators, such as the S&P 500 index for short-term forecasting up to 3 months and the term spread for longer-term forecasting up to 12 months. These findings are robust against other interpretation methods, such as the local interpretable model-agnostic explanations (LIME) and the marginal effects.
Seulki Chung
2023-10-26T16:58:16Z
http://arxiv.org/abs/2310.17571v3
# Inside the black box: Neural network-based real-time prediction of US recessions ###### Abstract A feedforward neural network (FFN) and two specific types of recurrent neural network, long short-term memory (LSTM) and gated recurrent unit (GRU), are used for modeling US recessions in the period from 1967 to 2021. The estimated models are then employed to conduct real-time predictions of the Great Recession and the Covid-19 recession in the US. Their predictive performances are compared to those of traditional linear models, the logistic regression model both with and without the ridge penalty. The out-of-sample performance supports the application of LSTM and GRU in the area of recession forecasting, especially for long-term forecasting tasks. They outperform the other types of models across 5 forecasting horizons with respect to different types of statistical performance metrics. The Shapley additive explanations (SHAP) method is applied to the fitted GRUs across different forecasting horizons to gain insight into feature importance. The evaluation of predictor importance differs between the GRU and ridge logistic regression models, as reflected in the variable order determined by the SHAP values. When considering the top 5 predictors, key indicators such as the S&P 500 index, real GDP, and private residential fixed investment consistently appear for short-term forecasts (up to 3 months). In contrast, for longer-term predictions (6 months or more), the term spread and the producer price index become more prominent. These findings are supported by both local interpretable model-agnostic explanations (LIME) and marginal effects. keywords: Forecasting, Recession, Business cycle, LSTM, GRU, SHAP, LIME _JEL:_ C01, C45, C51, C52, C53, C71, E37 ## 1 Introduction Recession forecasting has been a longstanding challenge for policymakers and market practitioners, as it enables them to make timely decisions that could mitigate the impact of a recession. However, due to the intricate and interconnected nature of modern economies, this task has proven to be difficult and has had limited success. Traditional methods such as probit or logit regression models, along with their variations, have been widely employed to address this forecasting problem. Conversely, in recent decades, machine learning techniques and artificial neural networks have become increasingly popular among economists and have been applied to certain macroeconomic forecasting problems. Nevertheless, it has yet to be demonstrated whether these modern approaches outperform traditional statistical models when it comes to predicting recessions. Since the pioneering work of Mitchell and Burns (1938), which identified 21 variables out of a larger set as potential economic indicators for business cycles, researchers have been engaged in selecting indicators and developing theoretical frameworks or predictive models to link these indicators with business cycles. However, most studies have primarily focused on linear frameworks. Many of them have utilized probit regression models or their extensions to generate recession forecasts. Estrella and Mishkin (1996) and Estrella and Mishkin (1998) compiled a combination of financial and macroeconomic variables and conducted recession forecasting within a probit framework. Their findings revealed that stock prices exhibit greater short-term predictability, while the slope of the yield curve performs better for longer-term predictions.
Wright (2006) demonstrated that probit models incorporating the federal funds rate and the term spread as predictors outperform models relying solely on the term spread. Dueker (1997) and Dueker (2002) extended the standard probit model by incorporating Markov regime switching within the probit framework, allowing the coefficients to vary. Chauvet and Potter (2005) introduced several specifications of the probit model that accounted for different business cycle dependencies and autocorrelated errors, concluding that their more complicated extensions improved the accuracy of recession forecasts. Fornari and Lemke (2010) as well as Nyberg (2014) incorporated vector autoregression components into the probit model to capture the endogenous dynamics of the predictors. While probit regression and its extensions continue to be widely used in business cycle forecasting, there has been a growing interest in exploring the forecasting capabilities of nonlinear models, including machine learning methods. This interest stems from the recognition that macroeconomic data often exhibit nonlinear patterns. Empirical evidence supports this notion, such as the work of Tiao and Tsay (1994), who demonstrate that a threshold autoregressive model outperforms a linear autoregressive model in predicting GDP growth. Maasoumi et al. (1994) examine multiple macroeconomic time series and confirm their nonlinear nature. Puglia and Tucker (2021) highlight the attractiveness of machine learning methods as an alternative to probit regression and its extensions, noting that probit methods typically require additional parameters for flexibility, whereas flexibility is inherent in machine learning methods. Stock and Watson (1998) compare the forecasting performance of 49 univariate linear and nonlinear models across 215 macroeconomic time series. They find that some of the nonlinear models perform poorly compared to linear models. Jaditz et al. (1998) explore the use of nearest neighbor regression models for forecasting industrial production but observe only marginal improvements in predictive performance. Vishwanathan and Murty (2002) present an iterative algorithm for support vector machines in classification problems. Ng (2014) applies a tree ensemble classifier to a large panel of predictors. Fornaro (2016) combines a Bayesian methodology with a shrinkage prior within the probit framework to predict recessions using extensive sets of predictors. More recently, Holopainen and Sarlin (2017), Bluwstein et al. (2020), and Vrontos et al. (2021) employ various machine learning methods for economic event forecasting. Vrontos et al. (2021) provide empirical evidence supporting the application of machine learning over traditional econometric techniques in the context of recession prediction. Neural networks, as a subset of machine learning, have garnered significant attention and application in fields like finance, primarily due to their ability to establish flexible mappings between variables and their high pattern recognition capabilities (Zhang et al. (1998)). Neural networks, being nonlinear and nonparametric models, can approximate almost any functional form accurately, as stated by the universal approximation theorem (Hornik et al. (1989)). Consequently, they are valuable modeling tools, particularly when there is limited prior knowledge about the appropriate functional relationships.
However, the utilization of neural networks in macroeconomic studies has been relatively limited due to the small sample sizes and low-frequency nature of macroeconomic data. Swanson and White (1997) compare artificial neural networks with linear models in terms of predictive performance for nine macroeconomic variables, revealing only marginal improvements in forecast accuracy. Moshiri and Cameron (2000) apply neural networks to inflation forecasting using a dataset of 300 observations spanning 25 years of monthly data. Tkacz (2001) compares multivariate neural networks with linear and univariate models, finding minor forecast improvements in the short term but more pronounced benefits for longer horizons, such as a one-year forecast. Qi (2001) employs a simple feed-forward neural network to predict US recessions using a range of financial and economic indicators, identifying some indicators as useful for prediction. More recently, Puglia and Tucker (2021) compare neural networks with probit regression in forecasting US recessions using the term spread and other macro-financial variables, finding little difference between the models. Wang et al. (2022) employ a specific type of recurrent neural network, namely a Bi-LSTM with autoencoder, along with other machine learning models to predict the beginning and end of economic recessions in the US. Their results suggest that the Bi-LSTM with autoencoder is the most accurate model. The novelty of this paper is twofold: Firstly, it focuses on two special types of recurrent neural network models, LSTM and GRU, which address the limitations of standard recurrent neural networks related to the exploding and vanishing gradient problems. Their performance is compared, in the context of recession forecasting, to that of simple feed-forward neural networks, which suffer from the key limitation that the temporal dependence must be specified upfront in the design of the model, and to that of traditional linear models. Secondly, the paper applies the SHAP (SHapley Additive exPlanations) method to GRU to explore the importance of features for different forecast horizons. In the context of recession forecasting, Puglia and Tucker (2021) and Delgado et al. (2022) also use the SHAP method to decompose recession forecasts, but they apply it to models other than LSTM and GRU. The three main findings can be summarized as follows: Firstly, the out-of-sample performance strongly supports the application of LSTM and GRU in the area of recession forecasting, especially for long-term forecasting tasks. They outperform other types of models across 5 forecasting horizons with respect to different types of statistical performance metrics. Secondly, GRU and ridge logistic regression models differ in assessing predictor importance, as is evident in the variable order based on SHAP values. Lastly, while the leading predictors for GRU and ridge logistic regression models slightly differ, key indicators like the S&P 500 index, real GDP, and private residential fixed investment consistently emerge for short-term predictions (up to 3 months). For longer-term forecasts (6 months or more), the term spread and producer price index take precedence. These results are corroborated by local interpretable model-agnostic explanations (LIME) and marginal effects, respectively. The remainder of the paper is organized as follows. Section 2 explains the data used. Section 3 describes the models and forecast evaluation metrics and outlines the research methodology. 
Section 4 reports the main results, Section 5 investigates feature importance, and Section 6 concludes. ## 2 Data Prior studies in business cycle forecasting often rely on macroeconomic indicators, which are subject to revisions after their initial estimates. Stark and Croushore (2002) demonstrate that the accuracy of forecasts is influenced by using the most up-to-date data instead of real-time data. Therefore, when comparing forecasts from new models to benchmark forecasts, it is crucial to ensure that the comparisons are based on real-time data. In my research, the focus is primarily on assessing the real-time predictability of the Great Recession and the Covid-19 recession in the United States. This necessitates working with real-time data since the information that is available in hindsight was not accessible prior to the recessions. To evaluate the predictability of the recessions, I utilize the same data that was actually available to real-time forecasters for out-of-sample forecasting. The dataset employed for prediction consists of 194 real-time vintages of macroeconomic and financial market variables, covering the period from February 1967 to October 2021. The out-of-sample forecasting commences in November 2006, utilizing the real-time vintage of data. Taking into account previous studies like Vrontos et al. (2021) and the availability of real-time data, a set of 25 predictors is chosen. A detailed list with descriptions of these variables can be found in Table 1. The selected predictors cover a wide range of categories, including output, income, prices, the labor market, the housing market, money and credit, and the financial market. The data frequency varies from daily to quarterly. To facilitate model estimation, higher-frequency data are aggregated into monthly data using the mean. Quarterly frequency variables are transformed into monthly equivalents using natural cubic spline interpolation, following the methodology outlined in Vrontos et al. (2021); a sketch of this step appears after Table 1. Specifically, at each month, all available data up to that point are used to calculate the interpolating cubic spline. This spline curve is then utilized to generate monthly frequency data points between the given data points. The majority of monthly data snapshots for the variables are obtained from ALFRED (Archival Federal Reserve Economic Data). However, there are a few selected predictors for which the real-time data prior to 2013 is not available in ALFRED. The first set of variables, including real personal income excluding transfer receipts and real manufacturing and trade sales, along with total non-farm payroll employment and the industrial production index, are used by Chauvet and Piger (2008) to identify business cycle dates in real-time. The real-time data for these series, provided by Jeremy Piger on his website, ends in August 2013. However, it can be easily extended beyond 2013 by using the data in ALFRED. The second set of variables consists of real M1 and M2 money stock, for which the earliest available real-time data in ALFRED is from January 2014. \begin{table} \begin{tabular}{l l l l l l} \hline Nr. 
& Predictive variable & Abbreviation & Category & Transformation & Frequency \\ \hline 1 & Average hourly earnings of production and nonsupervisory employees & AHETPI & Income & Percent change & Monthly \\ 2 & Average weekly hours of production and nonsupervisory employees & AWHNNOAG & Labor market & Percent change & Monthly \\ 3 & Moody’s BAA yield & BAA & Money and credit & First-order difference & Monthly \\ 4 & Moody’s BAA yield relative to 10-Year treasury yield & BAA10YM & Money and credit & First-order difference & Monthly \\ 5 & Real manufacturing and trade industries sales & CMRMTSPL & Output & Log growth rate & Monthly \\ 6 & Corporate profits after tax & CP & Income & Log growth rate & Quarterly \\ 7 & Real disposable personal income & DSPIC96 & Income & Log growth rate & Monthly \\ 8 & Effective federal funds rate & FEDFUNDS & Financial market & First-order difference & Monthly \\ 9 & Real gross domestic product & GDPC1 & Output & Log growth rate & Quarterly \\ 10 & Privately-owned housing units started & HOUST & Housing market & Log growth rate & Monthly \\ 11 & Industrial production index & INDPRO & Output & Log growth rate & Monthly \\ 12 & Real M1 money stock & M1REAL & Money and credit & First-order difference & Monthly \\ 13 & Real M2 money stock & M2REAL & Money and credit & First-order difference & Monthly \\ 14 & Non-farm payroll total & PAYEMS & Labor market & Log growth rate & Monthly \\ 15 & Real personal consumption expenditures & PCEC96 & Prices & Log growth rate & Monthly \\ 16 & Privately-owned housing units permitted & PERMIT & Housing market & Log growth rate & Monthly \\ 17 & Producer price index by all commodities & PPIACO & Prices & Log growth rate & Monthly \\ 18 & Private residential fixed investment & PRFI & Housing market & Log growth rate & Quarterly \\ 19 & S\&P 500 index & SP500 & Financial market & Log growth rate & Daily \\ 20 & 3-month treasury bill rate & TB3MS & Financial market & First-order difference & Monthly \\ 21 & Term spread - 5-year treasury yield minus 3-month treasury bill rate & T5Y3MM & Financial market & First-order difference & Monthly \\ 22 & Consumer Sentiment - University of Michigan & UMCSENT & Prices & Log growth rate & Monthly \\ 23 & Unemployment rate & UNRATE & Labor market & First-order difference & Monthly \\ 24 & Producer price index by commodity: final demand: finished goods & WPSFD49207 & Prices & Log growth rate & Monthly \\ 25 & Real personal income excluding current transfer receipts & W875RX1 & Income & Log growth rate & Monthly \\ \hline \end{tabular} \end{table} Table 1: Overview of predictors To address the absence of real-time vintages for real M1 and M2 stock before this date, nominal M1 and M2 stock are adjusted for inflation using the consumer price index, which has real-time vintages readily accessible in ALFRED. 
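The quarterly-to-monthly interpolation step described in this section can be illustrated in a few lines. The following is a minimal sketch using SciPy's natural cubic spline; the series values are toy numbers, not actual vintage data.

```python
# A sketch of converting a quarterly series to monthly frequency with a
# natural cubic spline, as described above; the values are illustrative only.
import numpy as np
from scipy.interpolate import CubicSpline

months_observed = np.array([0, 3, 6, 9, 12])       # months at which quarterly data exist
values = np.array([1.00, 1.02, 1.01, 1.04, 1.05])  # e.g., log real GDP observations

spline = CubicSpline(months_observed, values, bc_type="natural")
monthly_grid = np.arange(0, 13)                    # every month in the window
monthly_series = spline(monthly_grid)              # interpolated monthly data points
```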
To identify recession periods in the United States, I rely on the business cycle expansion and contraction dates determined by the National Bureau of Economic Research (NBER). NBER is widely regarded as the standard reference for US business cycles in the existing literature. In this study, recession months are defined as the period following the peak and continuing until the trough, while all other months are considered periods of economic expansion. The earliest available NBER recession indicator vintage in ALFRED is from September 2014. For the monthly vintages preceding that date, I manually collect and construct them based on the official announcements made by the NBER business cycle dating committee1. However, it is worth noting that one major practical challenge with NBER business cycle dates is that they are often announced with significant publication delays. Footnote 1: [https://www.nber.org/research/business-cycle-dating/business-cycle-dating-committee-announcements](https://www.nber.org/research/business-cycle-dating/business-cycle-dating-committee-announcements) Table 2 presents the peak and trough dates of the US business cycle from 1980 to 2021, along with their corresponding announcement dates. The publication lags for the most recent six recessions range from 5 to 21 months, with troughs being identified later than peaks on average. While the NBER business cycle dates remain unchanged once finalized, the presence of publication lags complicates the creation of real-time versions of the NBER recession indicator. To address this, the NBER recession indicator in ALFRED is constructed under the assumption that the previous state remains unchanged until a new turning point is officially announced. ## 3 Econometric methodology In this section, the focus is on the technical aspects of the models used to predict the two most recent recessions in the United States. The paper specifically explores neural networks, which are complex nonlinear models composed of interconnected nodes arranged in multiple layers. These networks have the ability to approximate any linear or nonlinear continuous function, as stated by the universal approximation theorem (Hornik et al. (1989)), given that the network is wide or deep enough. In this paper, three types of neural networks are employed for recession forecasting: feedforward neural network (FFN), long short-term memory (LSTM), and gated recurrent unit (GRU). Section 3.1 provides a detailed description of these models. Model specifications, including estimation and prediction techniques, are discussed in Section 3.2. Furthermore, Section 3.3 introduces statistical measures that effectively evaluate the performance of the predictions. ### Neural Networks #### 3.1.1 Feedforward neural network The feed-forward neural network (FFN) is a widely used and straightforward type of artificial neural network. It operates by transmitting information in a unidirectional manner, where data flows from the input layer to the output layer without feedback loops. Figure 1 illustrates, based on NN-SVG2, a three-layer FFN designed for binary classification. It comprises an input layer with eight nodes, a hidden layer with four nodes, and an output layer with one node. \begin{table} \begin{tabular}{l l l l} \hline \hline Date & Type & Duration & Announcement \\ \hline 1980:01 & Peak & 6 & 1980:06(+5) \\ 1980:07 & Trough & 12 & 1981:07(+12) \\ 1981:07 & Peak & 16 & 1982:01(+6) \\ 1982:11 & Trough & 92 & 1983:07(+8) \\ 1990:07 & Peak & 8 & 1991:04(+9) \\ 1991:03 & Trough & 120 & 1992:12(+21) \\ 2001:03 & Peak & 8 & 2001:11(+8) \\ 2001:11 & Trough & 73 & 2003:07(+20) \\ 2007:12 & Peak & 18 & 2008:12(+12) \\ 2009:06 & Trough & 128 & 2010:09(+15) \\ 2020:02 & Peak & 2 & 2020:06(+4) \\ 2020:04 & Trough & ongoing & 2021:07(+15) \\ \hline \hline \end{tabular} The table reports NBER business cycle dates, including the type of turning point (peak or trough), the duration in months, and the time of announcement. 
The data covers the period from 1980 to 2021, and publication lags in months are indicated in parentheses. \end{table} Table 2: US Business Cycle dates The top nodes in both the input and hidden layers serve as bias nodes. In this configuration, data from the input layer is passed through the hidden layer, which transforms the data. The values obtained from the hidden layer are then forwarded to the output layer, which translates them into desired outputs based on the problem at hand. In the case of binary classification, the last node in the output layer utilizes the sigmoid activation function, producing a value between 0 and 1. This value represents the probability of an event occurring. Figure 1: The architecture of a feed-forward neural network Depending on the number of explanatory variables in the data and the number of nodes in the hidden layer, the unknown underlying function \(f\) for an output node can be written as \[f(X)=g_{2}\Bigg{[}\alpha_{0}+\sum_{j=1}^{k}\alpha_{j}g_{1}\Bigg{(}\beta_{0j}+\sum _{i=1}^{n}\beta_{ij}x_{i}\Bigg{)}\Bigg{]}+\epsilon, \tag{1}\] where \(n\) is the number of predictors, \(k\) is the number of units in the hidden layer, \(g_{1}\) is the activation function in the hidden layer, \(g_{2}\) is the activation function in the output layer, \(\beta_{ij}\) and \(\alpha_{j}\) represent the weight parameters from the input to the hidden layer and from the hidden to the output layer, respectively, and \(\epsilon\) is the error term. The weight parameters are estimated by the backpropagation process, which repeatedly updates the weights until convergence based on the derivatives of the cost function with respect to the weights in each layer. 
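Equation (1) translates almost directly into code. Below is a minimal, hedged sketch of such a binary-classification FFN in Keras; the layer widths mirror the eight-node and four-node example of Figure 1 and are not the tuned values used later in the paper.

```python
# A minimal FFN for binary classification, mirroring equation (1):
# one hidden layer (g1) feeding a sigmoid output node (g2).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(8,)),               # input layer sized after Figure 1 (bias terms are implicit in Keras)
    layers.Dense(4, activation="relu"),     # hidden layer: g1 in equation (1)
    layers.Dense(1, activation="sigmoid"),  # output layer: g2, event probability
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```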
#### 3.1.2 Long short-term memory FFNs are restricted to one-way signal flow, meaning that there is no feedback mechanism where the output of a layer can influence the same layer. Consequently, FFNs lack the ability to capture temporal dependencies in time series data. Although time series data can be fed into an FFN by incorporating additional input units representing previous time points, the main limitation lies in the fixed dimensionality of inputs and outputs. In other words, the precise length of temporal dependence must be predetermined, which is often unknown in real-world scenarios. This is where recurrent neural networks (RNNs) come into play. RNNs establish connections that form cycles, allowing for feedback loops where data can be fed back into the input before being forwarded again. This feedback loop enables RNNs to maintain an internal state or memory to process sequences of inputs. Theoretically, RNNs can retain all information over time and handle long-term dependencies. However, they face two computational challenges. Firstly, as input sequences grow longer, the backpropagation process relies heavily on the chain rule, which may lead to vanishing gradients. If any gradient approaches zero, all other gradients will diminish exponentially fast due to the multiplicative nature of the chain rule. This phenomenon, known as the vanishing gradient problem, prevents effective learning in the model. Secondly, depending on the length of input sequences, the gradient of the loss function can become excessively large and result in numerical instability, referred to as the exploding gradient problem. Hochreiter and Schmidhuber (1997) introduced the long short-term memory (LSTM) architecture and a corresponding learning algorithm to address the challenges of error back-flow and long-term dependency in recurrent networks. Figure 2 illustrates a basic LSTM neural network comprising an LSTM layer and an output layer. The input data is represented by a three-dimensional tensor, with dimensions for batch size, number of features, and sequence length. The LSTM layer consists of LSTM cells, with each cell responsible for information retrieval at a specific time point in a time series. These cells contain hidden units comprising special nodes, or gates, that process the information in a predetermined way. Figure 3 provides a closer look at the hidden units of two commonly used variants of recurrent neural networks (RNNs): LSTM and gated recurrent unit (GRU). Figure 2: The architecture of a long short-term memory network For both Figures 2 and 3, the graphics of the hidden units are adapted from Chris Colah's blog3. The LSTM unit on the left features three gates (forget, input, and output gates) denoted by red dotted lines, along with a cell state, which is a crucial distinction between RNNs and LSTM networks. The cell state functions like a conveyor belt that extends across the entire sequence, facilitating the retention of information over long periods. This mechanism closely resembles the long-term memory function of the human brain. The three gates regulate the flow of information to and from the cell state, enhancing the overall functionality of the network. Footnote 3: [https://colah.github.io/posts/2015-08-Understanding-LSTMs/](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) When information from the previous time step \(h_{t-1}\) encounters new information \(x_{t}\), the forget gate evaluates which information from the cell state \(C_{t-1}\) should be discarded. Formally, this can be described as follows: \[f_{t}=\sigma(W_{f}\cdot[h_{t-1}\;\;x_{t}]+b_{f}) \tag{2}\] Figure 3: Hidden units of a LSTM and a GRU network with specialized gates Equation (2) presents a concise representation of the mathematical operations within the forget gate. The terms enclosed in brackets represent the linear operations within the activation function, which introduces nonlinearity. The input vector \([h_{t-1}\;x_{t}]\) is multiplied by the weight matrix \(W_{f}\) and combined with the bias vector \(b_{f}\); the result is then passed through the activation function, typically a sigmoid function. This ensures that the output values fall within the range of 0 to 1. Later, these values will be multiplied by each corresponding element in the previous cell state \(C_{t-1}\), determining the proportion of old information to be retained in the new cell state. The subsequent step involves identifying the portion of new information deemed valuable for storage in the cell state. This process occurs in two stages. Firstly, a node utilizing the hyperbolic tangent function proposes a vector of new candidate values for the cell state, denoted as \(\tilde{C}_{t}\): \[\tilde{C}_{t}=\tanh(W_{C}\cdot[h_{t-1}\;x_{t}]+b_{C}). \tag{3}\] Secondly, the input gate regulates the magnitude of the update, determining which values of \(\tilde{C}_{t}\) will be stored in the cell state \(C_{t}\): \[i_{t}=\sigma(W_{i}\cdot[h_{t-1}\;x_{t}]+b_{i}). \tag{4}\] Similarly, the input gate generates values ranging from 0 to 1, which, when multiplied by \(\tilde{C}_{t}\), determine the proportion of the new candidate that should be incorporated into the cell state: \[C_{t}=f_{t}\times C_{t-1}+i_{t}\times\tilde{C}_{t}. \tag{5}\] Equation (5) provides a summary of the update process. 
The new cell state \(C_{t}\) is obtained by combining the previous state \(C_{t-1}\) multiplied by the forget gate \(f_{t}\), which discards irrelevant information from the old state, and the new candidate values \(\tilde{C}_{t}\) scaled by the input gate \(i_{t}\), retaining only the most important information from the new candidate state. The resulting new hidden state \(h_{t}\) is determined by the current cell state \(C_{t}\) and the output gate \(o_{t}\): \[o_{t}=\sigma(W_{o}\cdot[h_{t-1}\;x_{t}]+b_{o}). \tag{6}\] The output gate in equation (6) utilizes a sigmoid activation function to determine the portion of the new cell state that will be outputted: \[h_{t}=o_{t}\times\tanh(C_{t}). \tag{7}\] The cell state undergoes the hyperbolic tangent function and is subsequently multiplied by the output of the output gate, yielding the new hidden state \(h_{t}\). Therefore, \(h_{t}\) selectively captures the most pertinent information from the cell state, representing the short-term memory, while the cell state retains the long-term memory. #### 3.1.3 Gated recurrent unit In Figure 3, the right panel shows a gated recurrent unit (GRU), which differs from the LSTM in terms of its gate structure and quantity. GRU, introduced by Cho et al. (2014), aims to simplify the LSTM by combining two gates into a single gate and merging the cell state with the hidden state. Unlike LSTM, GRU eliminates the output gate and integrates the functions of the forget and input gates into a single gate known as the update gate. This update gate processes new information based on the previous hidden state, consequently updating the hidden state. Initially, the previous hidden state \(h_{t-1}\) and the current input \(x_{t}\) are introduced and passed through the reset gate, which determines the extent to which past information should be disregarded: \[r_{t}=\sigma(W_{r}\cdot[h_{t-1}\;\;x_{t}]+b_{r}). \tag{8}\] As the sigmoid activation function outputs values between 0 and 1, the resulting values can be interpreted as the proportion of past information to retain. This value, denoted as \(r_{t}\), is multiplied by \(h_{t-1}\) and passed through the hyperbolic tangent (tanh) activation function. This process yields a vector of new candidate values, represented as \(\tilde{h}_{t}\): \[\tilde{h}_{t}=\tanh(W_{h}\cdot[r_{t}\cdot h_{t-1}\;\;x_{t}]+b_{h}). \tag{9}\] The update gate governs the decision of which information to update in the next step: \[z_{t}=\sigma(W_{z}\cdot[h_{t-1}\ x_{t}]+b_{z}). \tag{10}\] Finally, the new hidden state \(h_{t}\) is computed as a weighted average of the previous hidden state \(h_{t-1}\) and the new candidate values \(\tilde{h}_{t}\): \[h_{t}=(1-z_{t})\cdot h_{t-1}+z_{t}\cdot\tilde{h}_{t}. \tag{11}\] 
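To make the gate arithmetic in equations (2)-(11) concrete, here is a minimal NumPy sketch of one LSTM step and one GRU step. The dictionaries `W` and `b` are illustrative stand-ins for trained weight matrices and bias vectors; they are not part of the estimated models.

```python
# One LSTM step (equations (2)-(7)) and one GRU step (equations (8)-(11)).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    v = np.concatenate([h_prev, x_t])            # the input vector [h_{t-1}  x_t]
    f_t = sigmoid(W["f"] @ v + b["f"])           # forget gate, eq. (2)
    C_tilde = np.tanh(W["C"] @ v + b["C"])       # candidate cell state, eq. (3)
    i_t = sigmoid(W["i"] @ v + b["i"])           # input gate, eq. (4)
    C_t = f_t * C_prev + i_t * C_tilde           # cell state update, eq. (5)
    o_t = sigmoid(W["o"] @ v + b["o"])           # output gate, eq. (6)
    h_t = o_t * np.tanh(C_t)                     # new hidden state, eq. (7)
    return h_t, C_t

def gru_step(x_t, h_prev, W, b):
    v = np.concatenate([h_prev, x_t])
    r_t = sigmoid(W["r"] @ v + b["r"])           # reset gate, eq. (8)
    h_tilde = np.tanh(W["h"] @ np.concatenate([r_t * h_prev, x_t]) + b["h"])  # eq. (9)
    z_t = sigmoid(W["z"] @ v + b["z"])           # update gate, eq. (10)
    return (1 - z_t) * h_prev + z_t * h_tilde    # new hidden state, eq. (11)
```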
### Model specifications Before estimating the model, it is necessary to prepare the chosen type of neural network. Figure 4 illustrates a simplified workflow chart outlining the process of forecasting using neural networks. Recession forecasting using neural networks involves five steps: Data must first undergo preprocessing before being fed into the model. Depending on the specific type of neural network, the network architecture is chosen. Hyperparameters are then fine-tuned using the blocked form of cross-validation. The second and third steps can be combined, treating the number of layers and units within those layers as hyperparameters. Once the data is prepared and the optimal set of hyperparameters is determined, the model is ready to be estimated and generate predictions. #### Data preprocessing The first step involves preprocessing the data before it can be utilized by the model. This preprocessing stage comprises multiple smaller steps. Initially, each series is evaluated and transformed individually to ensure stationarity. Zhang et al. (2020) argue that while advanced optimization algorithms like RMSProp and Adam are effective for non-stationary data, they incorporate historical gradients in the update calculations, which can result in a lack of relevance to past information when dealing with non-stationary data where distributions change over time. Therefore, it is prudent to use stationary time series data. Figure 4: Workflow chart of the forecasting process The second step involves seasonally adjusting and standardizing all explanatory variables. Lastly, a generator function is employed to generate data batches throughout the estimation process. Optimization algorithms based on stochastic gradient descent often utilize smaller batch sizes or minibatches. Bottou (2010) and Ge et al. (2015) highlight that minibatches offer the advantage of avoiding memory loss and introducing sufficient noise into each gradient update, aiding in escaping saddle points or local minima while achieving faster convergence. Additionally, the LSTM and GRU networks require a three-dimensional tensor as input, consisting of the batch size, number of features, and sequence length as axes. To accommodate LSTM and GRU networks, the generator function transforms the data into a suitable three-dimensional tensor format. #### Network architecture In Section 3.1, three neural network models are presented, each based on a different architecture. The number of nodes or units in the input layer is determined by the number of features. Depending on the desired number of time points to consider, the input dimension increases accordingly. For example, if the last 12 months of data from 25 predictors are to be used for future predictions, the input dimension would increase from 25 to \(25\times 12=300\). Given the relatively small dataset with a maximum of 657 observations across 25 explanatory variables, I allow a maximum of two hidden layers. The number of hidden layers and the number of nodes in those layers are treated as hyperparameters that should be optimized during the cross-validation process, which will be discussed in more detail later. Both the input and hidden layers use the rectified linear unit as the activation function for their units. The output layer consists of a single unit that uses the sigmoid activation function, producing a number between 0 and 1. This number can be interpreted as the probability of a recession. The LSTM neural network architecture consists of as many layers as the number of time steps to look back, along with an output layer. In the context of recession forecasting, I choose to examine the temporal variations of the predictors over the last 12 months, or 1 year, which corresponds to 12 lagged variables of the same predictor. For each time point from \(t-1\) to \(t-12\), there are 12 LSTM layers, each with the same number of hidden units ranging from 16 to 64. The first unit in the first LSTM layer is linked to the first unit in the second LSTM layer, and the same holds for the rest of the LSTM layers. This approach ensures that each chain of units explores different feature dimensions at different time points. Additionally, the network may require a second chain of layers, leading to a stacked LSTM. 
The optimal number of stacked layers is determined by the hyperparameter optimization process. The output layer in LSTM is the same as in the FFN. The GRU follows the same architectural structure as LSTM but differs in the structure and functionality of the hidden units within the GRU layer. #### Hyperparameter optimization Deep neural networks with multiple hidden layers and numerous parameters have the capacity to learn complex relationships. However, when training data is limited, some of these relationships may be the outcome of sampling noise and might not be present in real test data. This gives rise to overfitting, which occurs when a model fails to generalize from observed data to new data. Overfitting is a commonly acknowledged challenge in the supervised machine learning framework, and various methods employing different strategies have been devised to mitigate its effects and prevent the model from overfitting. Ying (2019) categorizes methods to address overfitting into four groups: network reduction, data augmentation, early stopping, and regularization. Network reduction involves reducing the depth and width of a neural network by decreasing the number of layers and units, thereby reducing the total number of parameters to estimate. This approach helps the model focus on capturing essential patterns in the training data, leading to better generalization on test data. To implement network reduction, I limit the network to a maximum of two hidden layers and choose a relatively small number of units in these layers. Additionally, through cross-validation, the model selects the optimal depth and width that minimizes validation loss, ensuring good generalization on unseen validation data. Complex models with numerous parameters require a substantial amount of data to distinguish meaningful patterns from noise. Therefore, expanding the training data is an effective approach to enhance the model's generalizability. Data augmentation involves not only acquiring more training data but also employing techniques directly applied to the existing data, such as random sampling or reshuffling. I am unable to address this issue for two reasons. Firstly, I use the same data as in Chung (2022) to compare the performance of neural networks with traditional statistical models like logistic regression. Therefore, data augmentation is not suitable as it would alter the original dataset. Secondly, the temporal dependence of the time series data across multiple variables is a crucial aspect that needs to be explored. In this context, data augmentation strategies such as random sampling or reshuffling are less desirable. Instead, early stopping is implemented as a strategy to halt the training process when the gap between training loss and validation loss begins to widen. The weights are then set to the values corresponding to the smallest gap, preventing the model from excessively memorizing the training data. This helps in improving the model's performance on unseen data. To incorporate early stopping, I include a callback during the estimation process that stops the iteration and restores the best weights when the validation loss does not decrease for 5 consecutive epochs during cross-validation and for 10 consecutive epochs during the final estimation, where an epoch means one complete pass of the training dataset through the algorithm. Regularization is a technique used to mitigate the impact of less important features in a model. 
There are two commonly applied categories of regularization methods in neural networks. The first category involves adding a penalty term or regularizer to the loss function. Two well-known types of regularizers, namely \(L1\) and \(L2\), are commonly used, similar to their application in penalized regression. \(L1\) regularization, known as Lasso regression (Tibshirani (1996)), assigns zero weights to unimportant features, effectively removing them from the model. This ensures that only influential features with significant effects on the variable of interest are retained. On the other hand, \(L2\) regularization, known as Ridge regression (Hoerl and Kennard (1970)), assigns lower weights to unimportant features rather than discarding them completely. This approach aims to extract as much relevant information as possible while controlling model complexity and reducing overfitting. In my model, I incorporate \(L2\) regularization to address these concerns, as Chung (2022) demonstrated the superiority of Ridge regression over Lasso regression in the context of recession forecasting. The second category of regularization methods involves using a technique called dropout. Dropout was introduced by Srivastava et al. (2014) and involves randomly dropping units and their connections from the neural network during training. A certain percentage of hidden units are randomly dropped to create a thinned network, which is then trained using stochastic gradient descent. After training, the dropped units are restored, and the process is repeated. Dropout can be seen as approximating the effects of averaging the estimates over multiple smaller networks, while also preventing overfitting. I leverage the dropout technique in my model, treating the dropout percentage as a hyperparameter to be optimized through cross-validation. Neural networks have hyperparameters that need to be determined prior to training, in addition to the parameters that are estimated during training. These hyperparameters include the number of layers, units in each layer, activation function, batch size, learning rate, dropout percentage, and more. The values of these hyperparameters must be specified before training can begin. There are various methods to search for the best combination of hyperparameters, known as hyperparameter optimization. One simple approach is manual search, where the researcher selects their own set of hyperparameters based on existing literature and evaluates the model's performance on a validation dataset. This process is repeated with different hyperparameter settings until the best combination is found. However, manual search can be time-consuming and does not guarantee finding the optimal hyperparameters. Grid search is another approach, where a predetermined set of values is specified for each hyperparameter. Every possible combination of values is then evaluated, resulting in a large number of trials. However, as indicated by Bellman (1961), the number of possible combinations grows exponentially with the number of hyperparameters, leading to the curse of dimensionality. This can make grid search impractical for models with many hyperparameters. A more effective approach, demonstrated by Bergstra and Bengio (2012), is random search. Random search involves randomly selecting values from predetermined ranges for each hyperparameter. The advantage of random search is that it can identify good or even better models within a smaller fraction of computation time compared to manual or grid search methods. 
Following this approach, I perform random search using predefined ranges of values for the hyperparameters. The specific ranges used for random search in the optimization process are reported in Table 3. Figure 5 illustrates the form of cross-validation that I employ to determine the optimal set of hyperparameters. Each block of the training set is divided into two sections at every iteration, with the validation set always following the training split. This approach ensures that the natural order of observations is maintained within each block. For each block, the size of the validation set remains the same, while the training set gets larger as the validation set of the prior block is added to it. The number of iterations in the cross-validation setup depends on the size of each block. If the block size is small, the number of blocks and iterations increases. However, selecting an excessively small block size may result in certain blocks having insufficient or no recession data, leading to sampling bias. Considering the information presented in Table 2, which indicates the longest period between two recessions in US history as 128 months, I set the length of each validation block to be 128 months to ensure that each validation set contains data from at least one recession. I let the length \(l\) of the first training block be equal to \(l=N_{total}-128\times(\lfloor\frac{N_{total}}{128}\rfloor-1)\), where \(N_{total}\) denotes the total number of observations in a data vintage. This guarantees that the first training block is always larger than the validation block. Subsequently, I employ an expanding window approach, increasing the length of the training set by 128 months at each step while maintaining a validation block of the same length (128 months). This methodology ensures that scarce recession data are fully utilized. For each combination of hyperparameters, I calculate the average validation loss across all cross-validations and store it. The set of hyperparameters with the lowest average validation loss is selected as the optimal choice. According to mathematical calculations by Bergstra and Bengio (2012), using random search with 60 trials is likely to yield a set of hyperparameters that falls within the top 5% interval around the optimal solution with a 95% probability. Therefore, I conduct 60 trials each time I train the model for new predictions. Once the optimal combination of hyperparameters is determined, I split the entire dataset into training and validation parts in the same proportion, maintaining the chronological order of the data. I then perform a final estimation and validation of the model to improve the accuracy of the weights and check for any signs of overfitting. \begin{table} \begin{tabular}{l|l} \hline Type & Range \\ \hline \# hidden layer & \(\in(1,2)\) \\ \hline \# unit & \(\in(16,32,64)\) \\ \hline batch size & \(\in(16,32,64)\) \\ \hline dropout & \(\in(0,0.1,0.2,0.3,0.4,0.5)\) \\ \hline recurrent dropout & \(\in(0,0.1,0.2,0.3,0.4,0.5)\) \\ \hline weight decay & \(\in(0,0.1,0.2)\) \\ \hline learning rate & \(\in(0.01,0.001)\) \\ \hline \end{tabular} The table provides the ranges of possible values for different hyperparameters of the neural networks used in the analysis. The optimal combination of these hyperparameters is determined using the time series cross-validation technique during the hyperparameter optimization process. \end{table} Table 3: Tuning hyperparameters 
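The random search over the ranges in Table 3 combined with the blocked, expanding-window cross-validation might be organized as in the following schematic. Here `fit_and_validate` is a hypothetical helper that trains a network with the sampled hyperparameters and returns its validation loss, and only a subset of the tuning ranges is shown.

```python
# A schematic of 60-trial random search with blocked, expanding-window CV.
import random

def blocked_cv_random_search(data, n_total, n_trials=60, block=128):
    grid = {"units": [16, 32, 64], "batch_size": [16, 32, 64],
            "dropout": [0, 0.1, 0.2, 0.3, 0.4, 0.5],
            "learning_rate": [0.01, 0.001]}
    # length of the first training block: l = N_total - 128 * (floor(N_total/128) - 1)
    first_train = n_total - block * (n_total // block - 1)
    best, best_loss = None, float("inf")
    for _ in range(n_trials):
        hp = {k: random.choice(v) for k, v in grid.items()}
        losses, end = [], first_train
        while end + block <= n_total:            # expanding training window
            train = data[:end]
            val = data[end:end + block]          # 128-month validation block
            losses.append(fit_and_validate(train, val, hp))  # hypothetical helper
            end += block
        avg_loss = sum(losses) / len(losses)     # average validation loss
        if avg_loss < best_loss:
            best, best_loss = hp, avg_loss
    return best
```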
#### Model estimation The initial dataset used to train the model consists of real-time data from October 2005. It contains the latest data available in October 2005, which stretches from February 1967 to August 2005. For each month in the forecasting period, a dataset is constructed containing the NBER recession indicator and other relevant time series data. This dataset is used to train the model and make predictions for the following month. The forecasting process assumes that predictions are made on the first day of each month. The forecaster incorporates all available data up to the end of the previous month to predict for the current month and beyond, based on the forecast horizon. Most of the data for a specific month is typically published by the last day of the following month. As a result, there is a lag of at least two months (depending on the forecast horizon) between the latest available data and the month being predicted. Figure 5: Cross-validation for time series Occasionally, data may have time delays or missing values. In such cases, a \(k\)-nearest-neighbor imputation method is used during the data pre-processing stage. Experimenting with alternative methods, such as the bagged tree imputation algorithm, does not affect the results significantly, although the latter is known in the existing literature for its superior performance, which comes at higher computational cost (Stekhoven and Buhlmann (2011)). Once the hyperparameters are optimized, the neural network is prepared to fit the training data. The data is fed into the model, processed in the manner of equation (1) in the case of an FFN, and transformed into an output. This output is then compared to the actual output, and the error signals are propagated backward through the network to adjust the weights accordingly. This iterative process is known as backpropagation and continues until a specific condition is satisfied. Although the concept appears straightforward, the estimation process involves extensive mathematical computations. The parameter values in equation (1) are estimated to minimize the following binary cross-entropy loss function: \[\text{Loss}=\frac{1}{N}\sum_{i=1}^{N}-[y_{i}\cdot\ln(p_{i})+(1-y_{i})\cdot\ln( 1-p_{i})]. \tag{12}\] In most cases, there is no analytical solution available for minimizing such problems, and therefore the parameters need to be estimated numerically. To perform this estimation, I utilize the Adam optimizer, which was introduced by Kingma and Ba (2014). The Adam optimizer is specifically designed for optimizing stochastic loss functions using first-order gradients. It combines the strengths of two popular optimization methods, AdaGrad (Duchi et al. (2011)) and RMSProp (Tieleman and Hinton (2012)). The Adam optimizer offers computational efficiency and requires minimal memory resources. 
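A hedged sketch of this estimation step in Keras, assuming a network `model` as defined earlier and preprocessed arrays `X_train`, `y_train`, `X_val`, and `y_val`: the binary cross-entropy loss of equation (12) is minimized with Adam, and an early-stopping callback restores the best weights as described in the hyperparameter section.

```python
# Final estimation: Adam + binary cross-entropy (equation (12)), with early
# stopping after 10 epochs without validation improvement.
from tensorflow import keras

model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy")

stopper = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                        restore_best_weights=True)
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=200, batch_size=32,   # the epoch cap and batch size are placeholders
          callbacks=[stopper], verbose=0)
```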
#### Prediction To fully utilize the capacity of neural networks, I incorporate all 12 monthly lags of each predictor over the past year. De Veaux and Ungar (1994) argue that due to the overparameterization of neural networks, individual weights associated with multicollinearity become less influential. The final output of a neural network is a result of various combinations of activation functions that involve interactions among predictors, making the impact of multicollinearity typically insignificant. Moreover, the backpropagation algorithm used in neural networks does not require inverting matrices, which can be problematic when there is perfect or severe multicollinearity. Since the effectiveness of predictive variables may vary across different forecast horizons, I explore five distinct windows: nowcasting, immediate-term, short-term, medium-term, and long-term. These windows correspond to predicting the current month, one month ahead, three months ahead, six months ahead, and twelve months ahead, respectively. Accounting for the two-month publication lag, the forecast horizons represent 2-, 3-, 5-, 8-, and 14-steps-ahead forecasts using the latest set of predictive variables. For example, suppose it is November 2006, and I have data available up to September 2006, which was published in October 2006. If I aim to conduct nowcasting and forecast the recession for the current month, I use all available data up to September 2006 to predict for November 2006, resulting in a two-steps-ahead forecast. Similarly, for other forecast horizons, two extra months or steps need to be considered. After the models generate probability predictions, they are transformed into monthly zero-one indicators that represent the state of the economy, such as recession or boom, based on predefined cutpoints. The conventional approach is to use a fixed threshold, typically set at 0.5. Under this approach, if a probability prediction is equal to or greater than 0.5, it is classified as a recession, and vice versa. However, it is often mentioned that business cycles exhibit asymmetry, and therefore a 50% threshold may not be optimal for linear modeling methods. Berge and Jorda (2011) propose an optimal cutpoint within the range of 0.3 to 0.6 based on smoothed state probabilities estimated by Chauvet and Piger (2008). Ng (2014) suggests using different thresholds for different forecast horizons, ranging from 0.3 to 0.44. Vrontos et al. (2021) adopt a fixed threshold of 0.33 for classification and evaluation purposes. Neural networks, in contrast, are capable of capturing the asymmetry and other nonlinear characteristics of business cycles. Therefore, I opt to follow the traditional approach and use a fixed threshold of 0.5. The out-of-sample forecasts are produced in a quasi-recursive fashion to capture dynamic structural changes in the data. To manage computational resources, a forecast window of 12 months is selected, during which the estimated parameters remain constant. This means that once a neural network model is built, it is used to predict the next 12 months using the latest available data. Afterward, the observed data for those 12 months is added to the existing sample, and the neural network model is reestimated using this augmented dataset. The updated parameters are then used to forecast the following 12 months, and so on. Although this recursive reestimation approach is time-consuming and costly, it mimics the real-time process of generating predictions using the most up-to-date data available. To clarify this procedure, I focus on the nowcasting forecasts. The out-of-sample period begins in November 2006, and we want to predict for that month. The available data in November 2006 goes up until September 2006. The training and validation set consists of 476 monthly observations from February 1967 to September 2006. Using the optimized hyperparameters obtained through blocked cross-validation, I compute the nowcasting forecast for November 2006. The predicted value for this month is then compared to the actual value from the January 2007 vintage. The same model uses the most recent data available each month to generate predictions for the next 12 months. 
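The quasi-recursive procedure described here and continued below can be summarized schematically; `tune_and_fit` and `make_forecast` are hypothetical helpers standing in for the cross-validation, estimation, and prediction steps.

```python
# A schematic of the quasi-recursive out-of-sample loop: reestimate once per
# 12-month window on an expanding sample, then forecast month by month.
def recursive_forecasts(vintages, first_oos, last_oos, window=12):
    preds = {}
    t = first_oos
    while t <= last_oos:
        train = vintages[t]             # all data available at month t
        model = tune_and_fit(train)     # hypothetical: hyperparameter search + estimation
        for h in range(window):         # parameters held fixed for 12 months
            if t + h > last_oos:
                break
            preds[t + h] = make_forecast(model, vintages[t + h])  # hypothetical
        t += window                     # then reestimate on the augmented sample
    return preds
```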
For the next iteration, data from the last 12 months are added to the training and validation set. The model is reestimated and used to predict for November 2007 and onwards. This expanding window procedure is repeated, gradually increasing the size of the training and validation set, to generate out-of-sample forecasts for the testing period spanning from November 2006 to October 2021. The same approach is adopted for the other forecasting horizons. ### Performance evaluation The estimated models produce two types of forecasts. Firstly, the models generate probability predictions. In the second step, these probabilities are transformed into binary point predictions using a specified threshold. While the second type may not always be necessary, I include it to ensure the forecast is more easily interpretable for end-users. The performance of both the probability predictions and point predictions is evaluated using various statistical measures. With the exception of metrics specifically related to probability predictions, the evaluation is based on the output of a contingency table, commonly known as a confusion matrix, as illustrated in Table 4. This matrix provides a framework for assessing the accuracy of the predictions. The confusion matrix is composed of elements that represent the count of observations belonging to different categories. True positives (TP) and true negatives (TN) indicate the correct classification of positive and negative outcomes, respectively. False positives (FP) occur when the prediction is positive while the actual value is negative. Conversely, false negatives (FN) arise when the prediction is negative while the actual value is positive. By utilizing these counts, various performance evaluation measures are derived to assess the accuracy of the predictions. The performance evaluation of the forecasting model is assessed using the metrics listed in Table 5. The ROC (Receiver Operating Characteristics) curve displays the entire set of possible combinations of true positive rates \(TPR(c)=\frac{TP(c)}{\#\:Actual\:Positive}\) and false positive rates \(FPR(c)=\frac{FP(c)}{\#\:Actual\:Negative}\) for some cutpoint \(c\in(0,1)\) that maps the predicted probability to a binary category. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{Predicted} \\ \cline{3-4} \multicolumn{2}{|c|}{} & Positive & Negative \\ \hline \multirow{2}{*}{Actual} & Positive & TP & FN \\ & Negative & FP & TN \\ \hline \end{tabular} The table exhibits an example of the confusion matrix containing elements representing the count of observations belonging to each category. True positives (TP) and true negatives (TN) indicate the correct classification of positive and negative outcomes, respectively. False positives (FP) occur when the prediction is positive, but the actual value is negative. Conversely, false negatives (FN) arise when the prediction is negative, but the actual value is positive. 
\end{table} Table 4: Confusion Matrix \begin{table} \begin{tabular}{|c|l|c|} \hline \hline Type & Metric & Formula \\ \hline \multirow{2}{*}{Probability} & Area under the ROC curve & \(\int_{0}^{1}ROC(c)\,dc\) \\ \cline{2-3} & Area under the PR curve & \(\int_{0}^{1}PR(c)\,dc\) \\ \hline \hline \multirow{6}{*}{Prediction} & Sensitivity & \(\frac{TP}{TP+FN}\) \\ \cline{2-3} & Specificity & \(\frac{TN}{FP+TN}\) \\ \cline{2-3} & Precision & \(\frac{TP}{TP+FP}\) \\ \cline{2-3} & Balanced accuracy & \(\frac{Sensitivity+Specificity}{2}\) \\ \cline{2-3} & Matthews correlation coefficient & \(\frac{TP\times TN-FP\times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}\) \\ \cline{2-3} & \(F_{1}\)-Score & \(\frac{2}{\frac{1}{Sensitivity}+\frac{1}{Precision}}\) \\ \hline \end{tabular} The table reports the metrics used for the performance evaluation of the forecasting model. They are divided into two groups, depending on which type of predictions they refer to. \end{table} Table 5: Performance evaluation metrics Similarly, the PR (Precision-Recall) curve plots the complete set of possible combinations of precision and recall for \(c\in(0,1)\), where recall is a synonym for sensitivity. Both the area under the ROC curve and the area under the PR curve increase with the underlying metrics for a given cutpoint. By aggregating over the entire set of cutpoints, these curves assess the overall predictive ability of the forecasting model, regardless of the specific cutpoint used. Tharwat (2021) provides a comprehensive overview of these metrics, while Chung (2022) briefly discusses their strengths and weaknesses, particularly in the context of imbalanced binary classification problems. ## 4 Empirical results Table 6 presents the out-of-sample forecast performance of the models for various forecast periods. A carefully selected set of statistical metrics is used to assess the performance from different perspectives. The metrics AUROC (Area Under the Receiver Operating Characteristic Curve) and AUPRC (Area Under the Precision-Recall Curve) are employed to measure the models' raw performance in the sense that these metrics do not require any threshold to convert probabilities into binary outcomes, making them suitable for evaluating the models' pure performance. Another set of metrics is derived from a contingency table, which necessitates the use of thresholds to convert predicted probabilities into binary outcomes. The performance values of the logistic and ridge regression models, obtained from Chung (2022), are included in the analysis and can be directly compared to the neural network models in this study since they are based on the same data and forecast horizons. In the nowcasting setup, there is little difference in predictive performance between the logistic regression models and the neural network models. The logistic regression model does not outperform the other models in any of the considered metrics, but the gap is smaller than at other forecast horizons. The ridge regression model performs similarly to the feed-forward neural network (FFN). Although the FFN performs slightly better, the differences are less than 0.1 for most of the metrics. The LSTM and GRU models show significantly better performance in terms of MCC and \(F_{1}\)-Score, mainly due to their high precision values. For instance, the LSTM correctly identifies 90% of recession months, while 75% of its positive predictions are accurate. 
In contrast, the ridge regression model has a precision of only 51.5%; the LSTM thus accurately predicts recessions without generating too many false alarms. The AUPRC values also highlight the superiority of the LSTM and GRU models over the others, with GRU achieving an AUPRC of 0.837 compared to 0.642 for the FFN and 0.529 for ridge regression. Davis and Goadrich (2006) argue that precision-recall curves give a more informative picture of an algorithm's performance when dealing with highly imbalanced datasets. \begin{table} \begin{tabular}{l c c c c c c c c} \hline Method & AUROC & AUPRC & BAcc & MCC & \(F_{1}\)-Score & Sensitivity & Specificity & Precision \\ \hline \multicolumn{9}{l}{Panel A: nowcasting setup} \\ \hline Logit & 0.853 & 0.365 & 0.815 & 0.506 & 0.556 & 0.750 & 0.881 & 0.441 \\ Ridge & 0.920 & 0.529 & 0.875 & 0.609 & 0.642 & 0.850 & 0.900 & 0.515 \\ FFN & 0.917 & 0.642 & 0.906 & 0.668 & 0.692 & 0.900 & 0.912 & 0.563 \\ LSTM & 0.899 & 0.754 & 0.931 & 0.797 & 0.818 & 0.900 & 0.962 & 0.750 \\ GRU & 0.890 & 0.837 & 0.928 & 0.778 & 0.800 & 0.900 & 0.956 & 0.720 \\ \hline \multicolumn{9}{l}{Panel B: immediate-term setup} \\ \hline Logit & 0.671 & 0.243 & 0.668 & 0.265 & 0.357 & 0.500 & 0.837 & 0.278 \\ Ridge & 0.931 & 0.555 & 0.894 & 0.693 & 0.723 & 0.850 & 0.937 & 0.630 \\ FFN & 0.860 & 0.652 & 0.825 & 0.623 & 0.667 & 0.700 & 0.950 & 0.636 \\ LSTM & 0.847 & 0.514 & 0.890 & 0.607 & 0.632 & 0.900 & 0.881 & 0.487 \\ GRU & 0.896 & 0.831 & 0.944 & 0.887 & 0.900 & 0.900 & 0.987 & 0.900 \\ \hline \multicolumn{9}{l}{Panel C: short-term setup} \\ \hline Logit & 0.524 & 0.170 & 0.591 & 0.165 & 0.267 & 0.300 & 0.881 & 0.240 \\ Ridge & 0.928 & 0.522 & 0.856 & 0.601 & 0.640 & 0.800 & 0.912 & 0.533 \\ FFN & 0.872 & 0.800 & 0.841 & 0.732 & 0.757 & 0.700 & 0.981 & 0.824 \\ LSTM & 0.858 & 0.395 & 0.784 & 0.508 & 0.565 & 0.650 & 0.918 & 0.500 \\ GRU & 0.885 & 0.544 & 0.931 & 0.797 & 0.818 & 0.900 & 0.962 & 0.750 \\ \hline \multicolumn{9}{l}{Panel D: medium-term setup} \\ \hline Logit & 0.632 & 0.174 & 0.565 & 0.097 & 0.226 & 0.350 & 0.780 & 0.167 \\ Ridge & 0.910 & 0.423 & 0.818 & 0.517 & 0.566 & 0.750 & 0.887 & 0.455 \\ FFN & 0.795 & 0.496 & 0.731 & 0.510 & 0.556 & 0.500 & 0.962 & 0.625 \\ LSTM & 0.853 & 0.519 & 0.903 & 0.655 & 0.679 & 0.900 & 0.906 & 0.546 \\ GRU & 0.731 & 0.250 & 0.647 & 0.266 & 0.356 & 0.400 & 0.893 & 0.320 \\ \hline \multicolumn{9}{l}{Panel E: long-term setup} \\ \hline Logit & 0.737 & 0.250 & 0.668 & 0.297 & 0.383 & 0.450 & 0.887 & 0.333 \\ Ridge & 0.818 & 0.388 & 0.712 & 0.353 & 0.431 & 0.550 & 0.874 & 0.355 \\ FFN & 0.873 & 0.391 & 0.640 & 0.294 & 0.368 & 0.350 & 0.931 & 0.389 \\ LSTM & 0.862 & 0.371 & 0.725 & 0.396 & 0.468 & 0.550 & 0.899 & 0.407 \\ GRU & 0.909 & 0.674 & 0.837 & 0.707 & 0.737 & 0.700 & 0.975 & 0.778 \\ \hline \end{tabular} The table reports the performance evaluation measures of forecasts obtained by logistic regression models, ridge regression models, feed-forward (FFN), long short-term memory (LSTM), and gated recurrent unit (GRU) neural networks over different time horizons for the out-of-sample period, November 2006 to October 2021: Panel (A) presents the nowcasts. Panels (B), (C), (D), and (E) display the 1-month-ahead, 3-months-ahead, 6-months-ahead, and 12-months-ahead forecasts, respectively. \end{table} Table 6: Performance evaluation measures: A real-time assessment 
In fact, the ratios of the number of months in boom to the number of months in recession in the 194 real-time datasets range from 4.5 to 6.7 with a mean value of 6, which means that the period of booms is on average 6 times longer than that of recessions. The percentage of recessions in the datasets fluctuates around 0.2 across the vintages. Although not extremely skewed, the datasets are certainly unbalanced, and the difference in AUPRC values suggests that LSTM and GRU models handle recessions better in the presence of class imbalance. Moving to the immediate-term setup, logistic regression models perform significantly worse, with MCC and \(F_{1}\)-Score approximately 40% lower than before. Conversely, the other models show similar or slightly improved performance. Among them, the GRU exhibits the highest performance across various summarizing metrics such as balanced accuracy, MCC, and \(F_{1}\)-Score. In the short-term setup, although the ridge regression model demonstrates better predictive performance than LSTM, on average, neural network models, particularly GRU, outperform the other models. GRU maintains a 90% accuracy in classifying recession months with higher specificity and precision, resulting in higher values of balanced accuracy, MCC, and \(F_{1}\)-Score. In the medium-term setup, LSTM performs slightly better than the other models, while GRU performs poorly, and thus the difference between the two groups is minimal. However, LSTM achieves 90% sensitivity and over 90% specificity, while maintaining precision above 0.5. The overall average performance of the models is the lowest among the different forecast horizons. In the long-term setup, GRU surpasses the other models by a significant margin, with an \(F_{1}\)-Score of 0.737 compared to 0.431 for the ridge regression model. The GRU accurately predicts 70% of recession months, with 77.8% of its positive predictions being correct. Neural network models, especially LSTM and GRU, show a significant improvement in predictive performance compared to logistic regression models. Across all metrics considered, neural network models consistently outperform logistic regression models by a significant margin, except for the nowcasting setup. This superiority is evident in both pure performance metrics such as AUROC and AUPRC, as well as threshold-based point prediction performances. The disparity is particularly prominent in AUPRC, which is a more reliable indicator when dealing with imbalanced data, highlighting the neural network models' ability to handle class imbalance effectively. Even though the logistic regression models employ more advanced techniques to adjust thresholds in the presence of class imbalance, while the neural network models use a fixed threshold of 0.5, the neural network models consistently outperform the logistic regression models. The performance difference diminishes when comparing them to ridge regression models. Ridge regression models exhibit higher AUROC values compared to other model specifications, except for the long-term setup. However, in terms of AUPRC, neural networks again prove to be a better choice for capturing class imbalance. On average, neural network models perform better than ridge regression models in terms of these metrics. The gap widens further when considering only the two recurrent neural network models. Depending on the forecast horizon, either LSTM or GRU may lead the competition, with occasional substantial differences between them and the other models. 
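The threshold-based measures reported in Table 6 follow directly from the confusion-matrix definitions in Table 5. A sketch of how they can be computed from probability forecasts, using the fixed 0.5 cutpoint applied to the neural network models:

```python
# Point-prediction metrics from Table 5, computed from probability forecasts.
import numpy as np

def point_metrics(y_true, p_hat, c=0.5):
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(p_hat) >= c).astype(int)   # fixed cutpoint c = 0.5
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sens = tp / (tp + fn)                            # sensitivity (recall)
    spec = tn / (fp + tn)                            # specificity
    prec = tp / (tp + fp)                            # precision
    bacc = (sens + spec) / 2                         # balanced accuracy
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))  # Matthews correlation
    f1 = 2 / (1 / sens + 1 / prec)                   # F1-Score (harmonic mean)
    return {"Sensitivity": sens, "Specificity": spec, "Precision": prec,
            "BAcc": bacc, "MCC": mcc, "F1": f1}
```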
The ROC and PR curves of the models for the five forecast horizons are presented in Figures 6 and 7, respectively. Panels B and C of the ROC curves highlight the noticeable difference in forecast performance between the standard logistic regression model (represented by the black line) and the other models. In Panels D and E, although the lines are closer together, there is still some discernible gap between the models. Overall, the ridge regression, LSTM, and GRU models exhibit superior performance across the forecasting horizons.

When dealing with class imbalance, it is beneficial to complement ROC curves with PR curves, since the latter focus on precision. Precision refers to the fraction of true positives among all predicted positives. PR curves are specifically designed for detecting rare events and determine whether the classifier can accurately classify most instances of the minority class. Figure 7 demonstrates that, on average, the green, blue, and purple lines lie above the red line and, even more so, above the black line. This finding supports the argument that neural network models, compared to linear models, are more capable of handling class imbalance in the data.

Figure 8 displays the out-of-sample forecasts in probabilities from November 2006 to September 2021, accompanied by shaded areas representing the Great Recession and the Covid-19 recession. Across all panels of Figure 8, similar patterns emerge: the graphs start at high levels before the Great Recession, either remain high or increase during that period, decrease afterward to a range between 0 and 0.25 for an extended period, and then exhibit significant fluctuations around the Covid-19 recession. The models successfully indicate economic and financial downturns during the Great Recession, but an external event like the Covid-19 recession remains largely unpredictable. Depending on the forecast horizon, once the model learns from the impact of the Covid-19 pandemic on the economy, it struggles to interpret this abrupt change and generates alternating forecasts.

Figure 6: Receiver Operating Characteristic (ROC) curves. The figure illustrates ROC curves for the out-of-sample period from November 2006 to October 2021: Panel (A) shows the ROC curves for the nowcasting forecast horizon, comparing the logistic regression model, ridge regression model, FFN, LSTM, and GRU models. Panels (B), (C), (D), and (E) present the ROC curves for the immediate-term (1-month-ahead), short-term (3-months-ahead), medium-term (6-months-ahead), and long-term (12-months-ahead) forecast horizons, respectively.

Figure 7: Precision Recall (PR) curves. The figure depicts PR curves for the out-of-sample period from November 2006 to October 2021: Panel (A) shows the PR curves for the nowcasting forecast horizon, comparing the logistic regression model, ridge regression model, FFN, LSTM, and GRU models. Panels (B), (C), (D), and (E) present the PR curves for the immediate-term (1-month-ahead), short-term (3-months-ahead), medium-term (6-months-ahead), and long-term (12-months-ahead) forecast horizons, respectively.

Figure 8: Out-of-sample recession probabilities. The figure presents out-of-sample recession probabilities for the period from November 2006 to October 2021: Panel (A) displays the predicted recession probabilities for the nowcasting forecast horizon, comparing the logistic regression model, ridge regression model, FFN, LSTM, and GRU models. Panels (B), (C), (D), and (E) show the predicted recession probabilities for the immediate-term (1-month-ahead), short-term (3-months-ahead), medium-term (6-months-ahead), and long-term (12-months-ahead) forecast horizons, respectively. The grey shaded areas depict NBER recession months.
## 5 Feature importance

To gain a better understanding of the underlying factors driving the results, it is important to identify the main variables, or features, that have the most significant impact on the output. While linear models can utilize estimated coefficients or marginal effects to measure feature importance, more complex models like neural networks require different approaches. Molnar (2020) provides a comprehensive list of methods that enable the interpretation of complex machine learning models.

One such method is Local Interpretable Model-Agnostic Explanations (LIME), which suggests the use of local models to estimate effects. For an instance of interest, LIME constructs a new dataset by perturbing samples feature-wise, drawing from a normal distribution with the sample mean and sample standard deviation, and obtains the corresponding predictions from the black box model. This modified dataset is then used to train an interpretable model, such as Lasso or a decision tree, which is weighted by the proximity of the sampled instances to the instance of interest. The interpretable model serves as an approximation of the black box model's predictions at a local level, even though it need not be a good global approximation.

Another solution stems from cooperative game theory, particularly the Shapley value introduced by Shapley (1953). The Shapley value method involves assigning payouts to players based on their contribution to the overall payout. This cooperative framework resembles a game where players form coalitions and receive profits based on their cooperation. In the context of model predictions and their interpretability, the game pertains to the prediction task, and the payout represents the difference between the actual prediction for that data point and the average prediction across all data points. The players correspond to the feature values of the data point, working together to achieve the payout, which signifies predicting a specific value.

To calculate the Shapley value for a specific feature value, all possible coalitions of feature values, excluding the feature of interest, are formed for each data point. The values of features outside a coalition are substituted with random values of those features from the data to generate a prediction. The predictions are computed for each coalition, both with and without the feature value of interest. The difference between these predictions represents the marginal contribution of the feature value for that coalition. This computation is repeated for all possible coalitions, and a weighted average of the marginal contributions across all coalitions yields the Shapley value for that feature value. Finally, the Shapley values for a feature can be averaged across data points to assess the relative importance of features in comparison to each other. The Shapley value stands out as the sole explanation method supported by a robust theory that satisfies the axioms of efficiency, symmetry, dummy, and additivity, allowing for a fair distribution of the contributions among the features.
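To make the procedure concrete, the following minimal sketch computes exact Shapley values for a single instance by enumerating every coalition and substituting absent features with random background draws, as described above. The function and array names are placeholders, `predict` is assumed to return a 1-D array of model outputs (e.g., recession probabilities), and the exponential enumeration is feasible only for a handful of features; libraries such as SHAP approximate this computation instead.

```python
import itertools
import math
import numpy as np

def shapley_values(predict, x, X_background, n_draws=50, seed=0):
    """Exact Shapley values for one instance x (1-D array of features)."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def value(coalition):
        # Expected prediction when the features in `coalition` are fixed to x
        # and all remaining features are replaced by random background draws.
        X = X_background[rng.integers(0, len(X_background), n_draws)].copy()
        cols = list(coalition)
        if cols:
            X[:, cols] = x[cols]
        return predict(X).mean()

    phi = np.zeros(n)
    for j in range(n):
        rest = [k for k in range(n) if k != j]
        for size in range(n):
            for S in itertools.combinations(rest, size):
                # Shapley weight of a coalition of this size.
                w = math.factorial(size) * math.factorial(n - size - 1) / math.factorial(n)
                phi[j] += w * (value(S + (j,)) - value(S))
    return phi
```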
Conversely, methods like LIME rely on the assumption of locally linear behavior of the machine learning model, but lack a theoretical justification for why this approach is effective. Lundberg and Lee (2017) propose an alternative estimation method, Shapley additive explanations (SHAP), which uses a kernel-based estimation approach for Shapley values (KernelSHAP). SHAP introduces an innovative aspect by presenting the Shapley value explanation in the form of an additive feature attribution method, which can be viewed as a linear model. This perspective establishes a connection between LIME and Shapley values. In this context, SHAP defines the explanation model as follows: \[f(c^{\prime})=\alpha_{0}+\sum_{j=1}^{K}\alpha_{j}c^{\prime}_{j}\] In this equation, the explanation model is denoted as \(f\), the coalition vector as \(c^{\prime}\in\{0,1\}^{K}\), the maximum coalition size as \(K\), and the Shapley value for a feature \(j\) as \(\alpha_{j}\). The coalition vector \(c^{\prime}\) indicates the presence or absence of feature values, with a value of 1 representing presence and 0 indicating absence.

KernelSHAP estimates the contributions of individual feature values to the predictions by first generating random coalitions \(c^{\prime}\in\{0,1\}^{K}\), each obtained by flipping a coin \(K\) times to produce a sequence of 0's and 1's. Each sampled coalition vector is then used as a data point for a regression model. In this model, the target is the prediction for the coalition, whereby in case of \(c^{\prime}_{k}=0\) the absent feature value is substituted with random feature values from the data. This process is repeated for each data point. Then Shapley-compliant weights are computed according to the SHAP kernel proposed by Lundberg and Lee (2017). Finally, we fit a weighted linear regression model on the modified data, and the estimated coefficients of the model are the Shapley values.

The main distinction from LIME lies in how instances are weighted in the regression model. In LIME, the weighting is determined by the proximity of instances to the original instance: the closer an instance is, the higher its weight. SHAP, on the other hand, assigns weights to sampled instances based on the weight they would receive in the Shapley value estimation. Small coalitions (with few 1's) and large coalitions (with many 1's) receive the highest weights. The underlying intuition is that we learn the most about individual features when we can study their effects in isolation. For a coalition consisting of a single feature, we can observe the isolated main effect of that feature on the prediction. When a coalition includes all features except one, we can learn about the total effect of that particular feature, including both the main effect and feature interactions. However, if a coalition comprises half the features, it provides limited insight into the contribution of an individual feature, owing to the numerous possible coalitions containing half of the features.

The SHAP values for the logistic regression models are computed by the KernelSHAP method using KernelExplainer from the SHAP package in Python, whereas the SHAP values for the deep learning models are approximated by an enhanced version of the DeepLIFT (Deep Learning Important FeaTures) algorithm introduced by Shrikumar et al. (2017), using DeepExplainer from the same package.
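A minimal sketch of this workflow with the SHAP package is shown below; the model and data names (`logit_model`, `gru_model`, `X_train`, and the tensor variants) are placeholders for the fitted models and vintages used in this paper, and the DeepExplainer call assumes the recurrent networks consume inputs shaped as (samples, time steps, features).

```python
import numpy as np
import shap

# Kernel SHAP for the (ridge) logistic regressions: model-agnostic, requiring
# only a prediction function and a background sample of the training data.
background = shap.sample(X_train, 100)
kernel_explainer = shap.KernelExplainer(logit_model.predict_proba, background)
shap_values_logit = kernel_explainer.shap_values(X_test)

# Deep SHAP (DeepLIFT-based) for the neural networks: contributions are
# backpropagated from the output to the inputs in a single pass.
deep_explainer = shap.DeepExplainer(gru_model, X_train_tensor)
shap_values_gru = deep_explainer.shap_values(X_test_tensor)

# Global importance: average absolute SHAP values over data points and the
# twelve time steps, yielding one score per feature (as plotted in Figure 9).
sv = shap_values_gru[0] if isinstance(shap_values_gru, list) else shap_values_gru
global_importance = np.abs(sv).mean(axis=(0, 1))
```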
DeepLIFT is a method used to analyze the output prediction of a neural network for a specific input. It achieves this by backpropagating the contributions of all neurons in the network to each feature of the input. DeepLIFT compares the activation of each neuron to its reference activation and assigns contribution scores based on the difference between them. Unlike other approaches, DeepLIFT has the ability to uncover dependencies that may otherwise be overlooked, and it can consider positive and negative contributions separately. Additionally, the scores can be computed efficiently in a single backward pass.

The SHAP method generates a matrix of Shapley values that contains one row per data instance and twelve columns per feature, one for each time step. These Shapley values can be used to create global explanations by averaging the absolute Shapley values per feature across the twelve time steps and data points. This process is repeated for the entire out-of-sample period. The results for the GRUs are presented in Figure 9, which depicts the distribution of average absolute SHAP values across features. The features are ranked based on their medians, providing insights into their importance for different forecasting horizons.

Figure 9: SHAP values of the predictors: GRU

Figure 10: SHAP values of the predictors: ridge logistic regression. The figure shows the boxplots of the absolute average SHAP values of the predictors in descending order based on the ridge models: Panels (A) to (E) display absolute average SHAP values of the predictors for the nowcasting, immediate-term (1-month-ahead), short-term (3-months-ahead), medium-term (6-months-ahead), and long-term (12-months-ahead) forecast horizons, respectively.

The ranking of features may vary over time, but it offers an initial understanding of the key drivers influencing the prediction results for each forecast horizon. Depending on the forecasting horizon, variables associated with financial conditions, such as the S&P 500 index and the term spread, and macroeconomic variables related to GDP, inflation, and the housing market, which are commonly regarded as important recession indicators, consistently rank high (Estrella and Mishkin (1998)). These findings provide valuable insights into the black box and suggest that neural networks effectively capture the effects of significant recession indicators.

Figure 10 presents the SHAP results for the ridge logistic regression models. There are three major differences, among others, between the results in Figures 9 and 10. Firstly, the order of variables differs for each forecasting horizon between the two types of models. While some variables appear in the leading groups for both models, their specific order varies significantly. For instance, the money supply M2 is influential in the ridge model but less prominent in the GRU model. Spearman's rank correlations between the variable series for the two model types range from -0.24 to 0.10, indicating weak correlation. This suggests that GRU and ridge models assign different weights to the variables. This difference could be attributed to the neural network's ability to capture nonlinear relationships, allowing for varied weighting of the variables based on different patterns. Secondly, the SHAP values of the GRU models exhibit a more even distribution among the variables compared to the ridge model. This is reflected in the smaller variation in medians between the variables. The distinction becomes particularly pronounced in the short-term setting (up to 3 months), where the ridge model heavily depends on the S&P 500 index.
Additionally, the ridge model exhibits numerous outliers in the SHAP values, indicating its high sensitivity to changes in macroeconomic and financial conditions in the SHAP value estimation. By contrast, this highlights the GRU model as a more robust modelling framework. Finally, the within variation of the GRU-based SHAP values for some variables is significantly larger than that of the others. Certain variables demonstrate little change in their SHAP values, while others exhibit significant variation. This phenomenon is less prominent in the outcomes of the ridge model, whose variances are more uniformly spread, evident in the roughly equal sizes of the boxes across variables. This could be viewed as another indication of the GRU's more resilient modeling framework. The GRU is capable of adapting to shifts in economic conditions, adjusting the weights of specific variables accordingly. In contrast, the ridge model has relatively fixed variable orders, with limited flexibility for variation.

Figure 11: Marginal effects of the predictors: ridge logistic regression

Figure 12: LIME values of the predictors: GRU. The figure presents the boxplots of the GRU-based absolute average LIME values of the predictors in descending order: Panels (A) to (E) display absolute average LIME values of the predictors for the nowcasting, immediate-term (1-month-ahead), short-term (3-months-ahead), medium-term (6-months-ahead), and long-term (12-months-ahead) forecast horizons, respectively.

The marginal effect analysis is another option for evaluating the feature importance of linear models. Since deep learning models cannot be interpreted in terms of marginal effects, I compute the average marginal effects of the ridge model for the out-of-sample period and plot them in Figure 11. The Spearman's rank correlations between the variable series for the ridge model, evaluated using either SHAP values or marginal effects, range from 0.22 to 0.51. The magnitudes of the correlations are not large enough to conclude that both metrics coincide, but the previous findings, especially the first one about the leading recession indicators, also apply to the ridge model evaluated by the marginal effects. The averaged LIME values in Figure 12, which may be regarded as the marginal-effect counterpart for the GRU model, reaffirm the main findings discussed above. The only minor distinction is that they place greater emphasis on the term spread as the comprehensive recession indicator, in both short-term and long-term contexts.

The SHAP method also allows for a detailed analysis of the impact of a single variable on the predictions. In order to anticipate recessions as early as possible, I focus on the long-term forecasting horizon (12 months) and consider the term spread as the most relevant and influential variable to examine its effects on recession predictions for the GRU model.

Figure 13: Dependence plot: Term Spread

Figure 13 presents the dependence plot of the term spread using the latest available data from October 2021. The x-axis represents the standardized value of the feature (term spread), while the y-axis represents the corresponding SHAP value. Notably, there is a cutoff point around 0.2, above which a higher value of the term spread has a negative impact on the predicted recession probability, while a value below the cutoff positively influences the predicted probability. Additionally, the farther the value deviates from the cutoff, the stronger the effect of the term spread on the predicted probability becomes. This corroborates the traditional interpretation of the term spread in relation to the probability of a recession according to Estrella and Mishkin (1996) and indicates that advanced neural network models like the GRU can also uncover and utilize this relationship.
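Such a dependence plot can be reproduced, under the same naming assumptions as the SHAP sketch above (a hypothetical column index `TERM_SPREAD`, the 3-D array `sv` of SHAP values, and standardized test inputs `X_test_std`, all shaped as samples x time steps x features), by plotting each observation's standardized term spread against its SHAP value:

```python
import matplotlib.pyplot as plt

# Average the feature and its SHAP values over the twelve input time steps
# so each observation contributes one point to the plot.
x_vals = X_test_std[:, :, TERM_SPREAD].mean(axis=1)
y_vals = sv[:, :, TERM_SPREAD].mean(axis=1)

plt.scatter(x_vals, y_vals, s=12)
plt.axhline(0.0, linewidth=0.8)   # SHAP value of zero: no contribution
plt.xlabel("Term spread (standardized)")
plt.ylabel("SHAP value")
plt.show()
```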
## 6 Conclusion

This research examines the real-time predictability of neural network models, compared to linear models, with respect to the two most recent US recessions, the Great Recession and the Covid-19 recession. Three different neural network models are trained and updated throughout the study: a standard feed-forward neural network model, as well as two recurrent neural network models (LSTM and GRU) designed to capture temporal dependencies in time series data. The performance of these models is evaluated using out-of-sample forecasts and compared to standard and ridge logistic regression models. Additionally, the SHAP method is utilized to rank the predictors based on their importance for each forecast horizon, providing initial insights into the most influential predictors. The results are then compared to SHAP results obtained from the ridge logistic regression model. The main findings are validated using both the LIME approach and the marginal effect method. For an in-depth analysis, the most influential variable for the long-term forecast horizon, the term spread, is chosen to investigate its impact on the recession probability.

This paper makes two main contributions. Firstly, it focuses on LSTM and GRU, specialized recurrent neural network models that address issues like exploding and vanishing gradients in standard recurrent neural networks. The performance of these models is compared with simple feed-forward neural networks, which face the challenge of specifying temporal dependence in advance, and with traditional linear models in the context of predicting recessions. Secondly, the paper employs the SHAP method to assess the importance of features in the GRU for different forecast horizons.

The three main findings are as follows. Firstly, LSTM and GRU demonstrate strong out-of-sample performance in recession forecasting, particularly excelling in long-term predictions across the five forecasting horizons based on various statistical performance metrics. Secondly, there are differences in how GRU and ridge logistic regression models evaluate the importance of predictors. The variable order differs for each forecasting horizon between the GRU and ridge models. While some variables are significant in both models, their specific ranking varies notably. This suggests that the GRU and ridge models assign different weights to variables, potentially due to the neural network's capacity to capture nonlinear relationships and assign varied weights based on distinct patterns. Furthermore, the SHAP values of the GRU models show a more balanced distribution among variables compared to the ridge model, evident in the smaller variation in medians between variables. The difference is particularly noticeable in the short-term scenario (up to 3 months), where the ridge model heavily relies on the S&P 500 index. Moreover, the ridge model displays numerous outliers in SHAP values, indicating high sensitivity to changes in macroeconomic and financial conditions; in contrast, this underscores the GRU model's robustness. The within variation of the GRU-based SHAP values for some variables is significantly larger than for others.
Certain variables show minimal change in SHAP values, while others exhibit considerable variation. This contrast is less pronounced in the ridge model outcomes, where variances are more evenly distributed, reflected in similar box sizes across variables. This suggests that the GRU has a more adaptable modeling framework, capable of adjusting variable weights in response to shifts in economic conditions. Conversely, the ridge model has relatively fixed variable orders, with limited flexibility for variation. Lastly, although the primary predictors for GRU and ridge logistic regression models show slight differences, key indicators such as the S&P 500 index, real GDP, and private residential fixed investment consistently play a significant role in short-term predictions (up to 3 months). For longer-term forecasts (6 months or more), the term spread and producer price index become more prominent. These results are supported by LIME and marginal effects, respectively.
2307.05551
Graph Neural Networks as an Enabler of Terahertz-based Flow-guided Nanoscale Localization over Highly Erroneous Raw Data
Contemporary research advances in nanotechnology and material science are rooted in the emergence of nanodevices as a versatile tool that harmonizes sensing, computing, wireless communication, data storage, and energy harvesting. These devices offer novel pathways for disease diagnostics, treatment, and monitoring within the bloodstreams. Ensuring precise localization of events of diagnostic interest, which underpins the concept of flow-guided in-body nanoscale localization, would provide an added diagnostic value to the detected events. Raw data generated by the nanodevices is pivotal for this localization and consist of an event detection indicator and the time elapsed since the last passage of a nanodevice through the heart. The energy constraints of the nanodevices lead to intermittent operation and unreliable communication, intrinsically affecting this data. This posits a need for comprehensively modelling the features of this data. These imperfections also have profound implications for the viability of existing flow-guided localization approaches, which are ill-prepared to address the intricacies of the environment. Our first contribution lies in an analytical model of raw data for flow-guided localization, dissecting how communication and energy capabilities influence the nanodevices' data output. This model acts as a vital bridge, reconciling idealized assumptions with practical challenges of flow-guided localization. Toward addressing these practical challenges, we also present an integration of Graph Neural Networks (GNNs) into the flow-guided localization paradigm. GNNs excel in capturing complex dynamic interactions inherent to the localization of events sensed by the nanodevices. Our results highlight the potential of GNNs not only to enhance localization accuracy but also extend coverage to encompass the entire bloodstream.
Gerard Calvo Bartra, Filip Lemic, Guillem Pascual, Aina Pérez Rodas, Jakob Struye, Carmen Delgado, Xavier Costa Pérez
2023-07-09T09:08:38Z
http://arxiv.org/abs/2307.05551v4
# Graph Neural Network-enabled Terahertz-based Flow-guided Nanoscale Localization

###### Abstract

Scientific advancements in nanotechnology and advanced materials are paving the way toward nanoscale devices for in-body precision medicine, comprising integrated sensing, computing, communication, and data and energy storage capabilities. In the human cardiovascular system, such devices are envisioned to be passively flowing and continuously sensing for detecting events of diagnostic interest. The diagnostic value of detecting such events can be enhanced by assigning to them their physical locations (e.g., body region), which is the main proposition of flow-guided localization. Current flow-guided localization approaches suffer from low localization accuracy and are, by design, unable to localize events within the entire cardiovascular system. Toward addressing this issue, we propose the utilization of Graph Neural Networks (GNNs), and demonstrate localization accuracy and coverage enhancements of our proposal over the existing State of the Art (SotA) approaches. Based on our evaluation, we provide several design guidelines for GNN-enabled flow-guided localization.

Graph Neural Network, Terahertz Nanocommunication, Flow-guided Localization, Precision Medicine.

## I Introduction

Advances in nanotechnology are heralding the development of nanoscale devices that combine sensing, computing, and data and energy storage capabilities [1]. These devices hold great promise for revolutionizing precision medicine applications [2]. Some of these applications involve deploying nanodevices in the human cardiovascular system, where they need to be comparable in size to the red blood cells (i.e., less than 5 microns). Due to their small size, nanodevices will rely on harvesting environmental energy, such as from heartbeats or through ultrasound, using nanoscale harvesting components like Zinc-Oxide (ZnO) nanowires [1], and, as a result, they will be passively flowing within the cardiovascular system. Recent discoveries in advanced materials, in particular graphene and its derivatives [3], have opened up possibilities for wireless nanoscale communication in the Terahertz (THz) frequencies (i.e., 0.1-10 THz) [4]. Wireless communication capabilities will enable two-way interaction between nanodevices and the external world [5]. Nanodevices integrated with communication capabilities enable applications such as oxygen sensing in the cardiovascular system (a biomarker for cancer diagnosis) and targeted drug delivery for cancer treatment. Moreover, communication-enabled nanodevices facilitate flow-guided localization in the patients' cardiovascular systems [4], offering benefits such as non-invasiveness, early and accurate diagnostics, and cost reduction [6, 7, 8].

Existing evaluations of flow-guided localization approaches, particularly those in [6, 7], have taken a simplified approach, mainly focusing on nanodevice mobility. Moreover, the authors in [8] conducted a limited evaluation, examining the number of nanodevices required for localizing a nanodevice that has detected an event of interest at any location in the body through multi-hopping. Therefore, current evaluations only provide rough estimates due to their limited realism and subjective evaluation methodologies. This has been recognized in Lopez _et al._ [9], where the authors provide a simulator that enhances the realism of such assessments by considering multiple factors simultaneously.
This includes accounting for the mobility of the nanodevices, in-body nanoscale THz communication between the nanodevices and the external world, and various energy-related and technological constraints, such as pulse-based modulation, that impact the nanodevices' performance. The authors follow this up in [10] by providing a more comprehensive and realistic understanding of the performance of the State of the Art (SotA) flow-guided localization approaches, only to conclude that the solutions perform poorly in terms of both localization accuracy and coverage (i.e., they are able to provide meaningful accuracy of body region classification only for regions with blood speeds of 1 cm/sec). This is attributed to the unreliable nature of THz communication between the nanodevices and the outside world, and the inability of the solutions to deal with the high complexity and erroneous nature of the input data [10].

Toward addressing this issue, we propose the utilization of Graph Neural Networks (GNNs) [11] for enabling THz-based flow-guided nanoscale localization, as indicated in Figure 1. The intuition behind our proposal is that the GNN architecture, built upon the foundation of Heterogeneous Graph Transformers (HGTs), represents a flexible and robust solution for the flow-guided localization task. This is due to its capacity to model complex and dynamic interactions within a heterogeneous graph, which makes it a powerful tool for tackling the unique challenges presented by the localization of events sensed by the nanodevices in the cardiovascular system.

Fig. 1: Overview of GNN-based flow-guided nanoscale localization

Our results, derived using the simulator for evaluation of flow-guided localization from [9], show that the proposed GNN approach outperforms existing SotA proposals (cf., [6, 10]) along the lines of enhanced coverage (i.e., an event can be localized within the entire cardiovascular system) and reduced localization errors. Our results also indicate the limitations in the region classification accuracy of both the GNN-based and baseline approaches, resulting from the imbalanced and erroneous raw data stemming from the scenario and motivating the need for alternative propositions for accuracy enhancements (e.g., introducing additional on-body anchors to support localization).

## II GNN-enabled THz-based Flow-guided Nanoscale Localization

### _Flow-guided THz-based Localization Fundamentals_

Flow-guided localization aims to detect and locate a target event using nanodevices, without requiring the nanodevices to determine their own location. The concept introduced in [8] falls under this category, although [6] and [7] are the notable representatives of this localization approach. In these studies, Machine Learning (ML) models are used to differentiate the regions traversed by each nanodevice during a single circulation through the cardiovascular system. In [7], the authors achieve this by tracking the distances covered by a nanodevice using an Inertial Measurement Unit (IMU). However, this poses challenges in terms of the limited resources for storing and processing IMU data at the nanodevice level, as well as the accuracy of IMU readings affected by the blood's vortex flow. In contrast, [6] addresses these challenges by tracking the time taken for each circulation. The captured distance or time information is then transmitted to a beaconing anchor near the heart using short-range THz-based backscattering.
These localization approaches, unlike [8], are not specifically designed for precise localization of the target. Instead, they focus on detecting the body region through which the nanodevice has passed. Increasing the number of circulations the nanodevices make through the cardiovascular system can enhance the accuracy and reliability of region detection. However, this would result in higher energy consumption for the localization process. Therefore, performance metrics such as point and region accuracies and reliability should be evaluated in relation to the application-specific delay allowed for event localization, as outlined in [9].

### _GNN-enabled THz-based Flow-guided Localization_

When addressing the problem of localization in the cardiovascular system, we are dealing with a structured and highly connected environment. In other words, the cardiovascular system can be represented as a set of edges associated with node coordinates, where each node corresponds to an organ, limb, or vessel. Each edge has a specific length and flow velocity, defining the constraints for the movement of the nanodevices. GNNs allow for exploiting the relational information present in the graph, which in turn is envisioned to facilitate accurate localization of the events in the cardiovascular system. One primary objective is to develop a GNN model capable of propagating the information from the anchors through the body regions to estimate the event's location.

The GNN proposed in this work leverages two types of nodes: region nodes and anchors. The region nodes hold the information of the regions, such as the region type (organ/limb/head, vein, or artery), length, and blood speed. The anchors carry the information on the circulation times of the positive bits received from the nanodevices for the localization process. Because this data can be of variable length (the anchor receives an undefined number of positive event bits), we propose to create a parameterizable distribution of the bivariate data (i.e., pairs of loop elapsed time and event bit) for each anchor. We model the circulation time for positive event bits as a Gaussian Mixture Model (GMM), where we intuitively expect a Gaussian cluster for each time a nanodevice fails to communicate with the anchor and runs an additional loop in the cardiovascular system. With this approach, the distribution parameters derived using the GMM serve as features for the anchors, alongside the average number of positive bits received per minute, thus providing a fixed-length feature set for each anchor.
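A minimal sketch of this feature construction is given below, using scikit-learn's GaussianMixture; the function name, the three-component mixture, and the sorting of clusters by mean are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def anchor_features(circulation_times, window_minutes, n_components=3):
    """Fixed-length anchor features from a variable-length list of
    circulation times attached to positively reported event bits."""
    t = np.asarray(circulation_times, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(t)
    order = np.argsort(gmm.means_.ravel())         # sort clusters for stability
    rate = len(t) / window_minutes                 # positive bits per minute
    return np.concatenate([
        gmm.means_.ravel()[order],                 # one mean per loop count
        np.sqrt(gmm.covariances_.ravel()[order]),  # per-cluster std. dev.
        gmm.weights_[order],                       # cluster proportions
        [rate],
    ])
```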
### _GNN Architecture_

The GNN architecture employs a comprehensive design paradigm, aiming to leverage the inherent structure of the graph representing the cardiovascular system. HGTs form the basis of our architecture, as they provide the versatility to handle the system's multiple types of nodes, i.e., region nodes and anchors. The GNN starts its operations by generating unique embeddings for each node type, as shown in Figure 2. It applies a linear transformation and a non-linear activation function (i.e., ReLU) to the initial node features, effectively transforming the information into a latent space of higher dimensionality. This initial transformation significantly aids the propagation and processing of information in the later stages of the model.

Fig. 2: GNN-based flow-guided nanoscale localization design

The main body of the architecture consists of three principal components: an initial set of convolution layers, a suite of HGT layers, and a concluding collection of convolution layers, where each component contributes to the model's efficacy and adaptability. The primary purpose of the initial convolution layers is to introduce non-linearity and adaptability into the model. These layers use Graph Attention Networks (GATs), which aggregate information from each region node's neighbors based on their relative importance, a measure learned during training. This allows the model to effectively consider complex weighted relationships between the nodes, making it highly sensitive to the intricate dynamics of the nanodevices' propagation.

Following the initial layers, the HGT layers are crucial for dealing with the complex interactions between the different types of nodes. These layers incorporate the information from the anchors into the region nodes, enabling the model to capture the dynamic propagation of nanodevices through different body regions. By dynamically adjusting the importance of different nodes based on the information they are carrying, the HGT layers provide a nuanced representation that encapsulates the spatial and temporal aspects of the nanodevices' propagation. Our model links the anchors to all region nodes, enabling efficient communication between them and eliminating the need for multiple stacked message-passing layers to ensure that information from the anchors reaches all the region nodes.

After the HGT layers have processed the information, a set of final convolution layers is applied to refine the region nodes' representations. These layers, similar to the initial convolution layers, utilize GATs to amplify the refined information obtained from the HGT layers. The architecture concludes with a final linear layer applied to the refined representations of the region nodes. The output of this layer undergoes a sigmoid activation function to produce the final predictions, indicating the likelihood of an event occurring in each region.
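The following is a loose PyTorch Geometric sketch of this pipeline, not the authors' implementation: per-type embeddings, GAT convolutions over the region-region (vessel) edges, an HGT layer mixing anchor and region nodes, and a sigmoid readout per region. The layer sizes, the use of a single HGT layer, and the edge-type names are illustrative assumptions.

```python
import torch
from torch_geometric.nn import GATConv, HGTConv, Linear

class FlowGuidedGNN(torch.nn.Module):
    def __init__(self, metadata, hidden=64, heads=2):
        super().__init__()
        # Assumed metadata: (['region', 'anchor'],
        #                    [('region', 'to', 'region'),
        #                     ('anchor', 'to', 'region')])
        self.embed = torch.nn.ModuleDict(
            {t: Linear(-1, hidden) for t in metadata[0]})
        self.gat_in = GATConv(-1, hidden, heads=heads, concat=False)
        self.hgt = HGTConv(hidden, hidden, metadata, heads=heads)
        self.gat_out = GATConv(-1, hidden, heads=heads, concat=False)
        self.readout = Linear(hidden, 1)

    def forward(self, x_dict, edge_index_dict):
        rr = ('region', 'to', 'region')
        # Per-type embedding into a shared latent space.
        h = {t: self.embed[t](x).relu() for t, x in x_dict.items()}
        # Attention over the vessel graph.
        h['region'] = self.gat_in(h['region'], edge_index_dict[rr]).relu()
        # Heterogeneous mixing; anchors are connected to every region node.
        h = self.hgt(h, edge_index_dict)
        # Refinement and per-region event likelihood.
        h['region'] = self.gat_out(h['region'], edge_index_dict[rr]).relu()
        return torch.sigmoid(self.readout(h['region'])).squeeze(-1)
```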
### _Model Hyperparameterization and Training_

To optimize the performance of flow-guided localization of nanodevices within the cardiovascular system, we trained the GNN model using an extensive hyperparameter tuning process and a robust training procedure. The model was trained on an exhaustive simulated training dataset in which the cardiovascular system is, for each iteration of a nanodevice within it, represented as a graph with 94 region nodes. The region nodes feature circulation times and event-indicating bits as input features and labels as the output feature. Moreover, the anchors are attributed with the same input features. The labels indicate the region in which the event was detected, which intuitively results in a highly unbalanced dataset with one positively detected label against 93 negative ones. Given this data imbalance, the training focused on minimizing the loss and optimizing the F1 score, which measures the balance between precision and recall and is especially useful for such imbalanced datasets. By optimizing for the F1 score, the model is encouraged to correctly predict the positive class, i.e., to correctly identify the regions where the event is located, which is the model's primary objective. We standardized the region features of the dataset for training robustness.

The model employs the Binary Cross-Entropy (BCE) loss function with sum reduction to aggregate the loss, ensuring the learning process considers all data points. The loss is weighed inversely proportionally to the class frequencies to account for the class imbalance. We use the Adam optimizer, while the hyperparameter optimization process includes the learning rate and weight decay parameters. Gradient clipping was applied to mitigate potential gradient explosion problems and ensure stable learning, which is standard practice for preventing the gradients from becoming so large that the model diverges during training.

The hyperparameter optimization used the Weights and Biases (W&B) platform's sweep functionality. The sweep was configured to utilize Bayesian optimization to maximize the F1 score on the validation set. The optimization process explored various hyperparameters, as indicated in Table I. This systematic exploration helped identify a combination of hyperparameters that best suit the data and the task at hand, resulting in a model capable of event localization.
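The described loss and optimization choices can be sketched as follows; the class weights, learning rate, weight decay, clipping norm, and the `train_graphs` iterable shown here are placeholders standing in for the values explored in the sweep.

```python
import torch

criterion = torch.nn.BCELoss(reduction='none')  # model outputs probabilities
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)

# Inverse class-frequency weights for 1 positive vs. 93 negative regions.
w_pos, w_neg = 93.0 / 94.0, 1.0 / 94.0

for x_dict, edge_index_dict, y in train_graphs:   # y: 94 binary region labels
    optimizer.zero_grad()
    probs = model(x_dict, edge_index_dict)
    per_region = criterion(probs, y.float())
    weights = torch.where(y > 0, w_pos, w_neg)
    loss = (weights * per_region).sum()           # sum reduction
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```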
## III Evaluation Setup and Results

### _Evaluation Setup_

We utilize the simulator from [9] for assessing the performance of the proposed GNN-based flow-guided nanoscale localization approach. In the simulator, the nanodevices are assumed to have capacitors to store energy and ZnO nanowires to collect it. The charging of the capacitors is modeled as an exponential process that takes into account the rate and interval of energy harvesting, as well as the storage capacity of the capacitors. The nanodevices exhibit intermittent behavior due to constraints related to energy harvesting and storage. This behavior is represented by a _Turn ON_ threshold: a nanodevice turns on if its current energy level exceeds the threshold. Once the energy is depleted, the nanodevice turns off until its energy again increases above the threshold. When the nanodevices are turned on, they perform sensing tasks at a given frequency or granularity. Each task consumes a constant amount of energy, meaning that more frequent tasks require higher energy consumption at the nanodevice level. The location of the event to be detected is assumed to be pre-programmed by the experimenter. A nanodevice is considered to detect an event if it is turned on and its location at the time of the sensing task execution is close to the location of the event based on a predefined threshold (i.e., 1 cm).

To simulate the movement of the nanodevices, the utilized simulator integrates BloodVoyagerS [12]. BloodVoyagerS offers a simplified representation of the cardiovascular system, consisting of 94 vessels, organs, and limbs. The coordinate system of the model is centered in the heart. All organs share the same spatial depth, with a reference thickness of 4 cm, resembling the depth of a kidney. Consequently, the z-coordinates of the nanodevices range from 2 to -2 cm. The simulator assumes that arteries and veins are situated anterior and posterior, respectively. Transitions between arteries and veins occur in the organs, limbs, and head. In the heart, blood flows from veins to arteries (i.e., from posterior to anterior). The flow rate is simulated based on the relationship between pressure difference and flow resistance. This results in average blood speeds of 20 cm/sec in the aorta and 10 cm/sec in the arteries (_Region type = 0_), and 2-4 cm/sec in veins (_Region type = 1_). Transitions between arteries and veins are simplified with a constant velocity of 1 cm/sec (_Region type = 2_).

In the THz-based communication between the anchor and the nanodevices, the anchor transmits beacons at a constant frequency and power. The nanodevices passively receive the beacons and actively send back responses, which consumes energy. The response packets from the nanodevices contain information about the time elapsed since their last passage through the heart and an event bit. These data points are then used by a flow-guided localization approach to determine the location of an event. Whenever a nanodevice passes through the heart, the time elapsed since the last passage is reset to avoid accumulating multiple circulation periods. The event bit is set to "1" if a target event is successfully detected and is reset in each passage through the heart. The simulation models the THz channel according to [13], i.e., by calculating the receive power for each pair of communicating devices and scheduling the reception of packets based on the corresponding propagation time. The channel model takes into account in-body path loss and Doppler effects. The path loss is determined by considering the attenuation and thickness of vessels, tissues, and skin. The Doppler effect is incorporated by evaluating the changes in the relative positions between the nanodevices and the anchor over time. The potential for collisions is modeled by calculating the Signal to Interference and Noise Ratio (SINR) and discarding a packet if its SINR falls below a predefined reception threshold, known as the receiver sensitivity. The main simulation parameters are the same as the ones in [10] (cf., Table 1), and are omitted here for brevity.

### _Evaluation Results_

The hyperparameter tuning of the proposed GNN model is depicted in Figure 3, which includes the cases in which the events might be located solely in the limbs, organs, and head, i.e., the regions in which the blood speeds are low, as well as the case accounting for the entire cardiovascular system. Note that only the best-performing subset of the considered hyperparameters has been depicted for clarity, and our depiction includes cases in which we were optimizing for the F1 score and the BCE loss. We currently consider solely single-anchor systems, which are, by design, unable to distinguish between the left and right body sides due to the comparable circulation times of the corresponding regions. Hence, as the hyperparameter tuning objective, we consider multiple numbers of targets as correct ones, as depicted on the x-axes of the graphs. This approach also provides us with an indication of how many body regions are classified wrongly on average. Considering the best-performing set of hyperparameters (as shown in Figure 3(a)), the correct region is always within the 6 most likely regions outputted by the network. This provides a primer for constraining diagnostic searches, exploratory surgeries, and similar medical procedures. In the remainder of our evaluation, we utilize the model indicated with the dashed green line in Figure 3 due to its close-to-optimal performance in both scenarios considered in Figure 3 for the number of correct targets equaling 1.

In Figure 4, we compare the performance of the proposed GNN model with the SotA baseline from the literature, based on [6], [10]. The baseline is a Neural Network (NN) solution that implements three fully-connected layers, with the PReLU activation function for the first two and log-softmax for the last.
The first two layers feature dropout for regularization, as well as batch normalization for stabilizing the learning process. The hidden layers' size is 512, and the model is trained to classify 25 classes. The approach utilizes the Negative Log-Likelihood loss due to its ability to handle unbalanced datasets, as well as the Adam optimizer due to its dynamic learning rate adaptation and its ability to operate with relatively simple fine-tuning of the hyperparameters. The comparison is carried out along a set of heterogeneous performance metrics characterizing the accuracy of flow-guided localization. Specifically, the point accuracy metric indicates the amplitude of localization errors and is derived as the Euclidean distance between the true location of the event and the estimated one, with the estimated one being modeled as the centroid of the estimated region.

Fig. 3: Hyperparameter tuning (see Table I for understanding the legend)

The point accuracy results are depicted in the regular box-plot fashion, indicating the distribution of such errors for 25 randomly located events, one in each of the 25 body regions with blood speeds of 1 cm/sec, as modeled by BloodVoyagerS. The region accuracy is defined as the percentage of correctly estimated regions. As visible in the figure, the proposed approach outperforms the baseline in terms of the point accuracy, regardless of the considered granularity. As an example, 18 minutes after the deployment of either solution and considering the sensing granularity of 1 sample per second, the point accuracy distribution of the GNN-based approach is bounded to less than 100 cm of error, while in the baseline a significant number of estimates feature errors bounded by 150 cm, representing an improvement of more than 30% over the baseline.

Furthermore, Figure 4 illustrates that, as the runtime duration increases, the accuracy of point and region estimation exhibits only a slight improvement or no improvement at all, regardless of the solution under consideration. For instance, the region estimation accuracy increases by approximately 20 to 25%. This behavior can be attributed to two key factors that significantly impact the performance of the solutions under consideration. The first factor is the general principle that ML models tend to enhance their performance when provided with a larger volume of raw data for making predictions. Naturally, a longer runtime results in a larger amount of raw input data for the solutions under evaluation, thereby benefiting the accuracy of estimation. However, it is crucial to consider the challenges associated with THz communication between the nanodevices and the anchor. These challenges include high in-body attenuation, the nanodevices' high mobility, and self-interference between different nanodevices attempting to communicate simultaneously with the anchor. Due to these obstacles, the communication becomes unreliable, causing instances where the anchor does not receive raw data from certain nanodevices at specific time points. More problematically, in such cases, the nanodevices do not reset their iteration times and event bits. Consequently, when the data is eventually reported to the anchor, the reported iteration times represent a combination of multiple iterations, while the event bit may be erroneous. In other words, the event was detected in one of the iterations but propagated through several iterations, some of which did not actually feature the event.
Moreover, the intermittent operation of the nanodevices, driven by energy harvesting, can lead to situations where a nanodevice misses detecting an event because it was turned off, even though it passed through the region of the cardiovascular system where the event occurred. Additional information regarding the characteristics of the raw data can be found in [9]. This outlined behavior indicates that, although increasing the amount of data input into the models should enhance the accuracy of estimation, the highly erroneous nature of the data counterbalances these improvements, resulting in a "flat" performance in terms of region detection and point accuracies for both solutions under consideration. In other words, these results indicate that neither of the considered approaches can fully deal with the complexity and erroneous nature of the raw data. Hence, improvements primarily along the line of introducing additional anchors will be required for optimizing the accuracy of flow-guided localization.

Fig. 4: Comparison with the current state of the art

Fig. 5: Coverage of GNN-based flow-guided nanoscale localization

Apart from the enhancement in the point accuracy, the proposed GNN-based approach is, by design, able to classify different regions throughout the cardiovascular system, including the ones in which the blood speeds are higher than 1 cm/sec, i.e., _Region type_ = {0, 1} in Figure 5. This is in contrast to the baseline, which yields meaningless (i.e., 0%) region classification accuracy for these region types. As visible in the figure, the GNN approach is still able to maintain meaningful accuracy levels for the regions in which the blood is faster than 1 cm/sec. As an example, in the region with the slowest blood speeds, the GNN-based approach achieves a classification accuracy of almost 40%, which reduces in the regions with increased blood speeds. The results also indicate that the event sampling granularity, i.e., the frequency of sensing that a nanodevice performs, has a significant effect on the accuracy of the proposed approach. Specifically, our results indicate that, in regions with fast blood speeds, the sampling should be done more frequently in order not to miss events, while this sampling frequency should be reduced when the blood is slower for accuracy optimization. In more general terms, further research is needed on assessing the effects of different system parameters, such as the number of nanodevices, sampling granularity, and number of anchors, for further optimizing the performance of the proposed approach. In addition, dynamic adaptation of certain system parameters based on the context of their operation might be a feasible option for further performance enhancements. For example, the sampling granularity might be dynamically adapted to the blood speeds, thereby maximizing the accuracy of localization while simultaneously minimizing the energy consumption at the nanodevice level. Nonetheless, the fact that meaningful accuracy has been observed indicates that the GNNs will eventually be able to reach high region classification accuracies for the entire cardiovascular system.

## IV Conclusion

In this paper, we have proposed a Graph Neural Network (GNN)-based model for Terahertz (THz)-supported flow-guided nanoscale localization.
We have shown that the model outperforms the existing State of the Art (SotA) approaches in terms of localization accuracy, and simultaneously extends the coverage of such localization to the entire cardiovascular system. However, we have also demonstrated that the localization accuracy is low, especially the accuracy of classifying the body regions that contain the sensed events of interest. Future work will focus on the integration of additional anchors for accuracy enhancement purposes and more granular region estimation. In that regard, the proposed GNN-based model will be extended with dynamic spatial modelling [14], which is envisioned to enable it to operate well for a changing number of anchors, supporting scenarios in which the users wear multiple anchors but eventually take some off (e.g., the ones on the wrists) without the need for excessive retraining. Moreover, the model's aspect of relating the nanodevice's circulation time with the circuit length and velocity will be extended with dynamic temporal modelling [14]. The utilization of dynamic temporal modelling is envisaged to make the model "body agnostic", i.e., by design adaptable to different physical conditions such as an accelerated pulse during exercise, adaptable region classification granularities, and different cardiovascular systems, potentially without retraining. Finally, we will aim at introducing heterogeneity [15] in the model in the sense of dynamically adjusting the number of output variables based on the features of the input data. By doing so, we hypothesize it might become capable of independently estimating the locations of multiple events simultaneously, paving the way toward its eventual deployment in the cardiovascular system. We envision this approach to also provide an added layer of interpretability for the model's predictions.
2306.04214
DualHGNN: A Dual Hypergraph Neural Network for Semi-Supervised Node Classification based on Multi-View Learning and Density Awareness
Graph-based semi-supervised node classification has been shown to become a state-of-the-art approach in many applications with high research value and significance. Most existing methods are only based on the original intrinsic or artificially established graph structure which may not accurately reflect the "true" correlation among data and are not optimal for semi-supervised node classification in the downstream graph neural networks. Besides, while existing graph-based methods mostly utilize the explicit graph structure, some implicit information, for example, the density information, can also provide latent information that can be further exploited. To address these limitations, this paper proposes the Dual Hypergraph Neural Network (DualHGNN), a new dual connection model integrating both hypergraph structure learning and hypergraph representation learning simultaneously in a unified architecture. The DualHGNN first leverages a multi-view hypergraph learning network to explore the optimal hypergraph structure from multiple views, constrained by a consistency loss proposed to improve its generalization. Then, DualHGNN employs a density-aware hypergraph attention network to explore the high-order semantic correlation among data points based on the density-aware attention mechanism. Extensive experiments are conducted in various benchmark datasets, and the results demonstrate the effectiveness of the proposed approach.
Jianpeng Liao, Jun Yan, Qian Tao
2023-06-07T07:40:04Z
http://arxiv.org/abs/2306.04214v1
# DualHGNN: A Dual Hypergraph Neural Network for Semi-Supervised Node Classification based on Multi-View Learning and Density Awareness

###### Abstract

Graph-based semi-supervised node classification has been shown to become a state-of-the-art approach in many applications with high research value and significance. Most existing methods are only based on the original intrinsic or artificially established graph structure, which may not accurately reflect the "true" correlation among data and is not optimal for semi-supervised node classification in the downstream graph neural networks. Besides, while existing graph-based methods mostly utilize the explicit graph structure, some implicit information, for example, the density information, can also provide latent information that can be further exploited. To address these limitations, this paper proposes the Dual Hypergraph Neural Network (DualHGNN), a new dual connection model integrating both hypergraph structure learning and hypergraph representation learning simultaneously in a unified architecture. The DualHGNN first leverages a multi-view hypergraph learning network to explore the optimal hypergraph structure from multiple views, constrained by a consistency loss proposed to improve its generalization. Then, DualHGNN employs a density-aware hypergraph attention network to explore the high-order semantic correlation among data points based on the density-aware attention mechanism. Extensive experiments are conducted on various benchmark datasets, and the results demonstrate the effectiveness of the proposed approach.

Hypergraph neural networks, Hypergraph learning, Density-aware attention, Node classification, Semi-supervised learning

## I Introduction

Over the last few years, Graph Neural Networks (GNNs) have attracted much attention because of their ability to effectively deal with graph-structured data and achieve impressive performance, and they have been widely used for many machine learning tasks, including computer vision [1], recommendation systems [2], neural machine translation [3], and others. Compared with traditional neural networks that encode each data point separately, GNNs can encode the graph structure of different input data through a graph message propagation mechanism, which allows them to obtain more information than encoding each data point in isolation.

Graph-based semi-supervised learning, which can exploit the connectivity relationship between small amounts of labeled samples and a relatively large number of unlabeled samples to improve the performance of deep neural networks, has been shown to be one of the most effective approaches for semi-supervised node classification. Graph-based semi-supervised node classification has seen applications in various fields, such as predicting the customer type of users in e-commerce [4], assigning scientific papers from a citation network into topics [5, 6], and credit card fraud detection [7].

To date, a large number of graph-based semi-supervised node classification methods have been proposed [8, 9, 6]. Most of these methods focused only on the pairwise connections among data. However, the data correlation in real practice could be beyond pairwise relationships and even more complicated. Under such circumstances, only exploring the pairwise connections and modeling them as a graph may lose the high-order semantic correlation among data. The traditional structure with simple graphs cannot fully formulate the data correlation and thus limits the application of GNN models [10].
To tackle this challenge, hypergraph neural networks (HGNNs) have been proposed, introducing hyperedges that can link any number of nodes to improve learning performance. Compared with the simple graph, the hyperedges allow HGNNs to more effectively represent the high-order semantic relationships among data [10, 11, 12]. This work also leverages the hypergraph to explore the high-order semantic correlation among data for semi-supervised node classification. In this study, we mainly focus on two challenges in graph-based semi-supervised node classification. First, we note that much of the success of graph and hypergraph neural networks is attributed to the graph-structured data offered to them. In general, the data we provide to GNNs either have a known intrinsic graph structure, such as citation networks, or come with a human-established graph that we construct, such as a \(k\)-nearest neighbor graph. However, we cannot guarantee that the original intrinsic or artificially established graph is optimal for semi-supervised node classification in the downstream graph neural networks. Besides, the original graph is usually constructed from the original feature space, in which the similarity between samples may not be accurately measured. In other words, the original graph may have redundant or missing edges, and thus may not accurately reflect the "true" correlation among data. Moreover, the human-established \(k\)-nearest neighbor graph is based on a single fixed similarity measure, which may not be suitable for accurately measuring the similarity between all samples. Accordingly, this calls for accurate modeling and learning techniques to obtain a suitable graph or hypergraph structure. Second, existing graph-based semi-supervised node classification methods mostly utilize only the explicit graph structure information [6, 8, 10]. One of the most challenging problems for semi-supervised learning is how to exploit the implicit information among data to improve model performance. Some implicit information, for example, the density information, has been demonstrated to provide important clues for semi-supervised node classification [1], yet it is rarely exploited in depth. Li _et al._[1] first exploited density information for graph-based deep semi-supervised visual recognition. Yet their approach explores only the graph-structured relationships among data, ignoring high-order semantic correlation. Inspired by this, we explore density information among data on the hypergraph structure to improve semi-supervised node classification accuracy in this work. To tackle these two challenges, we propose the Dual Hypergraph Neural Network (DualHGNN), a dual connection model containing two sub-networks that perform hypergraph structure learning and hypergraph representation learning for graph-based semi-supervised node classification. For the first challenge, DualHGNN adopts a multi-view hypergraph learning network to learn the hypergraph structure from multiple views. By adopting different learnable similarity measure functions on each view, we can measure the sample similarity more accurately. By introducing a consistency loss, DualHGNN can effectively improve the generalization ability of hypergraph learning.
For the second challenge, DualHGNN employs a density-aware hypergraph attention network to exploit density information on the hypergraph explicitly and improve semi-supervised node classification performance. We define a density rule for hypergraphs and construct a density-aware attention mechanism. Based on density-aware attention, DualHGNN can effectively improve hypergraph representation learning. In short, DualHGNN jointly optimizes the multi-view hypergraph learning network and the density-aware hypergraph attention network to learn the optimal hypergraph suitable for downstream graph-based semi-supervised node classification tasks. Meanwhile, based on the suitable hypergraph, we can improve the performance of the density-aware hypergraph attention network. As shown in the experiments, the combination of the two HGNNs effectively allows the proposed architecture to achieve higher classification performance. The main contributions of this work can be summarized as follows.

* A novel Dual Hypergraph Neural Network (DualHGNN) is proposed, integrating both hypergraph structure learning and hypergraph representation learning simultaneously in a unified network architecture for semi-supervised node classification.
* A new multi-view hypergraph learning network is proposed to learn an optimal hypergraph suitable for downstream semi-supervised node classification from multiple views with different learned similarity measure functions, constrained by a consistency loss to improve its generalization ability.
* The explicit density information of hypergraphs is leveraged to propose a density-aware hypergraph attention network. A density rule for hypergraphs is defined, and a density-aware attention mechanism is developed to effectively improve the performance of hypergraph representation learning.
* Extensive experiments have been conducted to demonstrate the effectiveness of the DualHGNN for semi-supervised node classification. The ablation study further proves the validity of the multi-view hypergraph learning and the density-aware attention mechanism.

## II Related Work

### _Graph Neural Networks_

The core idea of graph neural networks (GNNs) is graph message propagation [13], which can be divided into spectral-based approaches [6] and spatial-based approaches [8, 9]. Graph convolutional networks (GCNs) [6] performed label prediction based on graph neighborhood aggregation, which provided a novel idea for graph spectral information propagation. By adopting a self-attention layer, Velickovic _et al._ proposed graph attention networks (GATs) [8] to perform attention neighborhood aggregation. Wu _et al._[14] removed the nonlinear activation function and collapsed weight matrices from GCNs [6] and proposed SGC, a simplified GNN. Liu _et al._[15] proposed ElasticGNN by introducing \(L1\) and \(L2\) regularization and providing an elastic message passing scheme to enhance the local smoothness of the graph. Recently, Duan _et al._[16] proposed a dual cost-sensitive graph convolutional network (DCSGCN) to tackle the imbalanced graph learning problem. However, the simple graph structure may not fully formulate the high-order data correlation, for which hypergraphs can provide an effective solution.

### _Hypergraph Neural Networks_

A hypergraph is a generalization of graphs used to model the high-order semantic correlation among data. Shi _et al._[17] adopted a hypergraph learning process to optimize the high-order correlation among data.
Feng _et al._[10] proposed a hypergraph neural network (HGNN) to perform the node-edge-node transform through hyperedge convolution operations. Hypergraph attention networks (HGATs) [11] introduced the attention mechanism into hypergraph neural networks to encode the high-order data correlation. Jiang _et al._[12] integrated dynamic hypergraph construction and hypergraph convolution modules to propose dynamic hypergraph neural networks (DHGNN), to further improve hypergraph representation learning. Recently, many improved methods have been proposed, including hypergraph label propagation networks (HLPN) [18] and hypergraph convolution and hypergraph attention (HCHA) [19], among others. However, most of these studies focus only on hypergraph representation learning based on the original hypergraph, which may not accurately reflect the "true" data correlation and may not be optimal for the downstream HGNNs; this motivates the accurate learning of a suitable hypergraph structure to improve the performance of HGNNs.

### _Graph-based Semi-supervised Node Classification_

Graph-based semi-supervised learning methods are among the most effective approaches for semi-supervised node classification. Hamilton _et al._[9] proposed GraphSAGE, an inductive graph neural network extending graph data processing to large graphs. Gasteiger _et al._[20] proposed APPNP by introducing a personalized PageRank propagation scheme, achieving graph information propagation in a larger neighborhood. Yet these methods all neglect the learning of the graph structure. Jiang _et al._[5] introduced a graph learning module to learn an optimal graph structure that makes GCNs [6] better suited for semi-supervised learning; in the same spirit, we introduce a hypergraph learning module in our method. Rong _et al._[21] randomly removed a certain number of edges from the input graph to achieve data augmentation. Similarly, Tang _et al._[22] proposed GRAND by designing a random propagation strategy based on the drop-node mechanism. Yet both [21] and [22] only use a sub-optimal graph structure for semi-supervised node classification. Song _et al._[23] formulated a Bayesian probabilistic model, obtained the posterior distribution from the downstream classification module, and employed a variational inference method to infer an optimal graph. Lee _et al._[24] proposed GraFN to learn discriminative node representations through supervised and unsupervised consistency between two augmented graphs. Li _et al._[25] proposed a cooperative dual-view graph neural network regarding different views as the reasoning processes of two GNN models. Unlike [25], we adopt a dual connection model in our DualHGNN, which is based on the effective combination of two hypergraph neural networks. In addition, most of these methods only utilize the explicit graph structure information, calling for an effective mechanism to explore implicit information in the hypergraphs, such as the density, to improve the performance of semi-supervised node classification.

## III The DualHGNN Architecture

The proposed DualHGNN is shown in Figure 1. The DualHGNN first adopts a multi-view hypergraph learning network to learn a suitable hypergraph structure from multiple views with different similarity measure functions and outputs a new hypergraph. Subsequently, the DualHGNN employs a density-aware hypergraph attention network based on a density-aware attention mechanism to perform hypergraph representation learning for class prediction.
We linearly combine the losses calculated from the outputs of the two sub-networks and perform backpropagation to update the parameters of both modules at the same time. The specific designs are elaborated as follows.

### _Multi-View Hypergraph Learning Network_

A hypergraph can be formulated as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), which includes a set of vertices \(\mathcal{V}\) and a set of hyperedges \(\mathcal{E}\). Let \(X=[x_{1},x_{2},\ldots,x_{n}]\in\mathbb{R}^{n\times d}\) be the collection of \(n\) data vectors of \(d\) dimensions, where \(x_{i}\) denotes the feature vector of the \(i\)-th sample. The structure of a hypergraph can be denoted by an incidence matrix \(H\in\mathbb{R}^{n\times m}\), in which \(H_{x_{i},e_{k}}=1\) indicates that the node \(x_{i}\) is connected by the hyperedge \(e_{k}\), otherwise \(H_{x_{i},e_{k}}=0\), and \(n\) and \(m\) are the numbers of nodes and hyperedges, respectively, in the hypergraph. The main idea of hypergraph learning is to learn an optimal hypergraph structure for semi-supervised node classification in the downstream hypergraph neural networks by jointly optimizing hypergraph structure learning and hypergraph representation learning. In this paper, we propose a multi-view hypergraph learning network to adaptively learn a suitable hypergraph. The multi-view hypergraph learning network learns the hypergraph structure from multiple views with different learnable similarity measure functions to accurately fit the similarity between samples, avoiding the limitation that a single fixed distance measure function may not accurately capture the similarity between all samples. The final hypergraph structure is obtained by merging the hypergraphs learned in each view. To avoid the influence of noise and data redundancy in the original feature space, we perform similarity learning in a low-dimensional embedding space. We adopt a fully connected layer to map the feature matrix \(X_{0}\) from the original feature space to the low-dimensional embedding space, which can be implemented by multiplying a learnable embedding matrix \(P\in\mathbb{R}^{d\times p}\), that is,

\[\tilde{X}=X_{0}P. \tag{1}\]

The similarity between samples can be measured by the function \(\mathrm{sim}(\cdot)\) and collected in a matrix \(S\) as:

\[S_{ij}=\mathrm{sim}(\tilde{x}_{i},\tilde{x}_{j}). \tag{2}\]

To avoid the huge computational overhead brought by a fully connected graph, we perform sparse sampling on the similarity matrix \(S\). We employ a predefined threshold \(\delta_{1}\) to filter out lower similarity values, which can be formulated as

\[\tilde{S}_{ij}=\left\{\begin{array}{cc}S_{ij},&S_{ij}\geq\delta_{1}\\ 0,&S_{ij}<\delta_{1}.\end{array}\right. \tag{3}\]

The hyperedges can then be constructed from the sparse similarity matrix \(\tilde{S}\), yielding the learned hypergraph incidence matrix \(H\). Our multi-view hypergraph learning network performs hypergraph learning on multiple views. Therefore, we can adopt different learnable similarity measure functions on each view, such as cosine similarity and inner product. The final hypergraph structure is obtained as the mean of the hypergraph incidence matrices learned in each view, that is,

\[H=\frac{1}{V}\sum_{v=1}^{V}H^{(v)}, \tag{4}\]

where \(V\) is the number of views, and \(v\) represents the \(v\)-th view; a minimal sketch of this multi-view construction is given below.
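The following NumPy sketch illustrates the multi-view construction of Eqs. (1)-(4). It is a minimal sketch under illustrative assumptions: the embedding matrices are fixed rather than learned end-to-end, and each hyperedge is taken to be the thresholded neighborhood of one node, which is one plausible reading of the construction above rather than the paper's exact recipe.

```python
import numpy as np

def view_similarity(X_emb, kind):
    """Pairwise similarity matrix S (Eq. 2) in the embedded space."""
    if kind == "cosine":
        Xn = X_emb / (np.linalg.norm(X_emb, axis=1, keepdims=True) + 1e-12)
        return Xn @ Xn.T
    return X_emb @ X_emb.T  # inner-product view

def learn_hypergraph(X0, views, delta1=0.3):
    """Multi-view hypergraph incidence H (Eqs. 1-4); one hyperedge per node."""
    H_views = []
    for P, kind in views:
        X_emb = X0 @ P                                  # Eq. (1): low-dim embedding
        S = view_similarity(X_emb, kind)                # Eq. (2)
        S_tilde = np.where(S >= delta1, S, 0.0)         # Eq. (3): sparse sampling
        H_views.append((S_tilde > 0).astype(float))     # hyperedge e_k = neighborhood of x_k
    return np.mean(H_views, axis=0)                     # Eq. (4): average over views

# Toy usage with two views (cosine and inner product) and random embeddings P.
rng = np.random.default_rng(0)
X0 = rng.normal(size=(8, 16))
views = [(rng.normal(size=(16, 4)), "cosine"), (rng.normal(size=(16, 4)), "inner")]
H = learn_hypergraph(X0, views)
print(H.shape)  # (8, 8): n nodes, m = n hyperedges
```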
We introduce a consistency loss into the multi-view hypergraph learning network, aiming to constrain the similarity of the hypergraph structures learned from each view. By introducing this consistency loss, we can effectively use the large number of unlabeled samples to provide weak supervision and improve the generalization of the multi-view hypergraph learning network. The consistency loss is defined as the average squared \(L2\) distance between each view's output and their mean, that is,

\[\mathcal{L}_{con}=\frac{1}{V}\sum_{v=1}^{V}\left\|H^{(v)}-H\right\|_{2}^{2}. \tag{5}\]

Considering that the original hypergraph may contain useful information, we merge the learned hypergraph with the original hypergraph to obtain the final incidence matrix \(\tilde{H}\). It can be formulated as follows:

\[\tilde{H}=\eta H+(1-\eta)H_{0}, \tag{6}\]

where \(\eta\) is a trade-off parameter, and \(H_{0}\) can be a known intrinsic hypergraph or a human-established \(k\)-nearest neighbor hypergraph. The loss function of the hypergraph learning network is

\[\mathcal{L}_{HGL}=\frac{\alpha}{n^{2}}\operatorname{tr}(\tilde{X}^{\top}\tilde{H}\tilde{X})+\frac{\beta}{n^{2}}\left\|\tilde{H}\right\|_{F}^{2}-\frac{\gamma}{n}1^{\top}\log(\hat{H}1)+\frac{\mu}{n^{2}}\mathcal{L}_{con}, \tag{7}\]

where \(\hat{H}=D_{v}^{-1/2}\tilde{H}D_{e}^{-1}\tilde{H}^{\top}D_{v}^{-1/2}\) is the hypergraph Laplacian, in which \(D_{e}\) and \(D_{v}\) are the diagonal matrices of the hyperedge degrees and the vertex degrees, respectively. \(\alpha\), \(\beta\), \(\gamma\) and \(\mu\) are hyperparameters, \(\operatorname{tr}(\cdot)\) denotes the trace of a matrix, \(\left\|\cdot\right\|_{F}\) is the Frobenius norm, and \(\cdot^{\top}\) denotes transposition. This loss function contains four terms: the first term restricts adjacent nodes to having similar features and learns a smooth incidence matrix; the second constrains the learning of a sparse hypergraph; the third penalizes the formation of disconnected hypergraphs; and the last is the consistency loss. A term-by-term sketch of this loss is given below.
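The following is a literal NumPy transcription of Eqs. (5)-(7), useful for checking signs and scalings. It takes the terms exactly as written above (in particular, the smoothness term uses \(\tilde{H}\) directly, which requires \(m=n\) hyperedges as in the previous sketch), so it is one reading of the text rather than a reference implementation.

```python
import numpy as np

def hgl_loss(X_emb, H_views, H0, eta=0.5, alpha=1.0, beta=1.0, gamma=1.0, mu=1.0):
    """Hypergraph learning loss (Eqs. 5-7), transcribed term by term."""
    n = X_emb.shape[0]
    H = np.mean(H_views, axis=0)                                  # Eq. (4)
    L_con = np.mean([np.sum((Hv - H) ** 2) for Hv in H_views])    # Eq. (5)
    H_t = eta * H + (1.0 - eta) * H0                              # Eq. (6)
    dv = np.sqrt(H_t.sum(axis=1)) + 1e-12                         # D_v^{1/2} diagonal
    de = H_t.sum(axis=0) + 1e-12                                  # D_e diagonal
    H_hat = (H_t / dv[:, None]) @ (H_t / de[None, :]).T / dv[None, :]
    smooth = np.trace(X_emb.T @ H_t @ X_emb)                      # first term, as written
    sparsity = np.sum(H_t ** 2)                                   # ||H_t||_F^2
    connect = -np.sum(np.log(H_hat.sum(axis=1) + 1e-12))          # -1^T log(H_hat 1)
    return (alpha * smooth + beta * sparsity + mu * L_con) / n**2 + gamma * connect / n
```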
### _Density-Aware Hypergraph Attention Network_

The proposed DualHGNN uses the density information on the hypergraph structure to build a density-aware hypergraph attention network. It integrates the density information with attention to construct a density-aware attention mechanism and performs hypergraph representation learning through density-aware neighborhood feature aggregation, as elaborated below. The input of the density-aware hypergraph attention network includes the node feature matrix \(X\) and the hyperedge feature matrix \(E\). We perform hypergraph message propagation to obtain the node and hyperedge feature matrices, that is,

\[E=D_{e}^{-1/2}\tilde{H}^{\top}D_{v}^{-1/2}X_{0}, \tag{8}\]

\[X=D_{v}^{-1/2}\tilde{H}D_{e}^{-1/2}E. \tag{9}\]

The density-aware hypergraph attention network mainly consists of two parts, density-aware attention vertex aggregation and density-aware attention hyperedge aggregation, detailed in the following.

Fig. 1: Overview of the proposed DualHGNN framework.

#### III-B1 Density-aware attention vertex aggregation

The density-aware attention vertex aggregation module integrates the density information of each node as a part of the attention and then performs attention vertex aggregation to enhance the hyperedge features. We define a density rule for each node: the density is the sum of the similarities of neighbor nodes whose similarity with the target node is greater than a predefined threshold. The density of node \(x_{i}\) can thus be formulated as

\[\rho_{x_{i}}=\sum_{x_{k}\in\mathcal{N}_{x_{i}}}\left\{\begin{array}{cc}\mathrm{sim}\left(x_{i},x_{k}\right),&\text{if}\ \mathrm{sim}\left(x_{i},x_{k}\right)>\delta_{2}\\ 0,&\text{if}\ \mathrm{sim}\left(x_{i},x_{k}\right)\leq\delta_{2},\end{array}\right. \tag{10}\]

where \(\mathcal{N}_{x_{i}}\) denotes the set of neighbors of node \(x_{i}\), and \(\delta_{2}\) is a predefined threshold. The similarity measure function \(\mathrm{sim}(\cdot)\) can be the cosine similarity in implementation. Intuitively, the higher the density of a node, the more neighbors are similar to it; in other words, the target node lies in a more densely distributed area. Based on the density-peak assumption [26], nodes with higher density are closer to a cluster center. Therefore, higher weights should be assigned to them when performing neighborhood feature aggregation, whereas traditional attention mechanisms only consider feature similarity, which may be sub-optimal. By fusing the density information, we can effectively avoid this defect and achieve more accurate attention neighborhood aggregation. In the density-aware attention vertex aggregation module, we compute the attention weight of each node relative to the hyperedge it lies on. We adopt an attention mechanism \(\mathrm{Attention}(\cdot)\) to calculate the attention weight between node \(x_{i}\) and hyperedge \(e_{k}\), that is,

\[a_{x_{i},e_{k}}=\mathrm{Attention}(Wx_{i},We_{k}), \tag{11}\]

where \(W\) is a trainable weight matrix. The density-aware attention mechanism is then structured by combining the density information with the attention weight, as shown in the following:

\[da_{x_{i},e_{k}}=a_{x_{i},e_{k}}+\tilde{\rho}_{x_{i}}, \tag{12}\]

where \(\tilde{\rho}_{x_{i}}\in[0,\max(a_{X})]\) is the normalized density, and \(a_{X}\) is the collection of attention weights \(a_{x_{i},e_{k}}\). The adopted attention mechanism \(\mathrm{Attention}(\cdot)\) can be designed similarly to GATs [8]. We first concatenate the node embedding vector and the hyperedge embedding vector, and then employ a weight vector \(\alpha_{X}\in\mathbb{R}^{2d\times 1}\) to map the result to a scalar value, which can be formulated as

\[DA_{x_{i},e_{k}}=\frac{\exp\left(\mathrm{LeakyReLU}\left(\alpha_{X}^{\top}(Wx_{i}\|We_{k})\right)+\tilde{\rho}_{x_{i}}\right)}{\sum_{x_{j}\in\mathcal{N}\left(e_{k}\right)}\exp\left(\mathrm{LeakyReLU}\left(\alpha_{X}^{\top}(Wx_{j}\|We_{k})\right)+\tilde{\rho}_{x_{j}}\right)}, \tag{13}\]

where \(\mathcal{N}\left(e_{k}\right)\) denotes the set of vertices connected by the hyperedge \(e_{k}\), \(\mathrm{LeakyReLU}(\cdot)\) is an activation function, and \(\|\) represents the concatenation operation. We then obtain the density-aware attention matrix \(DA_{X}\in\mathbb{R}^{n\times m}\), in which each element is \(DA_{x_{i},e_{k}}\in[0,1]\). Finally, we utilize this density-aware attention matrix to perform feature aggregation, which is formulated as follows:

\[\tilde{E}=\sigma(DA_{X}^{\top}WX), \tag{14}\]

where \(\sigma(\cdot)\) is an activation function, which can be \(\mathrm{ELU}(\cdot)\) in implementation. A compact sketch of the node density computation in Eq. (10) is given below.
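Here is a compact NumPy sketch of the node density in Eq. (10). The neighborhood definition (nodes sharing at least one hyperedge) and the cosine similarity are assumptions consistent with the text, not a prescribed implementation.

```python
import numpy as np

def node_density(X, H, delta2=0.5):
    """Density rho_{x_i} (Eq. 10): sum of cosine similarities sim(x_i, x_k)
    above the threshold delta2, over the neighbors of x_i."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T                      # sim(x_i, x_k), cosine as in the paper
    neigh = (H @ H.T) > 0              # neighbors: nodes sharing a hyperedge (assumed)
    np.fill_diagonal(neigh, False)     # a node is not its own neighbor
    return np.where(neigh & (S > delta2), S, 0.0).sum(axis=1)
```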
#### III-B2 Density-aware attention hyperedge aggregation

The density-aware attention hyperedge aggregation module integrates the density information of each hyperedge as a part of the attention and then aggregates the connected hyperedges to enhance the node embeddings. Similarly, we define a density rule for each hyperedge: the density of a hyperedge is the sum of the densities of all nodes connected by this hyperedge, which is formulated as

\[\rho_{e_{k}}=\sum_{x_{j}\in\mathcal{N}\left(e_{k}\right)}\rho_{x_{j}}. \tag{15}\]

Intuitively, if a hyperedge has a higher density, it is located in a node-dense area. Accordingly, in the density-aware attention hyperedge aggregation module, we calculate the density-aware attention weight of each hyperedge with respect to each node it connects. We employ an attention mechanism similar to that of the density-aware attention vertex aggregation module above, that is,

\[DA_{e_{k},x_{i}}=\frac{\exp\left(\mathrm{LeakyReLU}\left(\alpha_{E}^{\top}(We_{k}\|Wx_{i})\right)+\tilde{\rho}_{e_{k}}\right)}{\sum_{e_{j}\in\mathcal{N}\left(x_{i}\right)}\exp\left(\mathrm{LeakyReLU}\left(\alpha_{E}^{\top}(We_{j}\|Wx_{i})\right)+\tilde{\rho}_{e_{j}}\right)}, \tag{16}\]

where \(\mathcal{N}\left(x_{i}\right)\) represents the set of hyperedges connecting to vertex \(x_{i}\), \(\alpha_{E}\in\mathbb{R}^{2d\times 1}\) is a trainable weight vector, and \(\tilde{\rho}_{e_{k}}\) is the normalized hyperedge density. Afterward, we obtain the density-aware attention matrix \(DA_{E}\in\mathbb{R}^{m\times n}\), which is utilized to aggregate the hyperedge features and update the node embeddings by:

\[\tilde{X}=\sigma(DA_{E}^{\top}\tilde{E}). \tag{17}\]

We combine the two modules described above to form a density-aware hypergraph attention layer:

\[\tilde{X}=\mathrm{ELU}\left(DA_{E}^{\top}\ \mathrm{ELU}\left(DA_{X}^{\top}WX\right)\right). \tag{18}\]

In each density-aware hypergraph attention layer, we first assign a density-aware attention weight to each node and gather the node features to enhance the hyperedge features. We then assign a density-aware attention weight to each hyperedge and aggregate the connected hyperedge features to generate new vertex features. Through this node-hyperedge-node feature transform mechanism, we can efficiently explore the high-order semantic correlation among data. This study adopts the multi-head attention mechanism [8] to enhance the density-aware hypergraph attention layer. The output feature representation of the layer is obtained by concatenating the output features of each head, that is,

\[\tilde{X}=\parallel_{t=1}^{T}\ \mathrm{ELU}\left(DA_{E}^{\top}\ \mathrm{ELU}(DA_{X}^{\top}WX)\right), \tag{19}\]

where \(\parallel_{t=1}^{T}\) denotes the concatenation operation, and \(T\) is the number of attention heads. The final output of the DualHGNN is a low-dimensional node embedding, and the class prediction \(Z\in\mathbb{R}^{n\times c}\) is obtained by applying a \(\mathrm{softmax}(\cdot)\). The cross-entropy loss is adopted as the optimization function:

\[\mathcal{L}_{CE}=-\sum_{i\in L}\sum_{j=1}^{c}Y_{ij}\ln Z_{ij}, \tag{20}\]

where \(L\) is the set of labeled samples. Accordingly, we jointly optimize the multi-view hypergraph learning network and the density-aware hypergraph attention network by linearly combining the hypergraph learning loss and the cross-entropy loss:

\[\mathcal{L}=\mathcal{L}_{HGL}+\lambda\mathcal{L}_{CE}, \tag{21}\]

where \(\lambda\) is a trade-off parameter. Overall, the entire algorithm of DualHGNN is summarized in Algorithm 1, preceded by a condensed code sketch of one density-aware hypergraph attention layer.
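The sketch below assembles Eqs. (13)-(17) into one layer (Eq. 18) in NumPy. The dense masked-softmax formulation, the LeakyReLU slope of 0.2, and the density normalization are illustrative assumptions; a practical implementation would use sparse operations and learned parameters.

```python
import numpy as np

def elu(z):
    return np.where(z > 0, z, np.exp(np.minimum(z, 0.0)) - 1.0)

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

def masked_softmax(logits, mask, axis):
    z = np.where(mask, logits, -np.inf)
    z = np.exp(z - z.max(axis=axis, keepdims=True))
    z = np.where(mask, z, 0.0)
    return z / (z.sum(axis=axis, keepdims=True) + 1e-12)

def da_hgat_layer(X, E, H, W, a_X, a_E, rho_x):
    """One density-aware hypergraph attention layer (Eq. 18)."""
    d = W.shape[0]
    WX, WE = X @ W.T, E @ W.T                  # projected node / hyperedge features
    mask = H > 0
    rho_e = H.T @ rho_x                        # Eq. (15): hyperedge density
    nrm = lambda r: r / (r.max() + 1e-12)      # illustrative density normalization
    # Eq. (13): node->hyperedge attention, normalized over the nodes of each hyperedge
    zX = leaky_relu((WX @ a_X[:d])[:, None] + (WE @ a_X[d:])[None, :]) + nrm(rho_x)[:, None]
    DA_X = masked_softmax(zX, mask, axis=0)
    E_new = elu(DA_X.T @ WX)                   # Eq. (14)
    # Eq. (16): hyperedge->node attention, normalized over the hyperedges of each node
    zE = leaky_relu((WE @ a_E[:d])[None, :] + (WX @ a_E[d:])[:, None]) + nrm(rho_e)[None, :]
    A_E = masked_softmax(zE, mask, axis=1)     # row-stochastic version of DA_E^T
    return elu(A_E @ E_new)                    # Eq. (17)
```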
```
Input: Node feature matrix \(X_{0}\in\mathbb{R}^{n\times d}\), initial hypergraph incidence matrix \(H_{0}\in\mathbb{R}^{n\times m}\).
Initialize \(P\), \(W\), \(\alpha_{X}\), \(\alpha_{E}\)
while not converged do
    for each view of the hypergraph learning network do
        Calculate \(S\) using Eq.(2).
        Sparse sampling to obtain \(\tilde{S}\) using Eq.(3).
        Construct hyperedges based on \(\tilde{S}\).
    end for
    Calculate the learned \(H\) using Eq.(4).
    Calculate \(\tilde{H}\) using Eq.(6).
    Perform hypergraph message propagation to update \(X\) and \(E\) according to Eq.(8) and Eq.(9).
    for each density-aware hypergraph attention layer do
        Calculate node density \(\rho_{X}\) using Eq.(10).
        Calculate \(DA_{X}\) using Eq.(13).
        Perform attention vertex aggregation according to Eq.(14).
        Calculate hyperedge density \(\rho_{E}\) using Eq.(15).
        Calculate \(DA_{E}\) using Eq.(16).
        Perform attention hyperedge aggregation according to Eq.(17).
    end for
    Calculate \(\mathcal{L}_{HGL}\) using Eq.(7).
    Calculate \(\mathcal{L}_{CE}\) using Eq.(20).
    Calculate \(\mathcal{L}\) using Eq.(21).
    Update parameters by performing back-propagation.
end while
```
**Algorithm 1** The algorithm of DualHGNN.

## IV Experiments

### _Datasets_

We evaluate the effectiveness of our method on three widely-used image datasets: Scene15 [27], CIFAR-10 [28], and MNIST [29]. Each dataset is used in a semi-supervised learning setup where only a small part of the data samples are labeled. More details of these datasets and their usage in our experiments are introduced below and summarized in Table I.

**Scene15:** This dataset contains 4,485 RGB images coming from 15 scene categories, and each category contains 200 to 600 samples. In our experiments, we use all 4,485 samples to evaluate our method. For each image, we use the 3,000-dimension features provided in the previous work [30].

**CIFAR-10:** This dataset consists of 10 types of natural images. In our experiments, we use 10,000 images from the independent test set to evaluate our method. To represent each image, we use the same 13-layer CNN network as in [31] to extract the features.

**MNIST:** It contains 10 classes of images of hand-written digits. We randomly selected 1,000 images for each class, obtaining 10,000 images to conduct our experiments. Similar to the prior work [5, 32], we use 784-dimensional feature vectors converted from the grayscale images to represent each sample.

### _Experiment Setup_

For the architecture of DualHGNN, we adopt a multi-view hypergraph learning network with two views to learn a hypergraph and employ a two-layer density-aware hypergraph attention network for hypergraph representation learning, where the first layer uses a multi-head attention mechanism with two heads. The similarity measure functions adopted in the multi-view hypergraph learning network are cosine similarity and inner product. The output dimension of the embedding \(P\) is set to 70 for the Scene15 and CIFAR-10 datasets and to 128 for the MNIST dataset. We introduce \(L2\)-normalization into each view of the hypergraph learning network and each density-aware hypergraph attention layer. The number of units in the density-aware hypergraph attention hidden layer is set to \(64\). We employ the Xavier algorithm [34] for the initialization of \(P\), \(W\), \(\alpha_{X}\) and \(\alpha_{E}\). We adopt the Adam optimizer [35] with a learning rate of 0.2, 0.01, and 0.002 for the Scene15, CIFAR-10, and MNIST datasets, respectively, and the learning rate decays to half after every 100 epochs. We train DualHGNN for a maximum of 2,000 epochs and stop training if the validation loss does not decrease for 100 consecutive epochs. A sketch of this training schedule is given below.
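For concreteness, the optimizer, learning-rate schedule, and early stopping described above can be wired up as follows. This is a minimal PyTorch sketch with a placeholder model, loss, and data; the actual DualHGNN modules and the combined loss of Eq. (21) replace the stand-ins.

```python
import torch
import torch.nn as nn

# Stand-ins so the schedule/early-stopping logic is runnable; the real DualHGNN
# modules, losses, and datasets replace these placeholders.
model = nn.Linear(16, 10)
X, y = torch.randn(100, 16), torch.randint(0, 10, (100,))
criterion = nn.CrossEntropyLoss()

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # e.g. the CIFAR-10 setting
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)

best_val, wait = float("inf"), 0
for epoch in range(2000):                      # at most 2,000 epochs
    optimizer.zero_grad()
    loss = criterion(model(X), y)              # stands in for L_HGL + lambda * L_CE (Eq. 21)
    loss.backward()
    optimizer.step()
    scheduler.step()                           # learning rate halves every 100 epochs
    val_loss = loss.item()                     # placeholder for a real validation pass
    if val_loss < best_val - 1e-6:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= 100:                        # stop after 100 epochs without improvement
            break
```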
### _Performance_

**Baselines**: We compare the proposed DualHGNN with representative graph-based semi-supervised node classification methods, including GCNs [6], GATs [8], GraphSAGE [9], APPNP [20], HGNN [10], DHGNN [12], SGC [14], DropEdge [21], GCNII [33], GRAND [22] and ElasticGNN [15]. For a fair comparison, we construct a \(k\)-nearest neighbor graph for all the methods, with \(k\) set to 15. We retrain all the baseline methods, and all the reported results are averaged over 10 runs.

**Results**: Table II summarizes the classification accuracy comparison on the three datasets, with the best results highlighted. From these results, we make the following observations. First, on the Scene15 and CIFAR-10 datasets, our DualHGNN significantly outperforms all the baseline approaches. Compared with the state-of-the-art method ElasticGNN, our DualHGNN achieves an improvement of at least 1.6% and at most 2.17% on the Scene15 dataset. This may be because the data points of each category in the Scene15 dataset are imbalanced, and the baseline methods degrade on imbalanced data, while our DualHGNN maintains excellent performance. DualHGNN is slightly better than ElasticGNN on the CIFAR-10 dataset, while achieving a lower standard deviation in most cases. These results clearly demonstrate the strong performance of DualHGNN on graph-based semi-supervised node classification. On the MNIST dataset, DualHGNN outperforms most baseline methods. Compared with ElasticGNN, DualHGNN also obtains competitive performance, especially in the case of few labeled samples, such as fewer than 1,000 labels. This also demonstrates the advantage of DualHGNN when fewer labels are available. Moreover, on all three datasets, DualHGNN outperforms the baselines GCNs and GATs by significant margins. Compared with GATs, DualHGNN obtains improvements of at most 1.49%, 0.49%, and 2.17% on the Scene15, CIFAR-10, and MNIST datasets, respectively. DualHGNN significantly outperforms the hypergraph neural network baseline HGNN with margins of at least 3.27%, 2.34%, and 2.29% on the Scene15, CIFAR-10, and MNIST datasets, respectively, which directly indicates the higher predictive accuracy of DualHGNN on semi-supervised node classification, achieved by performing multi-view hypergraph learning and density-aware attention neighborhood aggregation.

### _Ablation Study_

#### IV-D1 Effectiveness of multi-view hypergraph learning network

To verify the effectiveness of the multi-view hypergraph learning network, we conducted an ablation study on it. We removed the multi-view hypergraph learning network from the proposed DualHGNN and denote this variant _DualHGNN w/o HGL_; it performs hypergraph representation learning only on the original \(k\)-NN hypergraph. The proposed version is denoted _DualHGNN w/i HGL_. The ablation experiments are conducted on all three datasets; for ease of presentation, we only show the results on the Scene15 and MNIST datasets in Figure 2. From these results, we clearly observe that employing the multi-view hypergraph learning network to learn the hypergraph structure achieves higher classification accuracy than using only the original \(k\)-NN hypergraph, with improvements of at most 0.51% and 1.09% on the Scene15 and MNIST datasets, respectively.
This further demonstrates the effectiveness of the proposed multi-view hypergraph learning network.

#### IV-D2 Effectiveness of density-aware attention mechanism

We also conducted an ablation study on all three datasets to evaluate the effectiveness of the proposed density-aware attention mechanism. We remove the density information from the proposed DualHGNN and keep only the traditional attention mechanism, denoting this variant _DualHGNN w/o density_; correspondingly, the proposed version is denoted _DualHGNN w/i density_. Similarly, given the page limitation, we only show the results on the Scene15 and CIFAR-10 datasets in Figure 3. From these results, we observe that integrating the density-aware attention mechanism significantly improves the performance of hypergraph representation learning and achieves higher predictive accuracy. This directly demonstrates the effectiveness of the density-aware attention mechanism for graph-based semi-supervised node classification.

### _Parameter Analysis_

#### IV-E1 Effect of the number of views

Learning the hypergraph structure from an appropriate number of views can achieve higher classification performance and lower computing overhead, which once again demonstrates the effectiveness of the proposed multi-view hypergraph learning network.

#### IV-E2 Effect of parameter \(\lambda\)

The proposed DualHGNN jointly optimizes the multi-view hypergraph learning network and the density-aware hypergraph attention network by linearly combining the two losses, where the parameter \(\lambda\) trades off the hypergraph learning loss \(\mathcal{L}_{HGL}\) and the cross-entropy loss \(\mathcal{L}_{CE}\) in Eq.(21). We conduct a parameter analysis experiment to verify how different values of \(\lambda\) influence the performance of DualHGNN. For ease of presentation, we only show the results on the Scene15 and MNIST datasets when setting \(\lambda\) from \(0.1\) to \(2.0\) in Figure 5. It can be observed that choosing an appropriate value for \(\lambda\) increases the classification accuracy of DualHGNN to a certain extent, which is in line with our expectation of jointly optimizing the multi-view hypergraph learning network and the density-aware hypergraph attention network. However, setting \(\lambda\) too large also hurts the performance. In our experiments, we set \(\lambda=1.3\), \(1.1\) and \(0.9\) on the Scene15, CIFAR-10 and MNIST datasets, respectively.

## V Conclusion

In this paper, we propose the Dual Hypergraph Neural Network (DualHGNN), integrating both hypergraph structure learning and hypergraph representation learning simultaneously in a unified network architecture and performing joint optimization for semi-supervised node classification. The DualHGNN first adopts a multi-view hypergraph learning network to learn a hypergraph structure from multiple views with different similarity measure functions. Then DualHGNN employs a density-aware hypergraph attention network, based on a density-aware attention mechanism, to perform hypergraph representation learning. We have conducted extensive experiments on three benchmark datasets and demonstrated the effectiveness of the DualHGNN on various semi-supervised node classification tasks. Although our DualHGNN has achieved excellent performance, there are some limitations. For instance, the proposed multi-view learning mechanism introduces additional computing overhead. Besides, DualHGNN is only applied to node-level tasks, i.e., node classification.
For future work, on the one hand, we plan to apply our method to more and larger graph-based datasets, together with a comparison of computing overhead, to further validate its performance. On the other hand, we will attempt to extend it to graph-level tasks, such as graph classification.

## Acknowledgment

This work was supported by the National Natural Science Foundation of China (62276101) and the National Key R&D Program of China (2019YFC1510400).
2308.12864
Auto-weighted Bayesian Physics-Informed Neural Networks and robust estimations for multitask inverse problems in pore-scale imaging of dissolution
In this article, we present a novel data assimilation strategy in pore-scale imaging and demonstrate that this makes it possible to robustly address reactive inverse problems incorporating Uncertainty Quantification (UQ). Pore-scale modeling of reactive flow offers a valuable opportunity to investigate the evolution of macro-scale properties subject to dynamic processes. Yet, they suffer from imaging limitations arising from the associated X-ray microtomography (X-ray microCT) process, which induces discrepancies in the properties estimates. Assessment of the kinetic parameters also raises challenges, as reactive coefficients are critical parameters that can cover a wide range of values. We account for these two issues and ensure reliable calibration of pore-scale modeling, based on dynamical microCT images, by integrating uncertainty quantification in the workflow. The present method is based on a multitasking formulation of reactive inverse problems combining data-driven and physics-informed techniques in calcite dissolution. This allows quantifying morphological uncertainties on the porosity field and estimating reactive parameter ranges through prescribed PDE models with a latent concentration field and dynamical microCT. The data assimilation strategy relies on sequential reinforcement incorporating successively additional PDE constraints. We guarantee robust and unbiased uncertainty quantification by straightforward adaptive weighting of Bayesian Physics-Informed Neural Networks (BPINNs), ensuring reliable micro-porosity changes during geochemical transformations. We demonstrate successful Bayesian Inference in 1D+Time and 2D+Time calcite dissolution based on synthetic microCT images with meaningful posterior distribution on the reactive parameters and dimensionless numbers.
Sarah Perez, Philippe Poncet
2023-08-24T15:39:01Z
http://arxiv.org/abs/2308.12864v2
Auto-weighted Bayesian Physics-Informed Neural Networks and robust estimations for multitask inverse problems

###### Abstract

In this article, we present a novel data assimilation strategy in pore-scale imaging and demonstrate that this makes it possible to robustly address reactive inverse problems incorporating Uncertainty Quantification (UQ). Pore-scale modeling of reactive flow offers a valuable opportunity to investigate the evolution of macro-scale properties subject to dynamic processes. Yet, they suffer from imaging limitations arising from the associated X-ray microtomography (X-ray \(\mu\)CT) process, which induces discrepancies in the properties estimates. Assessment of the kinetic parameters also raises challenges, as reactive coefficients are critical parameters that can cover a wide range of values. We account for these two issues and ensure reliable calibration of pore-scale modeling, based on dynamical \(\mu\)CT images, by integrating uncertainty quantification in the workflow. The present method is based on a multitasking formulation of reactive inverse problems combining data-driven and physics-informed techniques in calcite dissolution. This allows quantifying morphological uncertainties on the porosity field and estimating reactive parameter ranges through prescribed PDE models with a latent concentration field and dynamical \(\mu\)CT. The data assimilation strategy relies on sequential reinforcement incorporating successively additional PDE constraints, together with a suitable formulation of the heterogeneous diffusion differential operator, leading to enhanced computational efficiency. We guarantee robust and unbiased uncertainty quantification by straightforward adaptive weighting of Bayesian Physics-Informed Neural Networks (BPINNs), ensuring reliable micro-porosity changes during geochemical transformations. We demonstrate successful Bayesian Inference in 1D+Time calcite dissolution based on synthetic \(\mu\)CT images with meaningful posterior distribution on the reactive parameters and dimensionless numbers. We eventually apply this framework to a more realistic 2D+Time data assimilation problem involving heterogeneous porosity levels and synthetic \(\mu\)CT dynamical observations.

**Keywords:** Hamiltonian Monte Carlo, Uncertainty Quantification, Multi-objective training, Imaging inverse problem, Pore-scale porous media, Artificial Intelligence, Bayesian Physics-Informed Neural Networks.

## 1 Introduction

Studying reactive flows in porous media is essential to manage the geochemical effects arising from CO\({}_{2}\) capture and storage in natural underground reservoirs. While long-term predictions are commonly modeled at the field scale [20], pore-scale approaches provide insights into local geochemical interactions between the injected CO\({}_{2}\) and the aquifer structure [52]. Through mathematical homogenization of the sub-micrometer porous medium and appropriate modeling, one can simulate the reactive processes that occur at the pore scale and predict their impact on the macro-scale properties [5, 6]. Geochemical processes are critical components for understanding the mineral trapping mechanisms and local evolving interfaces, whether due to precipitation, crystallization, or dissolution within the porous environment. In this sense, investigating the impact of such reactive processes provides insight into reservoir safety when subjected to chemical interactions that may compromise the aquifer structure.
Pore-scale modeling of reactive flow hence appears as a complementary means to field-scale studies, wherein homogenization theory bridges the gap between the scales. Pore-scale modeling in porous media is intrinsically related to X-ray microtomography (X-ray \(\mu\)CT) experiments. Advances in this imaging technique coupled with efficient numerical simulation offer a valuable opportunity to investigate dynamical processes and study the evolving macro-scale properties, such as the upscaled porosity and permeability [9; 47]. This is of great importance in the risk management perspective of CO\({}_{2}\) storage, and ensuring the reliability of pore-scale modeling and simulation therefore appears as crucial. Uncertainties, however, arise from the microtomography imaging process, where artifacts, noise, and unresolved morphological features are intrinsic limitations inducing important deviations in the estimation of petrophysical properties [54; 19]. In particular, quantifying the impact of sub-resolution porosity in \(\mu\)CT images is identified as critical for geosciences applications [34; 62]. This limiting factor arises from the compromise between the field of view being investigated and the image resolution. For multi-scale porous media such as carbonate rocks, this trade-off can readily result in scan resolutions that do not fully resolve morphological features of the pore space. Intrinsic limiting factors remain in the X-ray \(\mu\)CT imaging process, and investigating their effects and related uncertainties is fundamental to developing more accurate predictive models at the pore scale. In addition to these imaging uncertainties, proper assessment of the kinetic parameters raises challenges in the pore-scale modeling of reactive flows. Mineral reactivities, including the reactive surface area, are critical parameters to account for, though they commonly suffer from discrepancies of several orders of magnitude [49]. Providing uncertainty estimates on these kinetic parameters is essential to ensure reliable calibration of pore-scale models for CO\({}_{2}\) mineral storage assessment. Unsuitable characterization of the reactive surface area, for instance, will considerably affect the numerical model, generating highly distinct behaviors that can become inconsistent with experimental investigations. Such concern is widely known, and several experimental works have developed potential solutions that address dynamical \(\mu\)CT imaging processes of carbonate dissolution [50; 41]. This relies on 4D \(\mu\)CT and differential imaging techniques to derive averaged reaction rates and provide local maps of mineral reactivity at the porous medium surface. However, dynamical \(\mu\)CT scans also suffer from trade-off issues that may disrupt the identification of these parameters [78]. In addition to potential sub-resolved porosity, one needs to consider the compromise between the acquisition time capturing the dynamical process and the image quality. This may result in noisy observation data or non-physical variations leading to misleading estimations of the kinetic parameters. Querying the reliability of reactive parameters involved in pore-scale modeling is crucial, and time-resolved experiments of dynamical processes offer such an opportunity while suffering from imaging limitations. Overall, we identify two current challenges to address reliable pore-scale modeling of reactive flows based on \(\mu\)CT images and ensure trustable evolutions of the macro-scale properties.
The first challenge aims at quantifying morphological uncertainties on the porous medium sample due to unresolved features resulting from X-ray \(\mu\)CT. Investigating the uncertainties in the micro-porosity field is a major concern, and neglecting these uncertainty effects can bias the determination of the evolving petrophysical properties in geological applications. The second challenge concerns the uncertainty quantification of the kinetic parameters for reactive processes. In this sense, providing reliable mineral reactivity from dynamical \(\mu\)CT remains critical in order to perform relevant direct numerical simulation at the pore scale. The present article addresses these two challenges and incorporates Uncertainty Quantification (UQ) concerns in the workflow of pore-scale modeling. Accounting for these concerns, however, requires developing efficient data assimilation techniques to perform extensive parameter estimation studies and uncertainty quantification assessments, and to improve model reliability. In fact, uncertainty quantification is commonly achieved through stochastic PDE models [28; 21] or probabilistic Markov Chain Monte Carlo (MCMC) methods embedding Bayesian inference [46; 61]. The main drawback is that these require numerous evaluations of the PDE model and can thus quickly become computationally expensive. To overcome such computational constraints, machine learning methods have appeared as a popular framework in geosciences and have shown effectiveness in building efficient surrogate models in PDE-based data assimilation problems [69; 22]. This offers alternatives and complementary means to traditional numerical methods to improve predictive modeling based on observation data and investigate uncertainty quantification within a Bayesian context. The development of machine-learning surrogate modeling incorporating uncertainty has, therefore, garnered increasing interest for a wide range of scientific applications [23; 21]. Bayesian Physics-Informed Neural Networks (BPINNs) [76] are a popular framework combining physics-based techniques, data-driven methodology, and intrinsic uncertainty quantification. This benefits from the advantages of neural network structures in building parameterized surrogate models and Bayesian inference standards in estimating probabilistic posterior distributions. BPINNs can, however, be prone to a range of pathological behaviors, especially in multi-objective and multiscale inverse problems. This is because their training amounts to sampling from a weighted multitask posterior distribution, for which setting the weight parameters is challenging. Ensuring robust Bayesian inference, indeed, hinges on properly estimating these distinct task weights. We thus rely on the efficient BPINNs framework developed in [53], which robustly addresses multi-objective and multiscale Bayesian inverse problems including latent field reconstruction. The strategy relies on an adaptive and automatic weighting of the target distribution parameters and objectives. It benefits from enhanced convergence and stability compared to conventional formulations and reduces sampling bias by avoiding manual tuning of critical weighting parameters [37]. The adjusted weights bring information on the task uncertainties, improve the reliability of the noise-related and model adequacy estimates, and ensure unbiased uncertainty quantification.
All these characteristics are crucial to address reliable reactive inverse problems of calcite dissolution, and we thus build the present methodology upon this efficient data-assimilation framework. In this article, we focus on a multitask inverse problem for reactive flows at the pore scale through data assimilation that incorporates uncertainty quantification by means of the Bayesian Physics-Informed Neural Networks framework presented in [53]. We intend to develop a novel approach for pore-scale imaging problems that combines dynamical microtomography and physics-based regularization induced by the PDE model of dissolution processes, for which the images are substantially noisy. To the best of our knowledge, investigating morphological and mineral reactivity uncertainties from the perspective of coupling physics-based models with data-driven techniques is the main novelty of this work. This formulation presents the joint ability to infer the kinetic parameters and quantify the residual micro-porosity generated by unresolved features in the \(\mu\)CT images. Overall, we aim to ensure reliable calibration of the PDE model and account for the morphological imaging uncertainty to provide meaningful evolution of the petrophysical properties due to the reactive process. The present methodology relies on sequential reinforcement of the target posterior distribution, which successively incorporates additional constraints from the PDE model into the data assimilation process. This sequential splitting formulation arises from the strong coupling, in the reactive model, between the micro-porosity field related to the \(\mu\)CT observations and the solute concentration, which is a latent field. Therefore, we consider successive sampling steps dedicated to 1) preconditioning the micro-porosity surrogate model with pure regression on the dynamical \(\mu\)CT images, 2) preconditioning the latent reactive fluid and inferring a first reactive parameter through PDE-constrained tasks, and 3) considering the overall data assimilation problem with two inverse parameters, a predictive posterior distribution on the micro-porosity, and insight on the latent concentration field. We also propose a differentiation strategy wherein we consider a reformulation of the heterogeneous diffusion differential operator involved in the PDE model. This enhances the computational efficiency of the BPINN surrogate model and shows that suitable differential operator expressions considerably reduce the computational cost, especially when dealing with complex non-linear operators. The main contributions of this article are summarized below:

1. We infer reactive inverse parameter uncertainty ranges in prescribed PDE models through suitable dimensionless formulations in inverse problems, for which we identify and define the corresponding dimensionless numbers.
2. We quantify morphological uncertainties from a pore-scale perspective, coupling image-based and physics-informed techniques in dynamical dissolution processes.
3. We improve the relevance and reliability of predictions in dynamical systems through data-driven approaches and a robust Bayesian inference methodology.
4. We provide reliable quantification of the micro-porosity changes during geochemical transformations, with a focus on calcite dissolution processes.
5. We build an intrinsic data assimilation strategy for pore-scale imaging inverse problems, relying on a sequential reinforcement approach and a suitable formulation of the heterogeneous diffusion differential operator.

The remainder of this manuscript is organized as follows: In Sect. 2, we review the current challenges arising from uncertainty quantification concerns in pore-scale modeling of reactive flows, with a focus on \(\mu\)CT limitations and model reliability issues. We focus in Sect. 2.1 on the formulation of pore-scale modeling of reactive flows that we consider to study the dynamical dissolution of calcite. Sect. 3 describes the dimensionless expressions of the dissolution PDE model for direct and inverse problems. We identify the main differences in their formulations, and we establish in Sect. 3.3 the dimensionless inverse problem on calcite dissolution that we address in the data assimilation approach, culminating in equation (19). Sect. 4 is dedicated to presenting the efficient adaptive framework for Bayesian Physics-Informed Neural Networks, which has been developed in our previous work. In Sect. 5, we describe the proposed data assimilation strategy for pore-scale imaging inverse problems, with sequential reinforcement of the target posterior distribution and a computational strategy for the differential operator expressions. We validate this strategy in Sect. 6 on several 1D+Time test cases of calcite dissolution based on synthetic \(\mu\)CT images. This particularly demonstrates successful Bayesian inference of the reactive parameters with posterior distributions on the dimensionless numbers. This also highlights consistent UQ on the micro-porosity field with uncertainty ranges on the residual, potentially unresolved, micro-porosity arising from the \(\mu\)CT dynamical images. Finally, we apply in Sect. 7 our methodology to a more realistic 2D+Time data assimilation problem of calcite dissolution with heterogeneous porosity levels and synthetic \(\mu\)CT dynamical observations.

## 2 Uncertainty Quantification in pore-scale modeling of reactive flows: context and motivation

Pore-scale modeling of reactive flows plays a crucial role in the long-term management of CO\({}_{2}\) capture and storage in natural underground reservoirs. Understanding the local geochemical interactions between the injected CO\({}_{2}\) and the aquifer structure, and how they impact the reservoir macro-scale properties, is an active field in porous media research [33; 39; 67; 10]. These geochemical effects include mineral trapping through precipitation and crystallization, but also dissolution reactions associated with flow and transport mechanisms [52; 3]. Mathematical models of such processes, at the pore scale, are usually combined with Direct Numerical Simulations (DNS) of highly coupled and non-linear Partial Differential Equations (PDE). Such PDE systems characterize the local evolving interfaces and provide insight into reservoir safety under chemical interactions [41; 43; 51]. In this risk management perspective, ensuring the reliability of pore-scale modeling and simulation of reactive flows is therefore essential, and this requires embedding Uncertainty Quantification (UQ) concerns.

### Modeling of pore-scale dissolution

The study of geochemical processes related to CO\({}_{2}\) capture and storage is crucial in the context of risk management and investigation of the coupled mechanisms occurring within aquifers.
In particular, the dissolution of the carbonate rock architecture by the injected CO\({}_{2}\) may compromise the integrity of the geological reservoir. Pore-scale modeling of dissolution phenomena in porous media therefore remains an extensive research area [65; 42]. These mathematical models require a thin description of the highly heterogeneous pore structure in order to account for local interactions. The present article focuses on the pore-scale dissolution of calcite subject to acidic transport in the subsoil. We therefore target the following irreversible chemical reaction with uniform stoichiometric coefficients:

\[\mathrm{CaCO_{3}(s)+H^{+}\longrightarrow Ca^{2+}+HCO_{3}^{-}} \tag{1}\]

In this section, we present the mathematical model used to simulate the calcite dissolution process (1) at the pore scale. We introduce a spatial domain \(\Omega\subset\mathbb{R}^{n}\), \(n=1,2,3\), which corresponds to the porous medium described at its pore scale. This sample description involves a pure fluid region \(\Omega_{F}\), also called void-space and assumed to be a smooth connected open set, and a surrounding solid matrix \(\Omega_{S}\), itself considered a porous region. This region \(\Omega_{S}=\Omega\smallsetminus\Omega_{F}\) is seen as complementing the full domain \(\Omega\), which in practice represents the computational box of the numerical simulations, and the internal fluid/solid interface is denoted \(\Sigma\). We denote by \(\varepsilon=\varepsilon_{f}=1-\varepsilon_{s}\) the micro-porosity field defined on \(\Omega\), where \(\varepsilon_{f}\) and \(\varepsilon_{s}\) are respectively the volume fractions of void and solid, following usual notations [64, 25]. This defines a micro-continuum description of the porous medium such that \(\varepsilon=1\) in the pure fluid region \(\Omega_{F}\), while \(\varepsilon\) takes a small value in the surrounding matrix \(\Omega_{S}\). In fact, the local micro-porosity \(\varepsilon\) is assumed to have a strictly positive lower bound \(\varepsilon(x,t)\geqslant\varepsilon_{0}>0\) for all \((x,t)\) in the spatiotemporal domain \(\Omega\times(0,T_{f})\). This lower bound \(\varepsilon_{0}\) characterizes the residual, potentially unresolved, porosity of the porous matrix. In practice, we set \(\varepsilon_{0}=5\%\) throughout this article. This micro-continuum formulation relies on a two-scale representation of the sample characterized by its micro-porosity field \(\varepsilon\). Such a two-scale description of the local heterogeneities in carbonate rocks is appropriate to simulate the pore-scale physics and establish the governing flow and transport equations in each distinct region. Indeed, we consider the model on the superficial velocity \(u\) introduced and derived rigorously by Quintard and Whitaker in the late 80s [56] and commonly used to this day [31, 73, 64, 42]:

\[\varepsilon^{-1}\frac{\partial\rho u}{\partial t}+\varepsilon^{-1}\nabla\cdot(\varepsilon^{-1}\rho u\otimes u)-\varepsilon^{-1}\nabla\cdot(2\mu D(u))+\mu^{*}K_{\varepsilon}^{-1}u=f-\nabla p \tag{2}\]

along with the divergence-free condition \(\nabla\cdot u=0\). In this equation, \(D(u)=(\nabla u+\nabla u^{T})/2\) is the shear-rate tensor, \(\mu\) is the dynamic viscosity, \(p\) is the volumetric pressure, \(f\) the volumetric driving force, and \(\rho\) the fluid density. The related viscosity \(\mu^{*}\) usually coincides with the fluid viscosity \(\mu\) but may differ in order to account for viscous deviations.
The quantities \(\rho\), \(\mu\), \(\mu^{*}\) and \(f\) are assumed to be constant. In contrast, the permeability \(K_{\varepsilon}\) refers to the micro-scale permeability and depends on the local micro-porosity field \(\varepsilon\). In fact, the permeability of the micro-porous domain is modeled by the empirical Kozeny-Carman relationship [30, 17, 18]:

\[K_{\varepsilon}^{-1}=\kappa_{0}^{-1}\frac{(1-\varepsilon)^{2}}{\varepsilon^{3}} \tag{3}\]

where \(\kappa_{0}\) is a coarse estimation of the reference macro-scale permeability. In this article, we consider both \(K_{\varepsilon}\) and \(\kappa_{0}\) as scalars, meaning we restrict ourselves to the isotropic case, although this formalism can be extended to anisotropic porous media. The superficial velocity formulation (2) defines a two-scale model that can be solved on the overall domain \(\Omega\) -- using for instance penalization principles -- and recovers the usual Navier-Stokes equations in the pure fluid region \(\Omega_{F}\) (since \(K_{\varepsilon}^{-1}=0\) for \(\varepsilon=1\)). At low Reynolds numbers and for highly viscous Darcian flows, equation (2) reduces to the following Darcy-Brinkman-Stokes (DBS) model:

\[-\nabla\cdot(2\mu D(u))+\mu\kappa_{0}^{-1}\frac{(1-\varepsilon)^{2}}{\varepsilon^{2}}u=\varepsilon(f-\nabla p),\quad\mbox{in}\quad\Omega \tag{4}\]

where \(\mu^{*}=\mu\) for the sake of readability. In the present work, we consider this DBS equation (4), which is adequate under the low-Reynolds-number flow regime hypothesis representative of pore-scale modeling. The DBS equation based on the superficial velocity is an efficient formalism to model the hydrodynamics in multi-scale porous media. The flow model (4) needs to be complemented by transport-reaction-diffusion equations for the different species involved in the geochemical processes. These equations are derived from the mass balance of the chemical species [64], and can be written under the form:

\[\frac{\partial\varepsilon\widetilde{C}_{k}}{\partial t}+\nabla\cdot(u\widetilde{C}_{k})-\nabla\cdot\left(\alpha_{k}(\varepsilon)\varepsilon\nabla\widetilde{C}_{k}\right)=\widetilde{R}(\widetilde{C}_{k}), \tag{5}\]

where \(\widetilde{C}_{k}=\rho_{f}\overline{\omega}_{f,k}/M_{k}\) is a concentration per unit of fluid (following the notations introduced by Quintard and Whitaker in [57], and afterward by Soulaine et al. in [64]) with \(M_{k}\) the molar mass of the \(k^{\text{th}}\) species. The term \(\alpha_{k}(\varepsilon)\) is a space-variable effective diffusion coefficient and accounts for reduced diffusion in the surrounding porous matrix due to the tortuosity effect, which is usually quantified using Archie's law [11]:

\[\alpha_{k}(\varepsilon)=D_{m,k}\varepsilon^{\beta}. \tag{6}\]

In this empirical relationship, \(\beta\) refers to the tortuosity index and \(D_{m,k}\) to the molecular diffusion of the considered species [68]. We finally introduce the concentration per unit of volume defined by \(C_{k}=\varepsilon\widetilde{C}_{k}\), so that equation (5) is written

\[\frac{\partial C_{k}}{\partial t}+\nabla\cdot(\varepsilon^{-1}uC_{k})-\nabla\cdot\left(D_{m,k}\varepsilon^{1+\beta}\nabla(\varepsilon^{-1}C_{k})\right)=R(C_{k}), \tag{7}\]

which is simply the superficial formulation of the chemistry.
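For concreteness, the two closure relationships (3) and (6) translate directly into code. The following is a minimal sketch, where the values of \(\kappa_{0}\) and \(D_{m}\) are purely illustrative assumptions:

```python
import numpy as np

def inv_permeability(eps, kappa0):
    """Inverse micro-scale permeability K_eps^{-1}, Kozeny-Carman relationship (3)."""
    return (1.0 - eps) ** 2 / (kappa0 * eps ** 3)

def effective_diffusion(eps, D_m, beta=1):
    """Effective diffusion coefficient alpha_k(eps), Archie's law (6)."""
    return D_m * eps ** beta

eps = np.linspace(0.05, 1.0, 5)              # micro-porosity samples, bounded below by eps_0
print(inv_permeability(eps, kappa0=1e-12))   # vanishes in the pure fluid region (eps = 1)
print(effective_diffusion(eps, D_m=1e-9))    # molecular diffusion recovered at eps = 1
```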
In the context of the current article, we are not interested in monitoring the dissolution products of (1) (_i.e_ the Ca\({}^{2+}\) and HCO\({}_{3}\)\({}^{-}\) ions); hence we mainly focus on the concentrations of the acid and calcium carbonate species, respectively denoted \(C(x,t)=[\mathrm{H}^{+}]\) and \(C_{s}(x,t)=[\mathrm{CaCO}_{3}]\). The solid concentration is linked to the porosity through the molar volume of calcite \(\upsilon\) by the relation \(C_{s}=(1-\varepsilon)/\upsilon\) with \(\upsilon=36.93\,\mathrm{cm}^{3}.\mathrm{mol}^{-1}\). In this configuration, the evolution of the acid phase (_i.e_ the concentration field \(C\)) follows equation (7), and the evolution of the solid phase with superficial concentration \(C_{s}\) is given by the same equation without transport nor diffusion:

\[\frac{\partial C_{s}}{\partial t}=R(C). \tag{8}\]

The reaction rate -- related to the chemical reaction (1) -- is written [44]:

\[R(C)=-K_{s}A_{s}\gamma_{\mathrm{H}^{+}}C\mathbb{1}_{\{(1-\varepsilon)>0\}} \tag{9}\]

where \(K_{s}\) is the dissolution rate constant, \(A_{s}\) the specific reactive area, and \(\gamma_{\mathrm{H}^{+}}\) the activity coefficient of the acid, whose physical units are respectively \(\mathrm{mol}.\mathrm{m}^{-2}.\mathrm{s}^{-1}\), \(\mathrm{m}^{-1}\) and \(\mathrm{m}^{3}.\mathrm{mol}^{-1}\) (such that the chemical activity \(a_{H^{+}}=\gamma_{\mathrm{H}^{+}}C\) is dimensionless). The notation \(\mathbb{1}\) refers to a characteristic or activation function and ensures that the rate of the chemical reaction is non-zero only in the presence of solid minerals. Along with its boundary and initial conditions, this defines a set of partial differential equations modeling reactive flows at the pore scale [66, 44]:

\[\left\{\begin{array}{ll}-\nabla\cdot(2\mu D(u))+\mu\kappa_{0}^{-1}\frac{(1-\varepsilon)^{2}}{\varepsilon^{2}}u=\varepsilon(f-\nabla p),&\text{in}\quad\Omega\times(0,T_{f})\\ \frac{\partial C}{\partial t}+\nabla\cdot(\varepsilon^{-1}uC)-\nabla\cdot\left(D_{m}\varepsilon^{1+\beta}\nabla(\varepsilon^{-1}C)\right)=-K_{s}A_{s}\gamma_{\mathrm{H}^{+}}C\mathbb{1}_{\{(1-\varepsilon)>0\}},&\text{in}\quad\Omega\times(0,T_{f})\\ \frac{\partial\varepsilon}{\partial t}=\upsilon\,K_{s}A_{s}\gamma_{\mathrm{H}^{+}}C\mathbb{1}_{\{(1-\varepsilon)>0\}},&\text{in}\quad\Omega\times(0,T_{f})\\ \text{+ adequate boundary and initial conditions, along with }\nabla\cdot u=0\end{array}\right. \tag{10}\]

which is strongly coupled, since \(u\) and \(C\) (by means of \(\varepsilon\)) depend on each other. Finally, one can notice that the reactive system (10) is valid on the whole domain \(\Omega\), whether the local state is fluid or not. In the pure fluid region, this system indeed converges toward a Stokes hydrodynamic model coupled with a standard transport-diffusion equation for the acid, with its molecular diffusion \(D_{m}\). The overall system (10) defines the direct formulation of the calcite dissolution problem, following the chemical equation (1), at the pore scale. Nonetheless, appropriate calibration of the kinetic input parameters, such as the specific surface area \(A_{s}\) or the dissolution rate constant \(K_{s}\), so that the model compares with experimental results remains challenging. This comes from the observation that these reactive constants can span several orders of magnitude, inducing highly different behaviors of the system. Quantifying the uncertainties on these kinetic parameters thereby appears as a necessity to provide reliable reactive flow models at the pore scale.
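As a small illustration of the reaction term (9) and of the porosity-solid coupling \(C_{s}=(1-\varepsilon)/\upsilon\), a minimal sketch follows; the kinetic constants are assumed values for demonstration, not calibrated parameters from this article:

```python
import numpy as np

# illustrative kinetic constants (assumed values, for demonstration only)
K_s, A_s, gamma_H = 1e-4, 1e4, 1e-3     # mol.m^-2.s^-1, m^-1, m^3.mol^-1
upsilon = 36.93e-6                      # molar volume of calcite, m^3.mol^-1

def reaction_rate(C, eps):
    """Dissolution rate R(C) of equation (9): the indicator factor makes the
    rate non-zero only where solid minerals remain, i.e. (1 - eps) > 0."""
    return -K_s * A_s * gamma_H * C * ((1.0 - eps) > 0)

def porosity_from_solid(C_s):
    """Micro-porosity recovered from the solid concentration, C_s = (1 - eps) / upsilon."""
    return 1.0 - upsilon * C_s

eps = np.array([1.0, 0.5, 0.05])        # fluid, partially dissolved, solid matrix
print(reaction_rate(C=1.0, eps=eps))    # zero rate in the pure fluid voxel
```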
### 2.2 X-ray microtomography limitations: toward the uncertainty quantification assessment

Independently of the modeling aspects developed in Sect. 2.1 and of the choice of numerical method used as a direct solver, pore-scale simulations are intrinsically related to X-ray microtomography (X-ray \(\mu\)CT). In fact, the latter provides, beforehand, scans of the complex geometry of a representative elementary volume (REV), defined as the characteristic minimal volume on which the microscopic variables can be averaged [12]. Pore-scale numerical simulations of reactive dynamical processes are then performed on this initial REV geometry -- which defines the domain \(\Omega\) -- tracking the dynamical interface evolutions and micro-property changes. This REV concept also allows the passage from the pore scale to the Darcy scale by referring to representative criteria of the domain in terms of averaged properties, such as the macro-porosity \(\phi\), the bulk permeability \(\kappa_{0}\), and the reactive surface area \(A_{s}\). These bulk parameters are derived from the evolving micro-structures through homogenization principles and upscaling of the governing equations [58, 70], as illustrated in Fig 1. Indeed, at the Darcy scale, the upscaled porosity and the upscaled absolute permeability are respectively defined by:

\[\phi=<\varepsilon>_{\Omega}\text{\ \ and\ }\kappa_{0}=\frac{\mu\phi<u>_{\Omega_{F}}}{<f>_{\Omega_{F}}}, \tag{11}\]

using the notations introduced in Sect. 2.1, where \(<.>\) represents the average on the corresponding domain and \(u\) is the pore-scale velocity. Therefore, trustworthy measurement of the impact of the reactive processes on the porous medium macro-properties requires ensuring reliable quantification of the changes in micro-properties. This can be achieved under the constraint of having a fine description of the pore space, with correct knowledge of the surrounding solid matrix defined by the local micro-porosity field \(\varepsilon\). An efficient representation of the porous sample at the pore scale is necessary to guarantee reliable estimation of the evolution of the macro-properties along the reactive processes. Advances in X-ray microtomography offer such an opportunity.

X-ray \(\mu\)CT is regarded as a powerful high-resolution imaging technique able to non-destructively determine the inner structure of a porous sample up to a characteristic scale, which defines the voxel size. The voxels are small elementary volumes (of a few \(\mu m\)) that compose the overall 3D reconstructed sample geometry and are identified by different grey levels characterizing the local attenuation of the material. The resulting dataset can either be segmented to separate the pore space (fluid phase) from the surrounding solid matrix, or benefit from the information related to the greyscale values of the different voxels. The segmented images lend themselves to numerical simulations that require an explicit representation of the fluid-solid interfaces (_e.g._ Lattice-Boltzmann [2]), unlike the Darcy-Brinkman-Stokes formulation presented in Sect. 2.1, which incorporates the voxel greyscale values. Indeed, these grey-level shades, depicting the local attenuation of the material, are correlated to the porosity field description \(\varepsilon\) and can be taken into account in the DBS model through equation (4).
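Coming back to the upscaling relations (11), their evaluation reduces to plain averages over a discretized REV. A minimal sketch follows, where the synthetic porosity, velocity, and driving-force fields are assumptions of the illustration:

```python
import numpy as np

def macro_porosity(eps):
    """Upscaled porosity phi = <eps>_Omega from equation (11)."""
    return eps.mean()

def macro_permeability(u, f, eps, mu):
    """Upscaled absolute permeability kappa_0 = mu * phi * <u>_F / <f>_F,
    averaging the velocity and driving force over the pure fluid region only."""
    fluid = eps == 1.0
    return mu * macro_porosity(eps) * u[fluid].mean() / f[fluid].mean()

eps = np.where(np.random.rand(64, 64) > 0.4, 1.0, 0.05)   # synthetic REV porosity
u = np.random.rand(64, 64) * 1e-6                          # pore-scale velocity field
f = np.full((64, 64), 1e3)                                 # uniform driving force
print(macro_porosity(eps), macro_permeability(u, f, eps, mu=1e-3))
```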
Figure 1: **From the pore scale to the reservoir scale: an upscaling principle.** Schematic representation of a reservoir-scale structure, on the left, with its inherent averaged macro-properties \(\phi\) and \(\kappa_{0}\) computed on a representative elementary volume (REV). Local description of the pore-scale heterogeneities in this REV, on the right, along with its intrinsic micro-scale properties. These are the local micro-porosity field \(\varepsilon\) (bounded by the least physically possible porosity \(\varepsilon_{0}>0\)) and the micro-scale permeability \(K_{\varepsilon}\), based on the Kozeny-Carman relationship from equation (3).

This introduces Digital Rock Physics applications as the joint use of high-resolution X-ray computed microtomography and advanced simulation techniques to characterize, _inter alia_, the rock petrophysical properties and their evolutions [8, 9]. Purely imaging-based alternatives directly exploit the resulting dataset to derive the sample's effective physical properties (porosity, permeability, dispersivity...) but also geochemical rates and mineral reactivity in dynamical processes [62, 49]. Therefore, X-ray microtomography is both a complementary means to numerical modeling at the pore scale and a fundamental imaging process on its own to study the implications of CO\({}_{2}\) storage on porous materials.

However, limitations in the \(\mu\)CT imaging process may affect the determination of the medium effective properties, and question the reliability of the predictive models based on these inputs. In fact, several imaging artifacts exist and disrupt the efficient description of the pore space morphology. Firstly, the finite resolution of the \(\mu\)CT pipeline is challenging, as the interfaces appear blurry and do not manifest themselves as sharp intensity steps in the images, but rather as gradual intensity changes spanning several voxels [59]. Actually, the local attenuation signal within a voxel is influenced by the material heterogeneity in its neighborhood, so that the resulting greyscale value represents averaged properties: this is known as the partial volume effect [48]. This phenomenon is also involved when morphological features of interest are smaller than the characteristic voxel size, resulting in unresolved micro-porosity or roughness of the pore space walls. Quantifying sub-resolution porosity, which is a prevalent imaging artifact in \(\mu\)CT, and measuring its impact on numerical modeling and simulation is identified as critical for geosciences applications [34, 62, 19]. Such an issue is well-known and arises from a compromise between the sample volume being investigated and the scan resolution. For porous media covering a wide range of pore scales, this trade-off can readily result in voxel sizes that are not able to capture fully resolved morphological features of the pore space. Finally, in the presence of sharp density transitions, the different refraction indices on either side of the interface furthermore lead to so-called edge enhancement, which manifests itself as over- and under-shoots of the grey level immediately next to the interface [14]. Consequently, the position of the material interface is prone to uncertainty, in addition to the roughness of the pore space walls, which therefore results in an approximation of the true morphology.
While the mentioned effects can be minimized, they cannot be eliminated, and they add uncertainties to the estimation of the effective properties, the characterization of the void/solid interfaces, and the reliability of the numerical models. In addition, the accuracy of X-ray \(\mu\)CT images is challenged by additional artifacts coming from both inherent physical and technical limitations [29]. These include instrumental noise; beam hardening [71], which results in cupping (an underestimation of the attenuation at the center of the object compared to its edges) or drag/streak appearances (due to an underestimation between two areas of high attenuation); beam fluctuations along the scanning process; and scattered radiation coming from the object and/or the detector. These variations can manifest as noise, ring or streak artifacts, and halos that are often hard to distinguish from real features and therefore hinder the identification of sample heterogeneities at multiple scales. Ubiquitous limiting factors remain in the X-ray \(\mu\)CT imaging process, and the assessment of their related uncertainties is fundamental to developing more accurate predictive models.

### 2.3 Dynamical microtomography: mineral reactivity and imaging morphological uncertainties

Accounting for the \(\mu\)CT morphological uncertainties and sub-resolution porosity, introduced in Sect. 2.2, is essential in providing reliable pore-scale simulations of reactive flows. This is of primary importance when considering risk assessment and predicting meaningful evolutions of the rock macro-properties under geochemical effects. The study of these overall X-ray imaging limitations therefore raises concerns in the research community, and investigations are conducted on quantifying their implications for the effective properties. In fact, sub-resolution porosity may lead to a misleading estimation of the pore-space connectivity that disrupts the flow description within the REV and induces significant deviations in the computed permeability. Several modeling approaches, mainly based on upscaling principles, aim at quantifying these deviations. They cover the DBS formulation together with the Kozeny-Carman equation (3), which estimates the permeability of the micro-porous domain through a heuristic relation with the residual micro-porosity [63]. However, in the absence of prior knowledge of this unresolved residual porosity, the setting of the micro-porous permeability becomes controversial. Alternatives rely on appropriate boundary conditions to model the unresolved features and wall roughness through a slip-length formalism, and range from theoretical implications [1, 31, 32] to the practical computation of the permeability deviations on real 3D \(\mu\)CT scans [54]. Apart from the modeling quantification of the effective property uncertainties, experimental and imaging approaches are developed to resolve the sub-resolution porosity. This involves differential imaging techniques based on comparisons between several enhanced-contrast scans [34], statistical studies based on \(\mu\)CT histograms [79], or deep learning methodologies such as Convolutional Neural Networks (CNN) and Generative Adversarial Networks (GAN) that provide super-resolved segmented images [7, 77]. Overall, uncertainty quantification of the X-ray \(\mu\)CT limitations either relies on appropriate mathematical modeling with the estimation of computed deviations, or on experimental approaches based on image treatment analysis of the \(\mu\)CT scans.
The reliability of pore-scale modeling related to X-ray \(\mu\)CT scans is thus questioned due to the inherent imaging limitations and morphological uncertainties. At the same time, proper assessment of the kinetic parameters in dynamic phenomena, including mineral reactivity and reactive surface area, also raises challenges. Actually, mineral reactivity is a critical parameter to account for in many geosciences applications, though discrepancies of several orders of magnitude can be found in the literature [16, 24]. However, these parameters are usually regarded as inputs of the numerical models and possibly tuned to match experimental results. Providing reliable uncertainty estimates on these kinetic parameters is, therefore, of great interest for trustworthy pore-scale reactive simulations. Such concern has received attention over the past decades, and considering dynamic imaging processes subsequently appears as a necessity. Several experimental works have already focused on 4D imaging techniques of carbonate dissolution to provide fundamental information on mineral reaction rates [50]. These kinetic characterization studies mostly rely on voxel-to-voxel subtraction of consecutive images in order to quantify the change in greyscale values, hence the evolution of the dissolution process and the calcite retreat. This is referred to, in the literature, as differential imaging and has been investigated for different imaging techniques such as X-ray \(\mu\)CT and atomic force microscopy (AFM) [34, 60]. These approaches make it possible to capture heterogeneous spatial distributions of calcite dissolution rates through successive real-time measurements. They provide local maps of mineral reactivity at the crystal surfaces and quantification of their morphological evolutions [60, 49]. Menke et al. [41] also performed _in situ_ time-resolved experiments of carbonate dissolution under reservoir conditions (in terms of pressure and temperature) to derive averaged reaction rates and evaluate dynamical changes in the effective properties. Investigation of mineral surface reactivity is another challenging concern to ensure reliable calibration of pore-scale models for CO\({}_{2}\) dissolution, and is usually achieved through dynamical \(\mu\)CT experiments.

Nonetheless, dealing with dynamical \(\mu\)CT images brings its own challenges [78]. In addition to the unresolved features, dynamical imaging of chemical processes requires a compromise between the acquisition time and the image quality. Indeed, capturing fast dissolution processes, for instance, imposes short acquisition times and can result in highly noisy data since, statistically, the number of photons reaching the detector is reduced. In such a case, differential imaging makes it difficult to distinguish between true morphological changes and the derivation of highly noisy data. On top of that, any additional movement in the sample, not related to the dissolution process but rather resulting from instrumentation artifacts, makes it challenging to rely on dynamical samples alone to characterize \(\mu\)CT errors and uncertainty. Indeed, Zhang et al. [78] identified, on a Bentheimer sample, that about 32% of the voxels show at least a 2% difference in greyscale values between two consecutive fast scans. These differences are not physically based variations but rather intrinsic measurement uncertainties.
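In its simplest form, the differential imaging principle described above reduces to a voxel-to-voxel subtraction of consecutive scans. The sketch below illustrates this; the change threshold and the sign convention (an attenuation drop flags calcite retreat) are assumptions of the illustration:

```python
import numpy as np

def differential_image(scan_t0, scan_t1, threshold=0.02):
    """Voxel-to-voxel subtraction of two consecutive greyscale scans.
    Drops larger than `threshold` are flagged as candidate dissolution;
    smaller variations are treated as intrinsic measurement variability."""
    delta = scan_t1.astype(float) - scan_t0.astype(float)
    dissolving = delta < -threshold        # attenuation drop = calcite retreat
    return delta, dissolving

scan0 = np.random.rand(32, 32)             # synthetic consecutive scans
scan1 = scan0 - 0.05 * (np.random.rand(32, 32) > 0.7)
delta, mask = differential_image(scan0, scan1)
print(mask.mean())                         # fraction of voxels flagged as dissolving
```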
Zhang et al. also show that such artifact uncertainties can be reduced by using slower acquisition times, though this is not always feasible when capturing fast dynamical processes. Time-resolved experiments of dynamical processes can provide insights into kinetic dissolution rates, though they also suffer from imaging limitations that can lead to misleading estimations. Inferring reliable mineral reactivity from dynamical microtomography and quantifying imaging morphological uncertainties are identified as the major issues that can bias the determination of evolving petrophysical properties in geological applications. Current methodologies addressing these problems mainly fall into two categories: on one side, purely model-related approaches based on static \(\mu\)CT scans and upscaling principles, and on the other side, image treatment analysis relying on experimental static or dynamical images. Nonetheless, neither morphological uncertainty nor reaction rate quantification has been investigated from the perspective of coupling physics-based models with data-driven techniques.

To the best of our knowledge, the development of data assimilation approaches on pore-scale imaging problems, combining dynamical microtomography and the physical regularization induced by the PDE model of reactive processes, is the main novelty of the present manuscript. The motivation for this formulation lies in its joint ability to infer mineral reactivity parameters and quantify the residual micro-porosity generated by unresolved features in the microtomography imaging process. Therefore, we assert that a proper balance between dynamical microtomography imaging and its PDE-based physical formulation can provide insights into the uncertainty quantification issues related to reactive pore-scale modeling. In this direction, we propose a novel methodology that uses a physics-based dissolution model as a regularization constraint for dynamical data-driven microtomography inference. This aims at quantifying both the uncertainties on the kinetic parameters, to perform reliable model calibration, and the morphological imaging uncertainty on the unresolved micro-porosity field.

## 3 Direct and inverse problem setup

This section is dedicated to setting up the dimensionless versions of the dissolution PDE model for direct and inverse problems. We establish the main differences in their dimensionless formulations and define the modeling assumptions used in the present article.

### 3.1 Usual dimensionless formulation of the direct problem

The overall calcite dissolution PDE system, defined in equation (10), can model a wide range of dissolution regimes and patterns characterized by well-established dimensionless numbers. By setting \(x^{*}=x/L\) and \(t^{*}=tD_{m}/L^{2}\), and following the notations of Sect. 2.1, one can introduce the so-called Peclet and Reynolds numbers

\[\mathrm{Pe}=\overline{u}L/D_{m}\;\;\mathrm{and}\;\;\mathrm{Re}=\rho\overline{u}L/\mu \tag{12}\]

where \(\overline{u}\) and \(L\) are respectively the characteristic velocity and length of the sample. In the context of pore-scale simulations, the inertial effects become negligible compared to the viscous forces due to low Reynolds numbers -- typically, we have the assumption \(\mathrm{Re}\ll 1\) throughout this article.
Regarding the chemical reactions, two dimensionless numbers are defined: the catalytic Damkohler number denoted \(\mathrm{Da}_{\mathrm{II}}\) and its inherited convective number \(\mathrm{Da}_{\mathrm{I}}\), expressed as

\[\mathrm{Da}_{\mathrm{II}}=\frac{K_{s}A_{s}\gamma_{\mathrm{H}^{+}}L^{2}}{D_{m}}\quad\mathrm{and}\quad\mathrm{Da}_{\mathrm{I}}=\frac{\mathrm{Da}_{\mathrm{II}}}{\mathrm{Pe}}=\frac{K_{s}A_{s}\gamma_{\mathrm{H}^{+}}L}{\overline{u}}. \tag{13}\]

The characteristic length \(L\) is usually related to average pore throat diameters, or \(L^{2}\) can be set as the surface of a section divided by the average number of grains (_e.g._ see [27] for practical cases). Otherwise, it is possible to set the characteristic length of the problem as \(L=\sqrt{\kappa_{0}}\), provided an experimental or numerical estimation of \(\kappa_{0}\) [64]. All these dimensionless numbers are meaningful in direct dissolution problems to qualify the different dominant regimes in terms of diffusion, reaction, and advection. Using the dimensionless variables \((x^{*},t^{*})\), the normalized concentration \(C^{*}=C/C_{0}\) and velocity \(u^{*}=u/\overline{u}\), one finally gets the dimensionless formulation of the overall reactive flow system (10) on the dimensionless spatiotemporal domain \(\Omega^{*}\times(0,T_{f}^{*})\). This leads to the usual PDE model:

\[\left\{\begin{array}{ll}-\Delta u^{*}+L^{2}\kappa_{0}^{-1}\frac{(1-\varepsilon)^{2}}{\varepsilon^{2}}u^{*}=\varepsilon(f^{*}-\nabla p^{*}),&\mathrm{in}\quad\Omega^{*}\times(0,T_{f}^{*})\\ \frac{\partial C^{*}}{\partial t^{*}}+\mathrm{Pe}\,\nabla\cdot(\varepsilon^{-1}u^{*}C^{*})-\nabla\cdot\left(\varepsilon^{1+\beta}\nabla(\varepsilon^{-1}C^{*})\right)=-\mathrm{Da}_{\mathrm{II}}C^{*}\mathbb{1}_{\{(1-\varepsilon)>0\}},&\mathrm{in}\quad\Omega^{*}\times(0,T_{f}^{*})\\ \frac{1}{\upsilon C_{0}}\frac{\partial\varepsilon}{\partial t^{*}}=\mathrm{Da}_{\mathrm{II}}C^{*}\mathbb{1}_{\{(1-\varepsilon)>0\}},&\mathrm{in}\quad\Omega^{*}\times(0,T_{f}^{*})\\ \text{+ adequate boundary and initial conditions, along with }\nabla^{*}\cdot u^{*}=0\end{array}\right. \tag{14}\]

obtained by multiplying the hydrodynamic DBS equation by \(L/\rho\overline{u}^{2}\) and the chemical equations in (10) by \(L^{2}/C_{0}D_{m}\). The notations \(f^{*}\) and \(p^{*}\) in the dimensionless DBS equation are defined by \(f^{*}=fL^{2}/(\mu\overline{u})\) and \(p^{*}=pL/(\mu\overline{u})\), and finally \(C_{0}\) is a characteristic constant for the acid concentration field. This PDE system defines the overall dimensionless formulation of the direct problem of calcite dissolution.

### 3.2 Modeling assumptions on the direct and inverse problems

In the applications of Sect. 6 and 7, we consider inverse problems in the dissolution process of calcite cores with heterogeneous porosity levels, for 1D and 2D spatial configurations. Although the 1D+Time test case is purely synthetic and aims to validate the method developed in Sect. 5, the 2D+Time application addresses a more realistic problem that can be applied to isotropic porous samples. Several modeling assumptions are nonetheless made to address both the reactive direct and inverse problems. These modeling assumptions are detailed hereafter and determine the dissolution regime considered in the applications. The dimensionless numbers \(\mathrm{Re}\) and \(\mathrm{Pe}\) are common to both the direct and inverse formulations and respectively establish the viscosity-dominated regime and the convective or diffusive transport regime.
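All these dimensionless groups follow directly from the physical constants of equations (12)-(13). A minimal sketch with purely illustrative values (a slow Darcian flow through a millimeter-scale sample) could read:

```python
def dimensionless_numbers(u_bar, L, D_m, rho, mu, K_s, A_s, gamma_H):
    """Peclet and Reynolds numbers (12) and Damkohler numbers (13)."""
    Pe = u_bar * L / D_m
    Re = rho * u_bar * L / mu
    Da_II = K_s * A_s * gamma_H * L ** 2 / D_m      # catalytic Damkohler number
    Da_I = Da_II / Pe                               # convective Damkohler number
    return Pe, Re, Da_II, Da_I

# illustrative values only (assumptions, not calibrated constants)
Pe, Re, Da_II, Da_I = dimensionless_numbers(
    u_bar=1e-6, L=1e-3, D_m=1e-9, rho=1e3, mu=1e-3,
    K_s=1e-4, A_s=1e4, gamma_H=1e-3)
print(f"Pe={Pe:.3g}, Re={Re:.3g}, Da_II={Da_II:.3g}, Da_I={Da_I:.3g}")
```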
At the initial state of the dynamic imaging process, we assume that the porous medium is completely saturated with the acid by capillary effect. The amount of reactant at the pore interface is initially homogeneously distributed, and therefore we expect, at first, a cylindrical dissolution regime of the calcite core with radial symmetry. Subsequently, the dissolution process may deviate from this cylindrical pattern due to local heterogeneities in the micro-porosity field \(\varepsilon\). We also suppose a low-Peclet hypothesis \(\mathrm{Pe}\ll 1\), so that the reactant diffusion is dominant over the advection phenomena, resulting in more homogeneous dissolution rates at the interface (_e.g._ see [64] for the characterization of dissolution regimes). In this sense, continuous acid injection is maintained at a given fluid flow rate to ensure a diffusion-dominated regime for the dissolution. Consequently, we neglect the advection effects in the present article and focus on the following reaction-diffusion system:

\[\left\{\begin{array}{ll}\frac{\partial C^{*}}{\partial t^{*}}-\nabla\cdot\left(\varepsilon^{1+\beta}\nabla(\varepsilon^{-1}C^{*})\right)=-\mathrm{Da}_{\mathrm{II}}C^{*}\mathbb{1}_{\{(1-\varepsilon)>0\}},&\mbox{ in}\quad\Omega^{*}\times(0,T_{f}^{*})\\ \frac{1}{\upsilon C_{0}}\frac{\partial\varepsilon}{\partial t^{*}}=\mathrm{Da}_{\mathrm{II}}C^{*}\mathbb{1}_{\{(1-\varepsilon)>0\}},&\mbox{ in}\quad\Omega^{*}\times(0,T_{f}^{*})\\ C^{*}=1,&\mbox{ on}\quad\partial\Omega^{*}\times(0,T_{f}^{*})\\ C^{*}(x,0)=C_{\mathrm{init}}(x)/C_{0}:=C_{\mathrm{init}}^{*},&\mbox{ in}\quad\Omega^{*}\times\{0\}\end{array}\right. \tag{15}\]

written in its dimensionless form with the normalized concentration field \(C^{*}=C/C_{0}\). The continuous acid injection is modeled through non-homogeneous Dirichlet boundary conditions on \(C^{*}\), with the characteristic constant \(C_{0}\) chosen as the value of the Dirichlet boundary conditions on \(C\). The initial condition on the micro-porosity field \(\varepsilon\) arises from the dry microtomography scan or the initial synthetic porous medium. Ultimately, we obtain a PDE model driven by one dimensionless number, namely the catalytic Damkohler number, characterizing the ratio of the reaction rate over the diffusion effects. The system (15) is, therefore, consistent with the standard dimensionless formulation of the direct problem, subject to a diffusion-dominated transport regime. However, we simply cannot retain the dimensionless temporal variable \(t^{*}=tD_{m}/L^{2}\) in a reactive inverse problem, as it strongly depends on the molecular diffusion \(D_{m}\), which is among the unknown kinetic parameters to be estimated. In the next section, we focus on the challenge arising from the dimensionless formulation of a reactive inverse problem in the context of calcite dissolution.
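To fix ideas before moving to the inverse formulation, the direct system (15) can be simulated in 1D with a few lines of explicit finite differences. This is a minimal sketch under our own discretization assumptions (grid, time step, and parameter values are illustrative), not the solver used in this article:

```python
import numpy as np

def solve_dissolution_1d(eps_init, Da_II=1.0, beta=1, ups_C0=0.1, T=1.0, nt=200_000):
    """Explicit finite-difference sketch of the dimensionless reaction-diffusion
    system (15) in 1D: heterogeneous diffusion of C* plus a first-order reaction,
    coupled to the porosity evolution. Dirichlet conditions C* = 1 model the
    continuous acid injection; ups_C0 stands for the constant upsilon * C_0."""
    eps = eps_init.copy()
    nx = eps.size
    dx = 1.0 / (nx - 1)
    dt = T / nt                          # chosen so that dt < dx**2 / 2 (stability)
    C = np.zeros(nx)
    C[0] = C[-1] = 1.0
    for _ in range(nt):
        mask = (1.0 - eps) > 1e-12       # reaction active only where solid remains
        u = C / eps                      # intrinsic concentration C*/eps
        k_face = 0.5 * (eps[1:] ** (1 + beta) + eps[:-1] ** (1 + beta))
        flux = k_face * np.diff(u) / dx  # heterogeneous diffusive flux on cell faces
        rate = Da_II * C * mask
        C[1:-1] += dt * (np.diff(flux) / dx - rate[1:-1])
        eps = np.minimum(1.0, eps + dt * ups_C0 * rate)   # d(eps)/dt* = ups*C0*Da*C
    return C, eps

# a calcite core (eps = 0.05) immersed in pure fluid (eps = 1)
x = np.linspace(0.0, 1.0, 201)
C, eps = solve_dissolution_1d(np.where(np.abs(x - 0.5) < 0.2, 0.05, 1.0))
```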
### 3.3 Dimensionless inverse problem on calcite dissolution

In this article, we address pore-scale imaging inverse problems in dissolution processes. We aim to recover and quantify uncertainties both on the micro-porosity field description \(\varepsilon\) and on the reactive parameters involved in the diffusion-reaction system. Among these inverse kinetic parameters, one can find the molecular diffusion \(D_{m}\), the tortuosity index \(\beta\), the dissolution rate constant \(K_{s}\), and even the specific surface area \(A_{s}\) -- usually estimated on the dry \(\mu\)CT scan. Consequently, these parameters cannot, in particular, be used for the non-dimensionalization of the model, since they are precisely the quantities to be determined. Apart from special considerations of the tortuosity index of the sample, the other inverse parameters characterize the dissolution regime of the dynamical \(\mu\)CT experiment. In this sense, they provide insight into the physical catalytic Damkohler number \(\mathrm{Da}_{\mathrm{II}}\), though the direct dimensionless formulation (15) is inappropriate for an inverse problem. The dimensionless temporal variable \(t^{*}\) in the PDE system (15) is, indeed, closely related to the unknown molecular diffusion, compromising its application to inverse modeling. Establishing the dimensionless formulation of the inverse dissolution problem is not straightforward and therefore requires a different dimensionless time. In the inverse problem, we consequently introduce the new temporal variable

\[t^{*}=\frac{tD_{\mathrm{ref}}}{L^{2}}, \tag{16}\]

where \(D_{\mathrm{ref}}\) is a scaling factor of the dimensionless formulation for the chemical kinetics. This scaling factor can also be defined as \(D_{\mathrm{ref}}=L^{2}/T\), where \(T\) is the characteristic time of the dimensionless problem -- not the physical characteristic time for the diffusion, since the latter is unknown. In practice, we can rely on a rough estimation of the physical dissolution time \(T_{f}\) -- determining the end of the dynamical process -- and a given dimensionless final time -- usually \(T_{f}^{*}=1\) -- to set the factor \(D_{\mathrm{ref}}\). The estimations of this scaling parameter will be detailed on a case-by-case basis throughout the applications developed in Sect. 6 and 7. Using the new dimensionless variables \((x^{*},t^{*})\) and the normalized concentration \(C^{*}=C/C_{0}\), along with the definition of \(D_{\mathrm{ref}}\), one can obtain the dimensionless formulation of the reaction-diffusion system in the context of inverse modeling, which leads to:

\[\left\{\begin{array}{ll}\frac{\partial C^{*}}{\partial t^{*}}-D_{m}^{*}\nabla\cdot\left(\varepsilon^{1+\beta}\nabla(\varepsilon^{-1}C^{*})\right)=-\mathrm{Da}_{\mathrm{II}}^{*}C^{*}\mathbb{1}_{\{(1-\varepsilon)>0\}},&\mbox{ in }\quad\Omega^{*}\times(0,T_{f}^{*})\\ \frac{1}{\upsilon C_{0}}\frac{\partial\varepsilon}{\partial t^{*}}=\mathrm{Da}_{\mathrm{II}}^{*}C^{*}\mathbb{1}_{\{(1-\varepsilon)>0\}},&\mbox{ in }\quad\Omega^{*}\times(0,T_{f}^{*})\\ C^{*}=1,&\mbox{ on }\quad\partial\Omega^{*}\times(0,T_{f}^{*})\\ C^{*}(x,0)=C_{\mathrm{init}}^{*},&\mbox{ in }\quad\Omega^{*}\times\{0\}.\end{array}\right. \tag{17}\]

In reactive inverse problems, we thus obtain a PDE model driven by two dimensionless numbers denoted \(\mathrm{Da}_{\mathrm{II}}^{*}\) and \(D_{m}^{*}\), which are defined as:

\[\mathrm{Da}_{\mathrm{II}}^{*}:=K_{s}A_{s}\gamma_{\mathrm{H}^{+}}T=\frac{K_{s}A_{s}\gamma_{\mathrm{H}^{+}}L^{2}}{D_{\mathrm{ref}}}\;\;\mbox{and}\;\;D_{m}^{*}:=\frac{D_{m}T}{L^{2}}=\frac{D_{m}}{D_{\mathrm{ref}}}. \tag{18}\]

Finally, the physical Damkohler number corresponding to the dynamical \(\mu\)CT experiment is recovered as the a-posteriori ratio \(\mathrm{Da}_{\mathrm{II}}=\mathrm{Da}_{\mathrm{II}}^{*}/D_{m}^{*}\). From now on, we consider this dimensionless formalism and drop the star notation on the differential operators, domains, and field descriptions for the sake of readability.
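The scaling logic of equations (16) and (18) reduces to simple arithmetic, sketched below; the rough dissolution-time estimate is an assumption of the illustration:

```python
def inverse_scaling(T_f_physical, L, T_f_star=1.0):
    """Scaling factor D_ref = L^2 / T of equation (16), where the characteristic
    time T comes from a rough estimate of the physical dissolution time and the
    chosen dimensionless final time T_f^* (usually 1)."""
    T = T_f_physical / T_f_star
    return L ** 2 / T

def physical_damkohler(Da_II_star, D_m_star):
    """Recover the physical Damkohler number as the a-posteriori ratio
    Da_II = Da_II^* / D_m^* of equation (18)."""
    return Da_II_star / D_m_star

D_ref = inverse_scaling(T_f_physical=3600.0, L=1e-3)   # assumed 1-hour dissolution
print(D_ref, physical_damkohler(Da_II_star=2.0, D_m_star=0.5))
```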
With these conventions, the inverse dimensionless PDE system reads:

\[\left\{\begin{array}{ll}\frac{\partial C}{\partial t}-D_{m}^{*}\nabla\cdot\left(\varepsilon^{1+\beta}\nabla(\varepsilon^{-1}C)\right)=-\mathrm{Da}_{\mathrm{II}}^{*}C\mathbb{1}_{\{(1-\varepsilon)>0\}},&\mbox{ in }\quad\Omega\times(0,T_{f})\\ \frac{1}{\upsilon C_{0}}\frac{\partial\varepsilon}{\partial t}=\mathrm{Da}_{\mathrm{II}}^{*}C\mathbb{1}_{\{(1-\varepsilon)>0\}},&\mbox{ in }\quad\Omega\times(0,T_{f})\\ C=1,&\mbox{ on }\quad\partial\Omega\times(0,T_{f})\\ C(x,0)=C_{\mathrm{init}},&\mbox{ in }\quad\Omega\times\{0\}\end{array}\right. \tag{19}\]

with \(\mathrm{Da}_{\mathrm{II}}^{*}\) and \(D_{m}^{*}\) the inverse parameters to estimate, and \(C_{0}\) and \(\upsilon\) constant parameters. The tortuosity index \(\beta\) is either set through an a-priori estimation on the porous sample, modeled through the empirical Archie's law, or regarded as an additional inverse parameter. In particular, the index \(\beta=1\) is often considered for porous media with strong pore connections [68, 64], although practical upscaling of the diffusion can result in intermediate index values [27]. In addition to inferring the reactivity parameters \(\mathrm{Da}_{\mathrm{II}}^{*}\) and \(D_{m}^{*}\), we aim to estimate the spatial variability of the porosity field \(\varepsilon\). In this sense, we develop a data assimilation approach on pore-scale imaging that combines dynamical \(\mu\)CT experiments of calcite dissolution and the physical regularization induced by the dimensionless PDE model (19). It benefits from the joint ability to quantify the ranges of mineral reactivity and the residual micro-porosity generated by unresolved features in the microtomography imaging process. This formulation also relevantly combines the advantages of experimental and modeling approaches and overcomes their respective limitations. On the one hand, the observation of the dissolution process will bring insights into the unresolved morphological features and lead to a better characterization of the sample's initial state. On the other hand, the PDE model regularization can efficiently substitute for the differential imaging approach, which is controversial for fast dynamical processes subject to poor imaging quality. Therefore, one can address mineral reactivity inference for highly noisy dynamical \(\mu\)CT resulting from the compromise between scan quality and time resolution, as introduced in Sect. 2.3. The major challenge of this data assimilation formulation, though, lies in the PDE constraint on the concentration field, as the \(\mu\)CT experiments do not provide information on the flow, transport, or diffusion of the chemical reactant. In the reactive inverse problem, the acid concentration is thus a latent field for which only the dimensionless boundary conditions are known in equation (19), through the normalizing constant \(C_{0}\). In the next section, we will develop the methodology adopted to solve such a dissolution inverse problem, accounting for all the established modeling assumptions.

## 4 Bayesian Physics-Informed Neural Networks in pore-scale imaging: concepts and methods

Developing efficient data assimilation techniques is crucial to performing extensive parameter estimation and uncertainty quantification, and to improving the reliability of direct pore-scale predictions. In particular, inverse problems are often subject to various sources of uncertainty that need to be quantified to ensure trustworthy estimations.
These sources include the approximate model accuracy, whose reliability can be questioned, as well as sparse or noisy data exhibiting measurement variability. Integrating physical principles, such as conservation laws or PDE models, into these inverse problems can nonetheless compensate for the lack of massive or accurate measurements through additional regularization constraints [40]. At the same time, embedding these physical regularizations allows addressing model accuracy in the total uncertainty quantification, especially when misleading a-priori uncertainty is assumed on the physical constraints [55]. Therefore, the combination of physics-based and data-driven methods offers an efficient alternative to overcome the limitations of both purely data-driven and purely modeling approaches. This has established data-driven inference as a complementary partner to theory-driven models in data assimilation and inverse modeling incorporating uncertainty quantification.

### 4.1 Uncertainty Quantification in coupled physics-based and data-driven inverse problems

Several approaches were developed to address uncertainty concerns in the context of data assimilation. These uncertainty quantification problems either require stochastic PDE models [13, 28] -- also used in sensitivity analysis -- or probabilistic approaches such as Markov Chain Monte Carlo (MCMC) methods. The latter can be used in the Bayesian inference framework to sample from a target posterior distribution, though this usually requires numerous evaluations of the forward PDE model. In this sense, developing efficient MCMC methodologies remains challenging, since repeatedly solving a complex coupled PDE system is computationally expensive and can therefore quickly become prohibitive for uncertainty assessments. These computational concerns have motivated the emergence of surrogate models in Bayesian inference to speed up the forward model evaluation. This covers methods ranging from Polynomial Chaos Expansions [38, 74], which rely on a representation of the physical model by a series of low-order polynomials of random variables, to neural network proxies [75, 4]. Both approaches present the advantage of creating a surrogate model that can be evaluated inexpensively compared to solving the forward problem through usual direct numerical simulations. Nonetheless, Polynomial Chaos Expansions suffer from truncation errors due to the low order of the polynomials, yielding inaccurate estimates of the posterior distributions [36]. On the contrary, deep learning methods have shown effectiveness in building surrogate models for a wide range of complex and non-linear PDEs, encoding the underlying physical principles. Developing fast surrogate models based on machine learning has garnered increasing interest in accelerating Bayesian inference for a wide range of scientific applications [23, 21].

A popular deep learning framework integrating physics regularization, measurement data, and uncertainty estimates is Bayesian Physics-Informed Neural Networks (BPINNs) [76, 35]. BPINNs benefit from the combined advantages of neural network structures in building parameterized surrogate models based on physical principles, and of Bayesian inference standards in integrating uncertainty quantification.
Introducing the Bayesian neural network parameters \(\theta\in\mathbb{R}^{d}\) building the surrogate model and the inverse parameters of the PDE model \(\mathcal{P}_{\mathrm{inv}}\in\mathbb{R}^{p}\), we define the joint set of unknown parameters as \(\Theta=\{\theta,\mathcal{P}_{\mathrm{inv}}\}\). The BPINN formulation aims to explore the posterior distribution of \(\Theta\)

\[P(\Theta|\mathcal{D},\mathcal{M})\propto P(\mathcal{D}|\Theta)P(\mathcal{M}|\Theta)P(\Theta) \tag{20}\]

given some measurement data \(\mathcal{D}\) and a presumed model \(\mathcal{M}\) with unknown parameters. The posterior distribution expression (20) basically involves a likelihood term \(P(\mathcal{D}|\Theta)\) evaluating the distance to the experimental data, a PDE-likelihood term \(P(\mathcal{M}|\Theta)\) characterizing the potential modeling discrepancies, and a joint prior distribution \(P(\Theta)\). Through a marginalization process, the posterior distribution (20) on the parameters \(\Theta\) then transfers into a posterior distribution of the predictions, also called a predictive Bayesian Model Average (BMA) distribution (_e.g._ see [72]):

\[P(y|x,\mathcal{D},\mathcal{M})=\int P(y|x,\Theta)P(\Theta|\mathcal{D},\mathcal{M})\mathrm{d}\Theta \tag{21}\]

where \(x\) and \(y\) respectively refer to the input (_e.g._ spatial and temporal points) and output (_e.g._ field prediction of the micro-porosity) of the neural network. The BPINN formulation hence provides a predictive distribution (21) of the quantities of interest (QoI), such as the output micro-porosity field, as well as posterior distributions over the model inverse parameters \(\mathcal{P}_{\mathrm{inv}}\). Sampling from the posterior distribution (20) is achieved through MCMC methods, which combine efficiently with fast surrogate models based on deep learning. In particular, one of the most popular MCMC schemes for BPINNs is Hamiltonian Monte Carlo (HMC), which provides a particularly efficient sampler for high-dimensional inference problems [15]. In addition to theoretical analyses, the HMC-BPINN formulation has also demonstrated numerical performance on both forward and inverse problems [76]. BPINNs with the HMC sampler appear as an efficient data assimilation alternative, coupling physics-based and data-driven approaches and incorporating intrinsic uncertainty quantification.

The HMC sampler, in particular, introduces the dynamics of a fictive system composed of the unknown parameters \(\Theta\) -- regarded as particle positions in the physical analogy -- and auxiliary momentum variables \(r\) -- regarded as particle velocities. It describes a conservative Hamiltonian system whose energy, denoted \(H(\Theta,r)\), is the sum of a potential energy \(U(\Theta)\), which characterizes the inverse problem formulation, and a kinetic energy \(K(r)\) accounting for momentum perturbations. The latter enables the sampler to diffuse across several energy levels and hence results in an efficient exploration of the joint posterior distribution \(\pi(\Theta,r)\) in the phase space, defined as follows:

\[\pi(\Theta,r)\sim\mathrm{e}^{-H(\Theta,r)}. \tag{22}\]

The potential energy definition relies on a Bayesian probabilistic formulation of the inverse problem, such that it depends on the posterior distribution (20) through the relation \(U(\Theta)=-\mathrm{ln}\,P(\Theta|\mathcal{D},\mathcal{M})\).
Along with a Euclidean-Gaussian assumption for the kinetic energy (_e.g._ see [15] or [53]), this ensures that the marginal distribution of \(\Theta\) provides immediate samples of the target posterior distribution:

\[P(\Theta|\mathcal{D},\mathcal{M})\sim\mathrm{e}^{-U(\Theta)}. \tag{23}\]

Efficient exploration of the joint distribution \(\pi(\Theta,r)\) in the phase space hence projects to samples of the target distribution (20), and then provides predictive BMA distributions on the QoI given by equation (21). The successive samples \((\Theta,r)\) are generated by solving the Hamiltonian dynamical system for the frictionless fictive particle of position \(\Theta\)

\[\left\{\begin{array}{l}\mathrm{d}\Theta=\mathbf{M}^{-1}r\,\mathrm{d}t\\ \mathrm{d}r=-\nabla U(\Theta)\,\mathrm{d}t,\end{array}\right. \tag{24}\]

through a symplectic integrator, such as the Störmer-Verlet scheme, also known as the leapfrog method. This accounts for a deterministic exploration of specific energy level sets -- since the Hamiltonian energy is theoretically preserved by symplectic integrators -- while the kinetic energy, through momentum sampling, enables a stochastic exploration between the energy levels. The HMC-BPINN formulation ensures efficient sampling of the target posterior distribution thanks to the description of a conservative Hamiltonian system related to the inverse problem formulation.

Overall, the potential energy term can be expressed in the general form (see [53] for a detailed development of this weighted multi-potential energy):

\[U(\Theta)=\sum_{k=0}^{K}\lambda_{k}\mathcal{L}_{k}(\Theta)+\lambda_{K+1}\|\Theta\|^{2} \tag{25}\]

where \(L_{k}=\lambda_{k}\mathcal{L}_{k}\) refers to the weighted \(k^{th}\) objective term, corresponding either to a data-fitting log-likelihood or to a PDE regularization task. We assume here that the prior distribution on the set of parameters \(\Theta\) follows a Gaussian distribution such that \(P(\Theta)\sim\mathcal{N}(0,I_{p+d})\). The weights \(\lambda_{k}\) are positive parameters integrating the various sources of uncertainty. Indeed, the deterministic PDE model is completed by stochastic representations of the model discrepancy, and the data-fitting likelihood is itself supplemented by stochastic modeling of the experimental noise, both affecting the weights \(\lambda_{k}\). In this sense, an HMC-BPINN intends to capture and estimate the various sources of uncertainty, whether aleatoric -- arising from variability or randomness in the observations, like sensor noise -- or epistemic -- caused by imperfect modeling hypotheses or ignorance of the model adequacy. Automatic management of these uncertainties, though, remains challenging, as it relies on the appropriate setting of the critical weighting parameters \(\lambda_{k}\) arising from the expression of the multi-potential energy (25). Although some of these parameters -- mainly the noise estimation -- can be adjusted _offline_ with pre-trained Generative Adversarial Networks (GAN), as proposed by Psaros et al. in [55], proper estimation of these weights is crucial to ensure robust uncertainty quantification. Unsuitable choices of these weights can lead to biased predictions and pathological behaviors of the HMC-BPINN sampler, especially in the context of complex real-world Bayesian inference involving multi-objective, multiscale, and stiffness issues. This calls for the development of data assimilation strategies that robustly address these issues to achieve reliable uncertainty quantification in inverse problems.
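For illustration, a single leapfrog trajectory of the Hamiltonian dynamics (24) can be sketched as follows, here with an identity mass matrix and a toy Gaussian potential (both assumptions of the example):

```python
import numpy as np

def leapfrog(theta, r, grad_U, step, n_steps, M_inv=1.0):
    """Stormer-Verlet (leapfrog) integration of the dynamics (24):
    dTheta = M^{-1} r dt and dr = -grad U(Theta) dt."""
    r = r - 0.5 * step * grad_U(theta)      # initial half momentum kick
    for _ in range(n_steps - 1):
        theta = theta + step * M_inv * r    # full position drift
        r = r - step * grad_U(theta)        # full momentum kick
    theta = theta + step * M_inv * r
    r = r - 0.5 * step * grad_U(theta)      # final half kick
    return theta, r

# toy potential U(Theta) = ||Theta||^2 / 2, i.e. a standard Gaussian target
theta, r = np.zeros(3), np.random.randn(3)
theta, r = leapfrog(theta, r, grad_U=lambda t: t, step=0.1, n_steps=20)
```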
### 4.2 A robust adaptive weighting sampling strategy for complex real-world Bayesian inference

The Bayesian Physics-Informed Neural Network paradigm offers the opportunity to query altogether the confidence in the predictions, the estimations of the inverse parameters, and the model adequacy in inverse problems incorporating uncertainty quantification. Despite their effectiveness, BPINNs can be difficult to use correctly in complex real-world Bayesian inference, as they are prone to a range of pathological behaviors. These instabilities arise from the multi-potential energy term (25) in multi-objective inverse problems, which likely involve conflicting tasks or multiscale issues. In particular, such a multi-potential energy directly translates into a weighted multitask posterior distribution for which achieving successful and unbiased sampling is challenging. Ensuring robust Bayesian inference in this context hinges on properly estimating the distinct task weights. Indeed, unsuitable choices of the weights \(\lambda_{k}\) in (25) can result in biased predictions, vanishing-task behavior, or substantial instabilities in the Hamiltonian conservation. This can even prevent the sampler from identifying the highest posterior probability region, namely the Pareto front neighborhood, corresponding to predictions that correctly balance all the different tasks. While manual calibration of the critical \(\lambda_{k}\) weights is still commonplace [40, 35, 45], robust Bayesian inference strategies should not rely on a-priori hand-tuning or biased calibration of the posterior distribution. Indeed, appropriately setting these parameters is neither easy nor computationally efficient, especially for multi-objective inverse problems arising from real-world data. Developing an alternative that accounts for this multitask consideration becomes crucial to ensure robust sampling when dealing with coupled physics-based and data-driven inference.

We benefit from an efficient BPINN framework developed in our previous work [53], which robustly addresses multitask Bayesian inference problems with potential multiscale effects, stiffness issues, or competing tasks. This strategy relies on an adaptive and automatic weighting of the target posterior distribution based on an Inverse Dirichlet control of the weights \(\lambda_{k}\) [37], which leverages the gradient variance information of the different tasks:

\[\lambda_{k}=\left(\frac{\gamma^{2}}{\mathrm{Var}\{\nabla_{\Theta}\mathcal{L}_{k}\}}\right)^{1/2},\quad\text{with}\quad\gamma^{2}:=\min_{t=0..K}(\mathrm{Var}\{\nabla_{\Theta}\mathcal{L}_{t}\}),\quad\forall k=0,...,K. \tag{26}\]

This results in an alternative sampler called Adaptively Weighted Hamiltonian Monte Carlo (AW-HMC), which concentrates its sampling on the Pareto front exploration after the adaptive procedure (see [53] for the detailed methodological development). In this sense, the AW-HMC sampler avoids imbalanced conditions between the different tasks. It also benefits from enhanced convergence and stability compared to conventional samplers, such as HMC or NUTS [26], and reduces sampling bias by avoiding manual tuning of the critical weighting parameters. This alternative has demonstrated efficiency in managing the scaling sensitivity of the different terms, whether to noise distributions (homo- or heteroscedastic) or to multiscale issues. In fact, the adjusted weights bring information on the distinct task uncertainties.
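The weight adjustment (26) is straightforward to express in code. A minimal sketch follows, with randomly scaled toy gradients standing in for the per-task gradients \(\nabla_{\Theta}\mathcal{L}_{k}\):

```python
import numpy as np

def inverse_dirichlet_weights(task_grads):
    """Adaptive weights of equation (26): each task is scaled by the square root
    of gamma^2 / Var(grad), where gamma^2 is the smallest gradient variance among
    all tasks, so the task with the noisiest gradient gets the smallest weight."""
    variances = np.array([np.var(g) for g in task_grads])
    gamma2 = variances.min()
    return np.sqrt(gamma2 / variances)

# toy gradients of three objective terms w.r.t. Theta, with disparate scales
grads = [np.random.randn(1000) * s for s in (1.0, 10.0, 0.1)]
print(inverse_dirichlet_weights(grads))   # approximately [0.1, 0.01, 1.0]
```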
These adaptively adjusted weights improve the reliability of the noise-related and model-adequacy estimates, as the uncertainties are quantified with minimal a-priori assumptions on their scaling. Our novel sampling strategy has demonstrated strong performance on several levels of complexity. This covers applications of increasing difficulty: data-fitting predictions based on sparse measurements, physics-based data assimilation problems, data assimilation in inverse problems with unknown PDE model parameters, and data assimilation in inverse problems with unknown parameters and latent fields [53]. Taken together, the AW-HMC sampler enhances BPINN robustness and offers a promising alternative as an overall data assimilation strategy, extending its applications to more complex Bayesian inference problems. Indeed, this adaptive weighting sampling presents the ability to effectively address multiscale and multitask inverse problems, to couple UQ with physical priors, and to handle sparse noisy data. It also showed effectiveness in addressing stiff dynamics problems, including latent field reconstruction, and in deriving unbiased uncertainty information from the measurement data. The AW-HMC strategy provides a promising data assimilation framework to address robust and reliable Bayesian inference in multitask inverse problems.

## 5 Data assimilation strategy: sequential reinforcement and operator differentiation

In the present work, we focus on a multitask inverse problem for reactive flows at the pore scale, involving two inverse parameters (\(\mathrm{Da}_{\mathrm{II}}^{*}\) and \(D_{m}^{*}\)) and a latent concentration field \(C\). This novel approach combines dynamical imaging data of calcite dissolution and the physics-based regularization induced by the dimensionless PDE model (17). We build the current data assimilation approach upon the efficient AW-HMC framework for Bayesian Physics-Informed Neural Networks, presented in Sect. 4.2, to quantify both morphological and chemical parameter uncertainties. In this section, we present the data assimilation method developed to handle this pore-scale imaging inverse problem. Our methodology emphasizes the sequential reinforcement of the multi-potential energy and the efficient computation of the heterogeneous diffusion operator arising from Archie's law. This first requires setting up a few dedicated notations.

### 5.1 Domain decomposition and sampling notation setup

Dynamical synthetic or experimental \(\mu\)CT images are available on the overall spatiotemporal domain \(\Omega\times(0,T_{f})\), and provide dissolution observations subject to noise and imaging limitations (see Sect. 2.2 and 2.3): we introduce the two-index set \(\mathfrak{Im}_{i,j}\) of image intensities (dissolution measurements) defined for all the image voxels.
Then, we define a subset of this image for sampling purposes, involving the positions \(\mathcal{D}\) as a subset of \(\overline{\Omega}\times(0,T_{f})\) together with their image intensities \(\mathrm{Im}\), corresponding to \(N_{\text{obs}}\) partial and corrupted training observations:

\[\mathcal{D}=\{(x_{k},t_{k}),\quad k=1...N_{\text{obs}}\}. \tag{27}\]

On this set of discrete points, there exist mappings \(i\) and \(j\) such that the image intensity satisfies

\[\mathrm{Im}_{k}:=\mathfrak{Im}_{i(k),j(k)}=1-\varepsilon(x_{k},t_{k})+\xi(x_{k},t_{k}),\,k=1...N_{\text{obs}}\]

where the noise \(\xi\sim\mathcal{N}(0,\sigma^{2}I)\) has a standard deviation \(\sigma\) that is automatically estimated in the AW-HMC sampler by means of the \(\lambda_{k}\) adjustment in equation (25). This relationship between the microtomography images \(\mathrm{Im}\) and \(\varepsilon\) comes from the correlation between the \(\mu\)CT values and the local attenuation of the material. Indeed, in a greyscale tomographic scan, the minimum signal corresponds to the least attenuating or least dense areas (in \(\Omega_{F}\) where \(\varepsilon=1\)), while the maximum signal refers to the most attenuating areas (in \(\Omega_{S}\) where \(0<\varepsilon_{0}\leqslant\varepsilon<1\)).

Due to the micro-continuum description of the medium based on the two-scale porosity assumption, each distinct region of the domain -- namely \(\Omega_{S}\) and \(\Omega_{F}\) -- is nonetheless regarded as a different term in the multi-potential energy definition. Such a distinction is prescribed since there is no guarantee that the data corruption is uniform: the measurement variability can differ locally when facing heteroscedastic noise. In particular, the artifact limitations tend to enhance the blurring effects at the material fluid/solid interface \(\Sigma\). This motivates the special consideration of this interface neighborhood to account for the unresolved features and provide reliable morphological uncertainties. In this sense, we introduce the Reactive Area of Interest (RAI) as the evolving fluid/mineral interface along the dissolution process, defined by:

\[\text{RAI}=\left\{(x_{k},t_{k})\in\Omega\times(0,T_{f})\quad\text{such that}\quad\mathfrak{Im}_{i(k)+1,j(k)}<\mathfrak{Im}_{i(k),j(k)}\quad\text{and}\quad 0.1<\mathrm{Im}_{k}<0.9\right\} \tag{28}\]

where the imaging extreme values are ignored, as a correction criterion, to avoid integrating noise-derivation artifacts into the definition of the RAI. These thresholds rely on the analysis of the \(\mu\)CT histogram of the initial porous medium dataset. We also define:

* The extended Reactive Area of Interest, denoted RAI\({}^{+}\), as the RAI augmented by a fluid tubular neighborhood of the RAI both in space and time, that is to say, the fluid region close to the evolving interface in \(\Omega\times(0,T_{f})\),
* The reduced Reactive Area of Interest, denoted RAI\({}^{-}\), based on the acceptability criterion -- positivity of the \(D_{m}^{*}\) estimations -- defined thereafter and related to the relations (34)-(35).

Moreover, we introduce several discrete domains \(\mathcal{D}^{\bullet}\) defined as the intersection between the non-reactive part of \(\mathcal{D}\) and their respective time-dependent regions: fluid, solid, reactive, or boundary (a sketch of the RAI extraction is given below).
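In practice, the RAI mask (28) can be extracted directly from the stack of dynamical images. The following sketch assumes that the first index of \(\mathfrak{Im}\) is temporal (consecutive scans) and the second enumerates voxels, and reuses the thresholds of (28):

```python
import numpy as np

def reactive_area_of_interest(Im, lo=0.1, hi=0.9):
    """Boolean RAI mask from equation (28): voxels whose intensity decreases
    between consecutive scans (calcite retreat) while the greyscale value stays
    away from the extremes, filtering out noise-derivation artifacts.
    Im is assumed indexed as Im[time, voxel]."""
    retreat = Im[1:] < Im[:-1]               # Im_{i+1, j} < Im_{i, j}
    band = (Im[:-1] > lo) & (Im[:-1] < hi)   # interface greyscale band
    return retreat & band

Im = np.random.rand(5, 128)                  # synthetic dynamical scans
rai = reactive_area_of_interest(Im)
print(rai.shape, rai.mean())                 # (4, 128) mask over consecutive pairs
```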
For instance, one gets \(\mathcal{D}^{F}=(\mathcal{D}\cap Q_{F})\smallsetminus\text{RAI}\) with \(Q_{F}\) the evolving fluid region defined as \[Q_{F}=\left\{(x,t)\in\Omega\times(0,T_{f})\quad\text{such that}\quad x\in\Omega_{F}(t)\right\}. \tag{29}\] In the same way, \(\mathcal{D}^{S}=(\mathcal{D}\cap Q_{S})\smallsetminus\text{RAI}\) where \(Q_{S}\) is the evolving solid region, and \(\mathcal{D}^{\partial}=\mathcal{D}\cap\{\partial\Omega\times(0,T_{f})\}\). The overall domain \(\Omega\times(0,T_{f})\) is thus decomposed into several regions and respective training datasets that are involved in the sequential reinforcement of the multi-potential energy. Finally, these different domains satisfy the following properties:

* RAI\({}^{-}\subset\text{RAI}\subset\text{RAI}^{+}\),
* \(\Omega_{F}\cup\Omega_{S}=\Omega\), where \(\Omega_{F}=\{x\in\Omega:\varepsilon(x)=1\}\) is an open set,
* \(\mathcal{D}=\mathcal{D}^{F}\cup\mathcal{D}^{S}\cup\mathcal{D}^{\partial}\cup\text{RAI}\).

### 5.2 Sequential reinforcement of the multi-potential energy

The data assimilation strategy developed in the present article relies on a sequential design of the multi-potential energy \(U(\Theta)\), which is reinforced to incorporate additional constraints through dedicated sampling steps. This sequential splitting is necessary due to the strong coupling between the porosity field \(\varepsilon\), related to the \(\mu\)CT imaging process, the latent concentration field \(C\), and the two unknown inverse parameters \(\mathrm{Da}_{\mathrm{II}}^{*}\) and \(D_{m}^{*}\). The first sampling step is, therefore, dedicated to providing a-priori estimations of the micro-porosity field through data-fitting terms only. The sequence of tasks is then decomposed into three steps, from the one for which we have the most information to the set of tasks involving all the aspects and constraints we need to consider, as displayed in Fig 2 and developed thereafter:

* Step 1: Preconditioning on the micro-porosity by pure regression on image data,
* Step 2: Preconditioning of the latent reactive fluid with an additional PDE constraint,
* Step 3: Overall data assimilation potential with the full reactive model.

#### 5.2.1 Step 1: Preconditioning by pure regression on image data

The first sampling step of the sequential splitting strategy aims to provide a preconditioning description of the surrogate micro-porosity field \(\varepsilon_{\Theta}\). We consider a task differentiation between \(\mathcal{D}^{S}\) and \(\mathrm{RAI}^{+}\) and hence discard from the training the fluid measurements that are far from the mineral interface -- as they are of no interest, in this first step, for characterizing the morphological uncertainty on \(\varepsilon\). The resulting potential energy term reads: \[U_{1}(\Theta)=\frac{\lambda_{0}}{2\sigma_{0}^{2}}\,\|1-\varepsilon_{\Theta}-\mathrm{Im}\|_{\mathcal{D}^{S}}^{2}+\frac{\lambda_{1}}{2\sigma_{1}^{2}}\,\|1-\varepsilon_{\Theta}-\mathrm{Im}\|_{\mathrm{RAI}^{+}}^{2}+\frac{1}{2\sigma_{\Theta}^{2}}\|\Theta\|^{2} \tag{30}\] where the \(\sigma_{k}\) are unknown standard deviations characterizing the noise distributions on their respective areas, and we assume a prior distribution on \(\Theta\) given by \(P(\Theta)\sim\mathcal{N}(0,\sigma_{\Theta}^{2}I_{p+d})\). The notation \(\|\cdot\|\) refers either to the RMS (root mean square) norm -- inherited from the functional \(\mathbb{L}^{2}\)-norm -- for the first two log-likelihood terms, or to the usual Euclidean norm for the last log-prior term.
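To make the structure of this first potential concrete, a minimal sketch of \(U_{1}(\Theta)\) from equation (30) is given below, assuming PyTorch tensors for the masked datasets. All names (`eps_net`, `xt_solid`, ...) are illustrative rather than part of the actual implementation, and the weights \(\lambda_{k}\) appear as fixed placeholders although, as discussed next, they are adaptively adjusted by the AW-HMC sampler.

```python
# Minimal sketch of the step-1 potential energy U_1 from equation (30).
# Hypothetical names: `eps_net` is the micro-porosity surrogate network,
# (xt_*, im_*) are the space-time points and image intensities extracted
# on D^S and RAI^+. The RMS norms become means of squared residuals.
import torch

def potential_U1(eps_net, xt_solid, im_solid, xt_rai, im_rai,
                 lambdas=(1.0, 1.0), sigma=1.0, sigma_theta=10.0):
    lam0, lam1 = lambdas
    # Data-fitting residuals 1 - eps_Theta - Im on each sub-domain.
    res_solid = 1.0 - eps_net(xt_solid).squeeze(-1) - im_solid
    res_rai = 1.0 - eps_net(xt_rai).squeeze(-1) - im_rai
    u = lam0 / (2.0 * sigma**2) * res_solid.pow(2).mean() \
      + lam1 / (2.0 * sigma**2) * res_rai.pow(2).mean()
    # Gaussian log-prior on the network parameters, Theta ~ N(0, sigma_theta^2 I).
    prior = sum(p.pow(2).sum() for p in eps_net.parameters())
    return u + prior / (2.0 * sigma_theta**2)
```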
In practice, we do not rely on a-priori manual calibration of the noise magnitudes -- all the \(\sigma_{k}\) are set to be equal -- but rather use the AW-HMC sampler to automatically and adaptively estimate these uncertainties through adjustments of the \(\lambda_{k}\). This is especially meaningful in the neighborhood of the evolving fluid/solid interface (corresponding to \(\mathrm{RAI}^{+}\)), to study edge-enhancement implications, and in the pure solid region, to quantify the unresolved features. At the end of step 1, one gets a first a-priori estimation of the field \(\varepsilon\), which presents the advantage of being denoised compared to the \(\mu\)CT images and hence is more suitable to differentiate. In this sense, we now have access to the time derivative of the surrogate porosity \(\varepsilon_{\Theta}\), which is subsequently used to provide some preconditioning of the latent concentration field \(C\).

#### 5.2.2 Step 2: Preconditioning of the latent reactive fluid

The second sampling step relies on the first insight into the Bayesian neural network parameters obtained through step 1. We hence restart an adaptive weighting procedure with the AW-HMC sampler by adding constraints arising from the PDE model (19). As the acid concentration is a latent unknown field in our reactive inverse problem, we benefit from this second sampling step to provide a surrogate estimation \(C_{\Theta}\) of this field and identify a first reactive parameter, namely \(\mathrm{Da}_{\mathrm{II}}^{*}\). In this sense, we impose a physics-based regularization linking the porosity derivative to the surrogate concentration field through the PDE equation: \[\frac{1}{\upsilon C_{0}}\frac{\partial\varepsilon_{\Theta}}{\partial t}-\mathrm{Da}_{\mathrm{II}}^{*}C_{\Theta}\mathbb{1}_{\{(1-\varepsilon)>0\}}=0 \tag{31}\] where the calcite molar volume \(\upsilon\) and \(C_{0}\) are constant parameters -- we assume the concentration \(C_{0}\) of continuous acid injection, defining the Dirichlet boundary conditions on \(C\), to be known.

Figure 2: **Sequential graph of the potential energy reinforcement:** our data assimilation strategy incorporates additional physics-based constraints arising from the PDE model (19) through successive sampling steps. The notations are defined in Sect. 5.2.

In a direct formulation, equation (31) is imposed over the whole domain \(\Omega\times(0,T_{f})\), though in inverse modeling the PDE constraint mainly brings meaningful information on the RAI. Indeed, we have \[\frac{\partial\varepsilon_{\Theta}}{\partial t}\simeq-\frac{\partial\mathrm{Im}}{\partial t}>0\] on the reactive area of interest, which is useful to characterize the reaction regime through the \(\mathrm{Da}_{\mathrm{II}}^{*}\) dimensionless number. In the pure solid region where \(\frac{\partial\varepsilon_{\Theta}}{\partial t}=0\), the PDE constraint (31) translates into low acid penetration in \(Q_{S}\), which we impose through the condition \(C_{\Theta}=c_{0}=10^{-7}\). In the fluid region \(Q_{F}\), the latent acid concentration field is a solution of the following heat equation: \[\frac{\partial C_{\Theta}}{\partial t}-D_{m}^{*}\Delta C_{\Theta}=0 \tag{32}\] with initial conditions, unknown in the inverse formulation, and non-homogeneous Dirichlet boundary conditions (see the dimensionless PDE model (19) from Sect. 3.3). Following the modeling assumptions of Sect. 3.2,
especially on the diffusion-dominated regime with \(\mathrm{Pe}\ll 1\), we thus assume as a first approximation that the surrogate acid concentration field \(C_{\Theta}\) is driven by the quasi-stationary Poisson equation \(\Delta C_{\Theta}=0\) in \(Q_{F}\). This behaves as a continuity extension of the surrogate concentration field from the domain boundary to the evolving mineral interface defined by the RAI. Along with the PDE equation (31) in the RAI, this defines the augmented multi-potential energy for the second sampling step: \[\begin{split} U(\Theta)&=U_{1}(\Theta)+\frac{\lambda_{2}}{2\sigma_{2}^{2}}\left\|(\upsilon C_{0})^{-1}\alpha_{\Theta}\frac{\partial\varepsilon_{\Theta}}{\partial t}-C_{\Theta}\right\|_{\text{RAI}}^{2}+\frac{\lambda_{3}}{2\sigma_{3}^{2}}\left\|\Delta C_{\Theta}\right\|_{\mathcal{D}^{F}}^{2}\\ &+\frac{\lambda_{4}}{2\sigma_{4}^{2}}\left(\left\|1-C_{\Theta}\right\|_{\mathcal{D}^{\partial}}^{2}+\left\|c_{0}-C_{\Theta}\right\|_{\mathcal{D}^{S}}^{2}\right)+\frac{1}{2\sigma_{\Theta}^{2}}\left\|\Theta\right\|^{2}\\ &:=U_{1}(\Theta)+U_{2}(\Theta)\end{split} \tag{33}\] where the constant constraints on the boundary \(\mathcal{D}^{\partial}\) and solid \(\mathcal{D}^{S}\) datasets are gathered as a single term. The notation \(\alpha_{\Theta}\) here refers to the first inverse parameter effectively sampled, with \(\alpha_{\Theta}:=(\mathrm{Da}_{\mathrm{II}}^{*})^{-1}\). This second step reinforces the sampling of the surrogate micro-porosity \(\varepsilon_{\Theta}\) by providing insight into the latent concentration field \(C_{\Theta}\) on the RAI and a posterior distribution on the inverse parameter \(\mathrm{Da}_{\mathrm{II}}^{*}\).

#### 5.2.3 Step 3: Overall data assimilation potential with full reactive model

Finally, the third sampling step addresses the overall reactive inverse problem, to refine the micro-porosity and acid concentration predictions accounting for the fully coupled PDE model (19), and to provide uncertainty quantification on the inverse parameters \(\mathrm{Da}_{\mathrm{II}}^{*}\) and \(D_{m}^{*}\). The extension by continuity of the acid concentration -- from the Poisson equation in \(Q_{F}\) -- is replaced by its corresponding heat equation term (32) (see equation (36) below). We also use the diffusion-reaction PDE coupling \(\varepsilon_{\Theta}\) and \(C_{\Theta}\) to infer the dimensionless number \(D_{m}^{*}\): \[\frac{\partial C_{\Theta}}{\partial t}-D_{m}^{*}\nabla\cdot\left(\varepsilon_{\Theta}^{1+\beta}\nabla(\varepsilon_{\Theta}^{-1}C_{\Theta})\right)+\mathrm{Da}_{\mathrm{II}}^{*}C_{\Theta}=\frac{\partial C_{\Theta}}{\partial t}-D_{m}^{*}\nabla\cdot\left(\varepsilon_{\Theta}^{1+\beta}\nabla(\varepsilon_{\Theta}^{-1}C_{\Theta})\right)+\frac{1}{\upsilon C_{0}}\frac{\partial\varepsilon_{\Theta}}{\partial t}=0 \tag{34}\] which is theoretically valid on the whole RAI for the inverse modeling. Nonetheless, the heterogeneous diffusion term \(\mathcal{D}_{i}(\varepsilon,C):=\nabla\cdot\left(\varepsilon^{1+\beta}\nabla(\varepsilon^{-1}C)\right)\) arising from Archie's law becomes highly sensitive at the mineral boundary due to jumps in the porosity derivatives at the interface. This may disrupt the identification of the inverse parameter \(D_{m}^{*}\). The PDE constraint (34) therefore needs to be imposed on a reduced neighborhood of the reactive area of interest, namely the RAI\({}^{-}\) domain. This restricted RAI is then defined by the eligible points of the RAI domain where \(D_{m}^{*}\) is predicted positive.
From the overall samples of step 2, we compute the predictive BMA distributions of the two operators \[\frac{\partial C_{\Theta}}{\partial t}+\frac{1}{\upsilon C_{0}}\frac{\partial\varepsilon_{\Theta}}{\partial t}\quad\text{and}\quad\mathcal{D}_{i}(\varepsilon_{\Theta},C_{\Theta}) \tag{35}\] on the domain \(\Omega\times(0,T_{f})\) and then estimate \(D_{m}^{*}\) through equation (34) to define the \(\text{RAI}^{-}\) domain. From this procedure, one also gets an estimate of the posterior distribution of \(D_{m}^{*}\) after sampling step 2, which is regarded as an initial a-priori on this inverse parameter in step 3. This will be further detailed in the applications (see Sect. 6 and 7). Taken together, the fully reinforced multi-potential energy for the third sampling step reads: \[\begin{split} U(\Theta)&=U_{1}(\Theta)+\frac{\lambda_{2}}{2\sigma_{2}^{2}}\left\|(\upsilon C_{0})^{-1}\alpha_{\Theta}\frac{\partial\varepsilon_{\Theta}}{\partial t}-C_{\Theta}\right\|_{\text{RAI}}^{2}+\frac{\lambda_{3}}{2\sigma_{3}^{2}}\left\|\gamma_{\Theta}\frac{\partial C_{\Theta}}{\partial t}-\Delta C_{\Theta}\right\|_{\mathcal{D}^{F}}^{2}\\ &+\frac{\lambda_{4}}{2\sigma_{4}^{2}}\left(\|1-C_{\Theta}\|_{\mathcal{D}^{\partial}}^{2}+\|c_{0}-C_{\Theta}\|_{\mathcal{D}^{S}}^{2}\right)\\ &+\frac{\lambda_{5}}{2\sigma_{5}^{2}}\left\|\gamma_{\Theta}\left(\frac{\partial C_{\Theta}}{\partial t}+(\upsilon C_{0})^{-1}\frac{\partial\varepsilon_{\Theta}}{\partial t}\right)-\nabla\cdot\left(\varepsilon_{\Theta}^{1+\beta}\nabla(\varepsilon_{\Theta}^{-1}C_{\Theta})\right)\right\|_{\text{RAI}^{-}}^{2}+\frac{1}{2\sigma_{\Theta}^{2}}\|\Theta\|^{2}\\ &:=U_{1}(\Theta)+\widetilde{U_{2}}(\Theta)+U_{3}(\Theta)\end{split} \tag{36}\] where \(\gamma_{\Theta}:=(D_{m}^{*})^{-1}\), such that the set of inverse parameters we infer in practice is \((\mathcal{P}_{\text{inv}})_{\Theta}=\{\alpha_{\Theta},\gamma_{\Theta}\}\). The data assimilation strategy developed in the present article incorporates successive physics-based constraints using a sequential reinforcement of the multi-potential energy \(U(\Theta)\). This is achieved by splitting the sampling steps, which is required by the strong coupling of the overall PDE system (19) involving a latent field and unknown parameters. This overall algorithm is summarized in Fig 2.

### 5.3 Computational strategy for differential operator expression

This section is dedicated to the development of a differentiation strategy for the efficient computation of the heterogeneous diffusion \(\mathcal{D}_{i}(\varepsilon,C)\) arising from Archie's law. Indeed, the third sampling step in the sequential reinforcement of the multi-potential energy (see Sect. 5.2) involves the computation of this diffusion operator through a neural network surrogate model. This implies the use of automatic differentiation (AD), which is a prevalent technique in deep-learning frameworks such as Physics-Informed Neural Networks (PINNs) and Bayesian Physics-Informed Neural Networks (BPINNs). Such automatic differentiation relies on gradient backpropagation to compute the derivatives of the neural network functional outputs with respect to its inputs, using the chain rule principle. AD is thus a fast computational technique when it comes to the evaluation of first and second-order derivatives of the output fields, namely the spatial gradient and Laplacian operators, and the temporal partial derivatives.
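As an illustration of such AD-based evaluations, the following minimal sketch computes the spatial gradient and Laplacian of a scalar network output with PyTorch; the function name and the tensor layout (space-time points stored row-wise, with time in the last column) are assumptions made for the example.

```python
# Minimal sketch: spatial gradient and Laplacian of a scalar network output
# via automatic differentiation (PyTorch). Layout assumption: xt has shape
# (N, d+1), with the d spatial coordinates first and time in the last column.
import torch

def grad_and_laplacian(net, xt):
    xt = xt.clone().requires_grad_(True)
    u = net(xt).squeeze(-1)                       # scalar field at each point
    (du,) = torch.autograd.grad(u.sum(), xt, create_graph=True)
    grad_x = du[:, :-1]                           # spatial gradient only
    lap = torch.zeros_like(u)
    for i in range(grad_x.shape[1]):              # one second derivative per dim
        (d2,) = torch.autograd.grad(grad_x[:, i].sum(), xt, create_graph=True)
        lap = lap + d2[:, i]
    return u, grad_x, lap
```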
More complex non-linear operators resulting from two successive differentiations of non-trivial functional compositions -- as is the case for the \(\mathcal{D}_{i}(\varepsilon,C)\) operator -- can, however, readily lead to high computational costs. This observation leads to reconsidering the heterogeneous diffusion term as a succession of sums and products of first and second-order operators. Consequently, we consider the diffusion operator \(\mathcal{D}_{i}(\varepsilon,C)\) under two formulations: its compact form (37a) and its developed form (37c), reading \[\mathcal{D}_{i}(\varepsilon,C) =\nabla\cdot\left(\varepsilon^{\beta+1}\nabla(\varepsilon^{-1}C)\right) \tag{37a}\] \[=\nabla\cdot\left(\varepsilon^{\beta}\nabla C-\varepsilon^{\beta-1}C\nabla\varepsilon\right)=\nabla\cdot\left(\varepsilon^{\beta-1}(\varepsilon\nabla C-C\nabla\varepsilon)\right)\] (37b) \[=\varepsilon^{\beta-1}\left(\varepsilon\Delta C-C\Delta\varepsilon\right)+(\beta-1)\varepsilon^{\beta-1}\,\nabla\varepsilon\cdot\nabla C-(\beta-1)\varepsilon^{\beta-2}\,C\,\nabla\varepsilon\cdot\nabla\varepsilon \tag{37c}\] Then we replace the expression of the diffusion in the multi-potential energy (36) with the novel operator formulation (37c). This makes it possible to reduce the computational cost of evaluating this diffusion operator merely through the automatic differentiation of the following terms: \(\nabla\varepsilon_{\Theta}\), \(\nabla C_{\Theta}\), \(\Delta\varepsilon_{\Theta}\), and \(\Delta C_{\Theta}\). Finally, we observe on the developed expression (37c) that the case \(\beta=1\) even results in a more straightforward expression of Archie's law, which then reads \(\mathcal{D}_{i}(\varepsilon,C)=\varepsilon\Delta C-C\Delta\varepsilon\). This is particularly convenient as the tortuosity index \(\beta=1\) can be regarded as an approximation of the effective diffusivity in pore-scale models (_e.g._ see [64]). Furthermore, this reduced expression confirms the high sensitivity of the heterogeneous diffusion term at the mineral boundary \(\Sigma\), due to the micro-porosity Laplacian involved in Archie's law. Considering suitable differential operator expressions can thus enhance the surrogate model efficiency by reducing the automatic differentiation cost. We perform some validations of this first insight through computational time measurements for the different expressions of the heterogeneous diffusion term. We hence compare the original formulation of \(\mathcal{D}_{i}(\varepsilon,C)\) with the developed operator (37c) for the tortuosity coefficients \(\beta=0.5\) -- to consider the most general form -- and \(\beta=1\), which leads to the reduced evaluation of the heterogeneous diffusion. We account for computational times on both CPU and GPU devices and perform several evaluations of the diffusion operator expressions to provide averaged computational times along the sampling procedure of step 3. These results are summarized in Table 1 for the 1D+Time test case (Table 1.a) and the 2D+Time application (Table 1.b), explicitly developed in the validation Sect. 6 and the application Sect. 7, respectively. We define in each case the reference computational cost \(T_{0}\) as the CPU time necessary to evaluate the original operator expression \(\mathcal{D}_{i}(\varepsilon,C)\). This respectively leads to \(T_{0}=37.13\) ms and \(T_{0}=83.88\) ms for the 1D+Time and 2D+Time applications. We then evaluate the speedup, denoted \(S\), of the distinct operator formulations as \(S=T_{0}/T_{\bullet}\), where \(T_{\bullet}\) are their respective computational times.
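For reference, the developed form (37c) that is timed in this comparison can be assembled from the first- and second-order derivatives alone, as in the sketch below (illustrative code; the derivative tensors are assumed to be precomputed by AD as above).

```python
# Minimal sketch of the developed diffusion operator (37c):
#   D_i(eps, C) = eps^(b-1) (eps lap C - C lap eps)
#               + (b-1) eps^(b-1) grad(eps).grad(C)
#               - (b-1) eps^(b-2) C grad(eps).grad(eps)
# For beta = 1, only the first term survives: eps lap C - C lap eps.
def diffusion_developed(eps, grad_eps, lap_eps, C, grad_C, lap_C, beta=0.5):
    a = eps.pow(beta - 1.0)
    term1 = a * (eps * lap_C - C * lap_eps)
    term2 = (beta - 1.0) * a * (grad_eps * grad_C).sum(dim=-1)
    term3 = (1.0 - beta) * eps.pow(beta - 2.0) * C * (grad_eps * grad_eps).sum(dim=-1)
    return term1 + term2 + term3
```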
This comparison shows an effective improvement of the computational costs, either on CPU or GPU, when considering the developed operator (37c) from equation (37) -- even in its general form for \(\beta\neq 1\) (see the second rows of Tables 1.a and 1.b). This highlights that the configuration that best optimizes the speedup is to use the operator (37c) on GPU devices. The improvement between the general and reduced forms of the operator (37c) is, however, less significant, especially in 2D+Time. This can readily be explained by the fact that most of the computational time is spent evaluating the gradient and Laplacian operators rather than their combination. In this sense, the general form (37c) can be used effectively regardless of the tortuosity index value \(\beta\). Overall, the developed heterogeneous diffusion operator (37c) contributes to reducing the computational cost of its single evaluation.

## 6 Validation on synthetic 1D+Time calcite dissolution

In this article, we develop a novel data assimilation strategy to address reactive inverse problems in pore-scale imaging with uncertainty quantification. This aims to quantify the morphological uncertainty on the micro-porosity field \(\varepsilon\) and to estimate reliable ranges of chemical parameters through dynamical \(\mu\)CT noisy observations augmented with PDE models of dissolution. The present strategy is based on the robust Bayesian framework presented in [53] along with the AW-HMC sampler (see Sect. 4.2) and relies on sequential reinforcement of the multi-potential energy. In this section, we validate the present methodology on inverse problems of calcite dissolution with heterogeneous porosity in artificial 1D spatial configurations. All the \(\mu\)CT measurements that we consider are synthetic observations resulting from direct numerical simulations of reactive flows with noise perturbation, in order to validate our methodology on well-established test cases. The validation test case is a purely synthetic 1D+Time problem, for which we check two configurations with distinct tortuosity indices, \(\beta=1\) and \(\beta=0.5\).

### 6.1 Direct reactive model: problem setup

We consider two heterogeneous samples of 1D synthetic calcite cores whose initial geometries are characterized by the numerical \(\mu\)CT images presented in Fig 3, on a 'physical' spatial domain \(\Omega\) of width \(0.3\) mm. These initial images correspond to normalized greyscale tomographic scans, corrupted with noise that accounts either for sensor noise or for unresolved morphological features. Direct numerical simulations (DNS) of the reactive processes are then performed on these initial geometries to provide synthetic \(\mu\)CT dynamical images of dissolution. These observation data are generated by solving the reaction-diffusion system (15) by means of mesh-based and particle methods -- namely a Backward Euler or midpoint method for the time integration, coupled with a Particle Strength Exchange (PSE) scheme for the heterogeneous diffusion -- on a Cartesian spatiotemporal grid of resolution \(N_{x}=200\) and \(N_{t}=240\). Continuous acid injection is maintained through non-homogeneous Dirichlet boundary conditions on the domain \(\Omega\) to ensure a diffusion-dominated regime. Numerically, we consider a strong acid solution with \(\mathrm{pH}=0\) such that the normalizing constant \(C_{0}\) equals \(1\).
The characteristic length \(L\) of these porous samples is set to \(L=0.1\) mm and the reactive parameters are respectively defined by \(K_{s}=0.8913\,\mathrm{mol}.\mathrm{m}^{-2}.\mathrm{s}^{-1}\), \(D_{m}=10^{-9}\,\mathrm{m}^{2}.\mathrm{s}^{-1}\), and \(\gamma_{\mathrm{H}^{+}}=10^{-3}\,\mathrm{m}^{3}.\mathrm{mol}^{-1}\) -- taken from the benchmark [42]. The reactive specific area \(A_{s}\) is set to \(A_{s}=10^{3}\,\mathrm{m}^{-1}\), and we do not account for the calcite molar volume \(\upsilon\) in these test cases -- such 1D+Time examples are not meant to be physically consistent but rather serve validation purposes. We also consider distinct tortuosity indices, namely \(\beta=1\) and \(\beta=0.5\), on the different geometries to address both the compact form (37a) and the developed form (37c) of the diffusion operator \(\mathcal{D}_{i}(\varepsilon,C)\) in the data assimilation problem. The DNS is performed until the overall calcite core is dissolved, which corresponds to a characteristic final time \(T_{f}=24\) s. Taken together, one gets a sequence of synthetic \(\mu\)CT images \(\mathfrak{Im}_{i,j}\), similar to Fig 3, characterizing the dissolution process of the two calcite cores on the 'physical' spatiotemporal domain \(\Omega\times(0,T_{f})\).

### 6.2 Dimensionless inverse problem and dimensionless numbers

From the setting of these reactive parameters, one identifies the dissolution regime of these test cases, given by the dimensionless catalytic Damköhler number \(\mathrm{Da}_{\mathrm{II}}=8.913\) from equation (13). The inference of this Damköhler number is, however, not straightforward in inverse problems, as developed in Sect. 3.3, and we define the dimensionless time \(t^{*}\) such that its related final time is \(T^{*}_{f}=1\). The dimensionless spatial variable \(x^{*}\) is computed as in the dimensionless formulation of the direct problem using \(x^{*}=x/L\). For the data assimilation, we hence consider the dimensionless domain \(\Omega^{*}\times(0,T^{*}_{f})=[0,3]\times(0,1)\) to extract the observation dataset \[\mathcal{D}=\{(x_{k},t_{k})\in[0,3]\times(0,1),\quad k=1...N_{\text{obs}}\} \tag{38}\] where the number of training points \(N_{\text{obs}}=7725\) represents about 16% of the data required for the full field reconstructions on \(\Omega^{*}\times(0,T^{*}_{f})\). The dataset \(\mathcal{D}\) is divided into the corresponding datasets \(\mathcal{D}^{S}\), \(\mathcal{D}^{F}\), \(\mathcal{D}^{\text{RAI}}\), RAI\({}^{+}\), RAI\({}^{-}\) and \(\mathcal{D}^{\partial}\), which respectively cover around 50%, 4%, 15%, 13%, 10% and 8% of the \(N_{\text{obs}}\) training measurements. One also determines the scaling dimensionless factor \(D_{\mathrm{ref}}\), which appears in the dimensionless inverse formulation (19): \[D_{\mathrm{ref}}=\frac{T_{f}^{*}L^{2}}{T_{f}}=4.16\times 10^{-10}\,\mathrm{m}^{2}.\mathrm{s}^{-1} \tag{39}\] according to the relation (16) and the estimations of \(T_{f}\), \(T_{f}^{*}\) and \(L\). We characterize the dissolution regime in these reactive inverse problems by means of the two dimensionless numbers defined in (18), and one gets: \[\mathrm{Da}_{\mathrm{II}}^{*}=21.3912\quad\text{ and }\quad D_{m}^{*}=2.4 \tag{40}\] which are related to the inverse parameters to infer through \((\mathcal{P}_{\mathrm{inv}})_{\Theta}=\{\alpha_{\Theta},\gamma_{\Theta}\}=\big\{(\mathrm{Da}_{\mathrm{II}}^{*})^{-1},(D_{m}^{*})^{-1}\big\}\).
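These scalings can be reproduced directly from the stated parameters; the short check below recovers the values (39)-(40), assuming -- consistently with the quoted numbers -- that \(D_{m}^{*}=D_{m}/D_{\mathrm{ref}}\) and \(\mathrm{Da}_{\mathrm{II}}^{*}=\mathrm{Da}_{\mathrm{II}}\,D_{m}/D_{\mathrm{ref}}\) in the dimensionless system (18)-(19).

```python
# Numerical check of the dimensionless scaling (39)-(40).
L = 0.1e-3                 # characteristic length [m]
T_f, T_f_star = 24.0, 1.0  # physical and dimensionless final times
D_m = 1e-9                 # molecular diffusion [m^2/s]
Da_II = 8.913              # catalytic Damköhler of the direct problem

D_ref = T_f_star * L**2 / T_f       # ~ 4.16e-10 m^2/s, equation (39)
D_m_star = D_m / D_ref              # ~ 2.4
Da_II_star = Da_II * D_m / D_ref    # ~ 21.3912
print(D_ref, Da_II_star, D_m_star)
```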
For such 1D+Time reactive inverse problems, the overall multi-potential energy finally reads, in the case \(\beta=1\): \[\begin{split} U(\Theta)&=\frac{\lambda_{0}}{2\sigma_{0}^{2}}\left\|1-\varepsilon_{\Theta}-\mathrm{Im}\right\|_{\mathcal{D}^{S}}^{2}+\frac{\lambda_{1}}{2\sigma_{1}^{2}}\left\|1-\varepsilon_{\Theta}-\mathrm{Im}\right\|_{\text{RAI}^{+}}^{2}+\frac{\lambda_{2}}{2\sigma_{2}^{2}}\left\|\alpha_{\Theta}\frac{\partial\varepsilon_{\Theta}}{\partial t}-C_{\Theta}\right\|_{\text{RAI}}^{2}\\ &+\frac{\lambda_{3}}{2\sigma_{3}^{2}}\left\|\gamma_{\Theta}\frac{\partial C_{\Theta}}{\partial t}-\frac{\partial^{2}C_{\Theta}}{\partial x^{2}}\right\|_{\mathcal{D}^{F}}^{2}+\frac{\lambda_{4}}{2\sigma_{4}^{2}}\left(\left\|1-C_{\Theta}\right\|_{\mathcal{D}^{\partial}}^{2}+\left\|10^{-7}-C_{\Theta}\right\|_{\mathcal{D}^{S}}^{2}\right)\\ &+\frac{\lambda_{5}}{2\sigma_{5}^{2}}\left\|\gamma_{\Theta}\left(\frac{\partial C_{\Theta}}{\partial t}+\frac{\partial\varepsilon_{\Theta}}{\partial t}\right)-\left(\varepsilon_{\Theta}\frac{\partial^{2}C_{\Theta}}{\partial x^{2}}-C_{\Theta}\frac{\partial^{2}\varepsilon_{\Theta}}{\partial x^{2}}\right)\right\|_{\text{RAI}^{-}}^{2}+\frac{1}{2\sigma_{\Theta}^{2}}\|\Theta\|^{2}\end{split} \tag{41}\] and is sequentially reinforced, as presented in Sect. 5.2, through three successive sampling steps using the AW-HMC sampler. The hyperparameter settings of the sequential AW-HMC samplers, together with the neural network architectures, are detailed hereafter.

### 6.3 Deep learning configuration

Regarding the deep learning strategy for the overall data assimilation problem, we use two distinct neural network architectures to define the micro-porosity and acid concentration surrogate models. Each surrogate model has, therefore, one single output corresponding to \(\varepsilon_{\Theta}\) and \(C_{\Theta}\), respectively. This is preferred to merging the two outputs into a single neural network architecture to avoid a strong correlation between the output fields.

Figure 3: **Initial \(\mu\)CT images defining the porous sample geometries:** synthetic cases with tortuosity indices a) \(\beta=1\) and b) \(\beta=0.5\). The \(\mu\)CT measurements are normalized, corrupted with noise, and provide the dataset \(\mathrm{Im}\) before the dissolution process. The calcite core regions are identified by the double-headed arrows, and correspond to the maximum intensity in the greyscale tomographic scans displayed below.

Indeed, obtaining surrogate models that are not strongly correlated from a single multiple-output neural network may require numerous hidden layers, which straightforwardly impacts the overall computational cost. On the contrary, using distinct neural networks makes it possible to build independent surrogate models while retaining few hidden layers and, therefore, a reasonable number of neural network parameters. Correlations between the two neural network architectures, and thus between the output fields \(\varepsilon_{\Theta}\) and \(C_{\Theta}\), are achieved merely through the PDE model defining the multi-potential energy (41). This deep learning configuration is more meaningful for such a reactive data assimilation problem since a high correlation -- induced by the neural network architecture -- with the latent field \(C_{\Theta}\) can severely disrupt the micro-porosity recovery. Relying on the PDE model to ensure relevant correlation hence appears as the most appropriate strategy.
The first neural network, establishing the micro-porosity surrogate model \(\varepsilon_{\Theta}\), is composed of 4 hidden layers with 32 neurons per layer and a hyperbolic tangent activation function. The output layer is complemented by a rectified hyperbolic tangent \(\mathrm{Tanh}^{r}(z)=0.5(\mathrm{Tanh}(z)+1)\) to ensure output values between 0 and 1. This neural network complexity provides the best approximation of the micro-porosity during the first sampling step while maintaining moderate computational costs. Indeed, we analyze in Fig 4 the impact of the neural network architecture both on the computational time spent on the sampling procedure and on the Bayesian Model Average (BMA) accuracy, computed as: \[\text{BMA-E}^{\varepsilon}=\|P\left(\varepsilon_{\Theta}\,|\,(x,t),\mathcal{D},\mathcal{M}\right)-\varepsilon\|_{\Omega^{*}\times(0,T_{f}^{*})}^{2}=\|P\left(\varepsilon_{\Theta}\,|\,(x,t),\mathcal{D},\mathcal{M}\right)-(1-\mathrm{Im})\|_{\Omega^{*}\times(0,T_{f}^{*})}^{2} \tag{42}\] where the notation \(\|\cdot\|\) used here refers to the functional \(\mathbb{L}^{2}\)-norm and \(P\left(\varepsilon_{\Theta}\,|\,(x,t),\mathcal{D},\mathcal{M}\right)\) is the BMA approximation from equation (21) (_e.g._ see [72] or [53] for more details). For the optimal NN configuration, one estimates the sampling computational cost -- providing the overall samples of the posterior distribution (20) based on the \(N_{\mathrm{obs}}\) training measurements -- at about 17 min. The BMA prediction obtained through equation (21) over the whole computational domain \(\Omega^{*}\times(0,T_{f}^{*})\) is meanwhile immediate -- less than 1 s on GPU and a few seconds on CPU. Recovering the latent concentration \(C_{\Theta}\) requires less neural network expressivity than the micro-porosity field, which needs to integrate noisy data and unresolved morphological features in its reconstruction. In this sense, we assume that the second neural network, defining the acid surrogate model, is composed of 3 hidden layers with 32 neurons per layer and a hyperbolic tangent activation function. This is only one layer fewer than the first NN for \(\varepsilon_{\Theta}\) but saves \(1056\) parameters. The number of network parameters is, therefore, \(3297\) for the first sampling step and increases to \(5538\) for the second and third sampling steps with the two additional inverse parameters. The other hyperparameters concerning the AW-HMC sampler are summarized in Table 2 for the sequential sampling steps. These sampler parameters involve, _inter alia_, a number \(N\) of adaptive steps, during which the critical weights \(\lambda_{k}\) are automatically adjusted on an Inverse Dirichlet basis using equation (26), and a number \(N_{s}\) of overall sampling steps. The two other parameters, namely \(L\) and \(\delta t\), are intrinsically related to the Hamiltonian Monte Carlo structure of the AW-HMC sampler. Indeed, they are involved in the deterministic step that relies on the leapfrog symplectic integrator to solve the Hamiltonian dynamical system (24) (see Sect. 4.1). We also refer to our methodological article [53] for more details on these hyperparameters, and especially to Algorithm 1 (Adaptively Weighted Hamiltonian Monte Carlo) for their respective roles in the sampling phases.
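For concreteness, these two architectures can be sketched as follows (illustrative PyTorch code; the layer sizes follow the text, and the 3297-parameter count of the porosity network quoted above can be verified directly).

```python
# Minimal sketch of the two surrogate networks described above (PyTorch).
import torch
import torch.nn as nn

def mlp(in_dim, hidden, n_hidden, out_dim=1):
    layers, d = [], in_dim
    for _ in range(n_hidden):
        layers += [nn.Linear(d, hidden), nn.Tanh()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class PorosityNet(nn.Module):
    """4 hidden layers of 32 neurons; rectified tanh output in [0, 1]."""
    def __init__(self, in_dim=2):               # (x, t) inputs in 1D+Time
        super().__init__()
        self.body = mlp(in_dim, 32, 4)          # 3297 trainable parameters
    def forward(self, xt):
        return 0.5 * (torch.tanh(self.body(xt)) + 1.0)   # Tanh^r

concentration_net = mlp(2, 32, 3)                # latent acid field C_Theta
```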
\begin{table} \begin{tabular}{c|c|c|c|c} \hline Sampling step & Number of & Number of & Number of & Leapfrog time \\ & adaptive steps \(N\) & samples \(N_{s}\) & leapfrog steps \(L\) & step \(\delta t\) \\ \hline 1) Preconditioning \(\varepsilon_{\Theta}\) & 50 & 200 & 200 & \(1\times 10^{-3}\) \\ \hline 2) Preconditioning \(C_{\Theta}\) + Inference \(\alpha_{\Theta}\) & 20 & 200 & 200 & \(5\times 10^{-4}\) \\ \hline 3) Full data assimilation & 4 & 200 & 200 & \(2\times 10^{-4}\) \\ \hline \end{tabular} \end{table}

Table 2: **AW-HMC hyperparameters on the 1D+Time reactive inverse problem:** Setting of the sampler hyperparameters for the three sequential sampling steps defined in Fig 2. The number of adaptive steps \(N\) along with the leapfrog parameters \(L\) and \(\delta t\) are AW-HMC sampler-specific parameters.

Figure 4: **Neural Network configuration choice for the surrogate model on the micro-porosity field:** a) Computational cost of sampling step 1 with respect to the neural network architectures, both in terms of the number of hidden layers and of neurons per layer. b) Bayesian Model Average (BMA) error between the surrogate model \(\varepsilon_{\Theta}\) and the groundtruth \(\varepsilon\), computed as in equation (42), for different neural network architectures.

Figure 5: **Uncertainty Quantification on 1D+Time reactive inverse problem with data assimilation:** Bayesian Model Average (BMA) predictions on the micro-porosity field \(\varepsilon_{\Theta}\) with their local uncertainties -- given by the standard deviation on the posterior distribution of the predictions -- and mean squared errors (MSE). The top row corresponds to the initial geometry from Fig 3a with tortuosity index \(\beta=1\). The bottom row is related to the initial porous sample from Fig 3b with \(\beta=0.5\).

### 6.4 Numerical results

We demonstrate the validity of our data assimilation approach on synthetic inverse problems of calcite dissolution whose initial core geometries are characterized in Fig 3, and for which the dynamical \(\mu\)CT images are provided through DNS (see Sect. 6.1). In the data assimilation Bayesian framework, we first select log-normal prior distributions on \((\mathcal{P}_{\mathrm{inv}})_{\Theta}=\{\alpha_{\Theta},\gamma_{\Theta}\}\), which ensures the positivity of the inverse parameters, and independent normal distributions for the neural network parameters \(\theta\). Nonetheless, an appropriate change of variables on the inverse parameters, namely \(\bullet_{\Theta}=e^{\widetilde{\bullet_{\Theta}}}\), makes it possible to consider Gaussian prior distributions on the newly defined set of parameters \(\Theta=\{\theta,\widetilde{\mathcal{P}}_{\mathrm{inv}}\}\) (_e.g._ see [76] or [53]). This is the underlying hypothesis considered when defining the log-prior term in the potential energy (41), where we assume \(P(\Theta)\sim\mathcal{N}(0,\sigma_{\Theta}^{2}I_{p+d})\). In practice, we use the standard deviation \(\sigma_{\Theta}=10\) in the applications, such that this slightly diffuse distribution induces weakly informative priors on the \(\Theta\) parameters. We also impose weakly informative priors on the inverse parameters so that we do not rely on biased a-priori assumptions about their respective scalings. In this sense, we benefit from the advantages of the AW-HMC sampler in handling multiscale inverse problems without informative priors. We also avoid hand-tuning of the distinct task uncertainties by setting all the \(\sigma_{k}\), \(k=0...5\), to be equal in equation (41).
On the contrary, automatic adjustment of the weighting parameters \(\lambda_{k}\) provides intrinsic task uncertainties during the sampling procedure. From the overall sampling procedure, we first obtain through equation (21) a Bayesian Model Average prediction on the porosity field \(\varepsilon_{\Theta}\), which is approximated by (_e.g._ see [72]): \[P\left(\varepsilon_{\Theta}|(x,t),\mathcal{D},\mathcal{M}\right)\simeq\frac{1}{N_{s}-N}\sum_{\tau=N}^{N_{s}}P\left(\varepsilon_{\Theta}|(x,t),\Theta^{t_{\tau}}\right) \tag{43}\] where \(P\left(\varepsilon_{\Theta}\,|\,(x,t),\Theta^{t_{\tau}}\right)\) is the surrogate model prediction of the micro-porosity resulting from the sampling iteration \(\tau\) for the set of parameters \(\Theta\) -- including both the neural network and inverse parameters. Similarly, one can also compute local uncertainties on the output porosity field using the standard deviation metric on the posterior distribution of the predictions. These uncertainty quantification results are presented in Fig 5 along the whole dissolution time \(t^{*}\), for both initial core geometries with distinct tortuosity indices. We also compare the local uncertainties on the micro-porosity field with the traditional Mean Squared Errors (MSE) between the BMA surrogate prediction obtained by equation (43) and the groundtruth \(\varepsilon\). This shows enhanced mean squared errors on the core edges during the calcite core dissolution, which are, however, embedded in the local uncertainties. These uncertainties also tend to increase in those regions, characterizing the challenge of capturing reliable core interfaces from the dynamical \(\mu\)CT images. In this sense, one can question the confidence of mineral reactivity assessments relying merely on differential imaging techniques applied to the dynamical \(\mu\)CT scans. From the dynamical observation of the calcite core dissolution, we obtain uncertainties on the initial state geometry represented in Fig 6.

Figure 6: **Uncertainty Quantification on the micro-porosity field at the initial state (\(t^{*}=0\)) for 1D+Time reactive inverse problem:** Corrupted \(\mu\)CT image before dissolution, groundtruth on \(\varepsilon\), BMA prediction, and uncertainty on \(\varepsilon_{\Theta}\) plotted along the spatial dimensionless coordinate \(x^{*}\). Validation test cases with tortuosity indices a) \(\beta=1\) and b) \(\beta=0.5\).

This shows that the posterior prediction on \(\varepsilon_{\Theta}\) covers the groundtruth micro-porosity field \(\varepsilon\) and provides upper and lower bounds for the residual, potentially unresolved, micro-porosity \(\varepsilon_{0}\) estimation in the porous matrix -- _e.g._\(1.8\%\leqslant\varepsilon_{0}\leqslant 9\%\) in the case \(\beta=1\) for the 95% confidence interval, corresponding to approximately two standard deviations. Moreover, we rely on the Bayesian Model Average Cumulative Error metric, denoted BMA-CE and introduced in [53], to quantify the sampling efficiency in terms of convergence along the marginalization process. We first compute the BMA-CE diagnostics for the porosity field \(\varepsilon\) based on: \[\text{BMA-CE}^{\varepsilon}(\tau)=\left\|\frac{1}{\tau-N}\sum_{i=N}^{\tau}P\left(\varepsilon_{\Theta}\,|\,(x,t),\Theta^{t_{i}}\right)-(1-\text{Im})\right\|^{2}\qquad\forall\tau>N. \tag{44}\] Equation (44) hence defines, for each sampling step \(\tau\) after the adaptive steps, a cumulative error characterizing the convergence of the BMA model toward the groundtruth \(\varepsilon\).
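A minimal sketch of this cumulative diagnostic, computed from stored posterior predictions, could read as follows (illustrative numpy code; the storage layout of the samples is an assumption).

```python
# Minimal sketch of the BMA-CE diagnostic (44). `preds` is a list of porosity
# predictions on the full grid, one per posterior sample; `target` is the
# groundtruth 1 - Im; `n_adaptive` is the number N of adaptive steps.
import numpy as np

def bma_ce_curve(preds, target, n_adaptive):
    curve, running = [], np.zeros_like(target)
    for k, p in enumerate(preds[n_adaptive:], start=1):
        running += p                  # cumulative sum of sampled predictions
        bma = running / k             # running Bayesian Model Average
        curve.append(np.mean((bma - target) ** 2))
    return np.array(curve)            # one cumulative error per tau > N
```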
Such a diagnostic is computed during the overall sampling procedure, covering the three successive steps of sequential reinforcement. In the same manner, we extend this notion of convergence to the PDE constraints by computing the BMA-CE metric on their respective residuals. We therefore introduce \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\), the PDE constraint residuals arising from the reactive model (19) and involved in the multi-potential energy (41): \[\begin{split}\mathcal{F}_{1}(\varepsilon_{\Theta},C_{\Theta})&:=\alpha_{\Theta}\frac{\partial\varepsilon_{\Theta}}{\partial t}-C_{\Theta}\\ \mathcal{F}_{2}(\varepsilon_{\Theta},C_{\Theta})&:=\gamma_{\Theta}\left(\frac{\partial C_{\Theta}}{\partial t}+\frac{\partial\varepsilon_{\Theta}}{\partial t}\right)-\left(\varepsilon_{\Theta}\frac{\partial^{2}C_{\Theta}}{\partial x^{2}}-C_{\Theta}\frac{\partial^{2}\varepsilon_{\Theta}}{\partial x^{2}}\right)\end{split} \tag{45}\] to finally define their corresponding diagnostics BMA-CE\({}^{\mathcal{F}_{\bullet}}\): \[\text{BMA-CE}^{\mathcal{F}_{\bullet}}(\tau)=\left\|\frac{1}{\tau-N}\sum_{i=N}^{\tau}P\left(\mathcal{F}_{\bullet}(\varepsilon_{\Theta},C_{\Theta})\,|\,(x,t),\Theta^{t_{i}}\right)\right\|^{2}\qquad\forall\tau>N. \tag{46}\] These metrics are respectively computed on sampling steps 2 and 3 for the residuals \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\), and the resulting convergence curves are presented in Fig 7 for both initial geometries. Successively introducing the additional PDE constraints results in deviations in the BMA-CE\({}^{\varepsilon}\) curve compared to the purely data-fitting sampling step 1. This means that the PDE model brings information on the recovery of the porosity field \(\varepsilon\) instead of producing overfitted predictions. The convergence of the residual BMA-CE curves also shows that the PDE constraints are satisfied.

Figure 7: **Bayesian Model Average Cumulative Error diagnostics (BMA-CE), as defined in equations (44)-(46), for 1D+Time reactive inverse problem:** BMA-CE on the micro-porosity field prediction \(\varepsilon\) throughout the sampling iterations, and BMA-CE on the PDE constraint residuals \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) defined in (45) and introduced successively. The dotted vertical lines split the sampling steps in the sequential reinforcement of the multi-potential energy (41). Dissolution inverse problem on the initial geometries from a) Fig 3a with tortuosity index \(\beta=1\) and b) Fig 3b with tortuosity index \(\beta=0.5\).

Figure 8: **Posterior distributions of 1D+Time reactive inverse problem:** a) Histogram of the marginal posterior distributions for the inverse parameter \(\alpha_{\Theta}\). b) Phase diagrams of its trajectory throughout the sampling, with the adaptive steps trajectories (in blue) and effective sampling (in red). The groundtruth values of \(\alpha\) are represented by the black dots. c) Resulting posterior distributions of the catalytic Damköhler number \(\text{Da}_{\text{II}}\), determined through the log-normal distribution from relation (47). The top and bottom rows respectively correspond to the tortuosity indices \(\beta=1\) and \(\beta=0.5\).

Regarding the inverse parameter inference, we represent in Fig 8a the histograms of the marginal posterior distributions of \(\alpha_{\Theta}:=(\mathrm{Da}_{\mathrm{II}}^{*})^{-1}\) for the two initial geometries from Fig 3.
These distributions are obtained throughout sampling steps 2 and 3, and provide intrinsic uncertainties on the parameter estimations. One also gets from Fig 8b the parameter trajectories when exploring the phase space distribution (22), where the convergence toward the mode during the adaptive steps is represented in blue. The final sampling, corresponding to the phase diagram trajectories for \(\tau>N\), thus ensures an efficient exploration of the parameter mode neighborhood. Following the sequential reinforcement strategy detailed in Sect. 5, one gets at the end of sampling step 2 a description of the restricted RAI\({}^{-}\) domain, together with an initial estimate of the distribution of \(D_{m}^{*}\). The latter is regarded as an initial a-priori on the parameter \(\gamma_{\Theta}\) in step 3, and is obtained as follows:

* for each sampling iteration \(\tau=N...N_{s}\) in step 2, we compute the (cumulative) predictive BMA distributions of the two operators \[\frac{\partial C_{\Theta}}{\partial t}+\frac{1}{\upsilon C_{0}}\frac{\partial\varepsilon_{\Theta}}{\partial t}\quad\text{and}\quad\mathcal{D}_{i}(\varepsilon_{\Theta},C_{\Theta}),\]
* we establish RAI\({}_{\tau}^{-}\) as the admissible points of the RAI where \(D_{m}^{*}\) is predicted positive, for each sampling iteration \(\tau>N\),
* we compute a spatially averaged estimation of the parameter \(\left(\overline{D_{m}^{*}}\right)_{\tau}\) on this eligible domain, where \[\left(\overline{D_{m}^{*}}\right)_{\tau}=\frac{1}{\#\text{RAI}_{\tau}^{-}}\sum_{k\in\text{RAI}_{\tau}^{-}}(D_{m}^{*})_{\tau}(x_{k},t_{k})\qquad\forall\tau>N,\]
* we estimate a global distribution on \(D_{m}^{*}\) throughout the overall samples of step 2, where we discard the distribution tail after the \(80^{\text{th}}\) percentile,
* we finally evaluate the prior distribution on the inverse parameter \(\gamma_{\Theta}:=(D_{m}^{*})^{-1}\), with its mean value \(\overline{\gamma_{\Theta}}\) used as an initial a-priori in step 3.

In the case \(\beta=1\), for instance, one gets the global distribution on \(D_{m}^{*}\) shown in Fig 9a, which translates into the prior distribution on \(\gamma_{\Theta}\) given by Fig 9b with \(\overline{\gamma_{\Theta}}=3.8\times 10^{-1}\). We then obtain the posterior distribution from the overall data assimilation problem throughout sampling step 3. This results in the distribution on the inverse parameter \(\gamma_{\Theta}\) represented in Fig 9c, with the uncertainty range \(\gamma_{\Theta}\in[0.274,0.582]\). Finally, one can estimate the posterior distribution on the catalytic Damköhler number \(\mathrm{Da}_{\mathrm{II}}\) resulting from the overall data assimilation problem on dynamical \(\mu\)CT images. This comes from the observation that each inverse parameter, namely \(\alpha_{\Theta}\) and \(\gamma_{\Theta}\), is sought according to a log-normal distribution through the change of variables \(\bullet_{\Theta}=e^{\widetilde{\bullet_{\Theta}}}\). Hence, we obtain two normal posterior distributions on the random variables \(X_{1}\) and \(X_{2}\), respectively associated with \(\ln(\alpha_{\Theta})\) and \(\ln(\gamma_{\Theta})\), such that \(X_{1}\sim\mathcal{N}\left(\mu_{\alpha},\sigma_{\alpha}^{2}\right)\) and \(X_{2}\sim\mathcal{N}\left(\mu_{\gamma},\sigma_{\gamma}^{2}\right)\).
This combines into a normal distribution on \(\ln(\gamma_{\Theta}/\alpha_{\Theta})\) given by \((X_{2}-X_{1})\sim\mathcal{N}\left(\mu_{\gamma}-\mu_{\alpha},\sigma_{\gamma}^{2}+\sigma_{\alpha}^{2}\right)\), which is nothing more than a log-normal posterior distribution on the Damköhler number \(\mathrm{Da}_{\mathrm{II}}\). Indeed, one gets that the random variable \(X\) related to the dimensionless \(\mathrm{Da}_{\mathrm{II}}\) number follows \[X\sim\mathrm{Log}-\mathcal{N}\left(\mu_{\gamma}-\mu_{\alpha},\sigma_{\gamma}^{2}+\sigma_{\alpha}^{2}\right):=\mathrm{Log}-\mathcal{N}\left(\mu,\sigma^{2}\right), \tag{47}\] whose mean and variance are respectively computed as \[\mathbb{E}[X]=e^{\mu+\sigma^{2}/2}\quad\text{and}\quad\mathrm{Var}(X)=e^{2\mu+\sigma^{2}}\left(e^{\sigma^{2}}-1\right). \tag{48}\] The resulting posterior distributions on the Damköhler number \(\mathrm{Da}_{\mathrm{II}}\) are represented in Fig 8c for both initial calcite core geometries with their related tortuosity indices. We finally obtain the uncertainty ranges \(\mathrm{Da}_{\mathrm{II}}\in[5.43,27.89]\) and \(\mathrm{Da}_{\mathrm{II}}\in[2.30,11.37]\) for the tortuosity indices \(\beta=1\) and \(\beta=0.5\), respectively.

## 7 Pore-scale imaging inverse problem of calcite dissolution: 2D+Time application

In this section, we apply the data assimilation methodology developed in Sect. 5 to inverse problems of calcite dissolution with heterogeneous porosity levels. We consider a more realistic application involving the dissolution of a 2D calcite core following the configuration of the benchmark developed in [42]. This test case can provide a basis for reactive inverse problems in isotropic porous samples, although the \(\mu\)CT measurements are still synthetic observations resulting from DNS altered with noise.

### 7.1 Problem setup and dimensionless inverse formulation

We consider a 2D calcite crystal with a cylindrical shape, heterogeneous porosity levels, and two apertures, whose initial geometry is defined by the numerical \(\mu\)CT image from Fig 10, corrupted with Gaussian noise. We define the physical domain \(\Omega\subset\mathbb{R}^{2}\) of width \(0.2\) mm corresponding to the two-dimensional flow channel surrounding the calcite core. We first solve the direct formulation of the dissolution process for this initial geometry on a Cartesian spatiotemporal grid of resolution \(N_{x}=N_{y}=100\) and \(N_{t}=350\). We assume a diffusion-dominated regime with continuous acid injection through non-homogeneous Dirichlet boundary conditions on \(\partial\Omega\). We also consider, as in the 1D+Time validation test cases, a strong acid solution with \(\mathrm{pH}=0\) such that the normalizing constant is given by \(C_{0}=1\,\mathrm{mol}.\mathrm{L}^{-1}\). The characteristic length \(L\) of this porous sample is set to \(L=0.1\) mm and the reactive parameters are respectively defined by \(K_{s}=0.8913\,\mathrm{mol}.\mathrm{m}^{-2}.\mathrm{s}^{-1}\), \(D_{m}=10^{-9}\,\mathrm{m}^{2}.\mathrm{s}^{-1}\), and \(\gamma_{\mathrm{H}^{+}}=10^{-3}\,\mathrm{m}^{3}.\mathrm{mol}^{-1}\), taken from the benchmark [42]. The reactive specific area \(A_{s}\) is set to \(A_{s}=10^{3}\,\mathrm{m}^{-1}\) -- which is slightly underestimated compared to the computed value of around \(7.4\times 10^{3}\,\mathrm{m}^{-1}\). We account for the calcite molar volume \(\upsilon=36.93\times 10^{-3}\,\mathrm{L}.\mathrm{mol}^{-1}\), and set a tortuosity index \(\beta=0.5\).
The DNS is performed until the overall calcite is dissolved, which corresponds to a characteristic final time \(T_{f}=175\) s. Taken together, one gets a sequence of synthetic \(\mu\)CT images \(\mathfrak{Im}_{i,j}\) characterizing the dissolution process of the calcite core on the spatiotemporal domain \(\Omega\times(0,T_{f})\). Given these reactive parameters, we identify the same dissolution regime as in the 1D+Time test cases, with a catalytic Damköhler number given by \(\mathrm{Da}_{\mathrm{II}}=8.913\). The inverse formulation, however, results in distinct dimensionless numbers, namely \(\mathrm{Da}_{\mathrm{II}}^{*}\) and \(D_{m}^{*}\), arising from the scaling dimensionless factor \(D_{\mathrm{ref}}\). This scaling factor is here determined by: \[D_{\mathrm{ref}}=\frac{T_{f}^{*}L^{2}}{T_{f}}=5.714\times 10^{-11}\,\mathrm{m}^{2}.\mathrm{s}^{-1}, \tag{49}\] where the final dimensionless time is \(T_{f}^{*}=1\).

Figure 9: **Prior and posterior distributions on the inverse parameter \(\gamma_{\Theta}\) for 1D+Time reactive inverse problem with tortuosity index \(\beta=1\):** a) Global distribution on the estimated parameter \(D_{m}^{*}\) arising from a-posteriori analysis on the sampling step 2. We discard the distribution tail after the vertical dashed line corresponding to the \(80^{\text{th}}\) percentile. b) Resulting prior distribution on the inverse parameter \(\gamma_{\Theta}\), used as an a-priori in step 3. c) Posterior distribution on \(\gamma_{\Theta}\) obtained from the overall data assimilation problem throughout sampling step 3.

Therefore, one gets from equation (18) the following dimensionless numbers characterizing this 2D+Time reactive inverse problem: \[\mathrm{Da}_{\mathrm{II}}^{*}=155.9775\quad\text{and}\quad D_{m}^{*}=17.5 \tag{50}\] which are related to the inverse parameters through \((\mathcal{P}_{\text{inv}})_{\Theta}=\{\alpha_{\Theta},\gamma_{\Theta}\}=\left\{(\mathrm{Da}_{\mathrm{II}}^{*})^{-1},(D_{m}^{*})^{-1}\right\}\). We consider the dimensionless domain \(\Omega^{*}\times(0,T_{f}^{*})=[0,2]\times[-1,1]\times(0,1)\), given the characteristic length \(L\), to extract the observation dataset \[\mathcal{D}=\left\{(x_{k},y_{k},t_{k})\in[0,2]\times[-1,1]\times(0,1),\quad k=1...N_{\text{obs}}\right\} \tag{51}\] where the number of training points \(N_{\text{obs}}=15907\) represents less than 1% of the data required for the full field reconstructions on the spatiotemporal domain \(\Omega^{*}\times(0,T_{f}^{*})\). This dataset \(\mathcal{D}\) is then divided into \(\mathcal{D}^{S}\), \(\mathcal{D}^{F}\), \(\mathcal{D}^{\text{RAI}}\), \(\text{RAI}^{+}\), \(\text{RAI}^{-}\) and \(\mathcal{D}^{\partial}\), which respectively cover around 15.5%, 0.5%, 48%, 11%, 20% and 5% of the \(N_{\text{obs}}\) training measurements.
Finally, on this dataset decomposition, the multi-potential energy reads: \[\begin{split} U(\Theta)&=\frac{\lambda_{0}}{2\sigma_{0}^{2}}\left\|1-\varepsilon_{\Theta}-\mathrm{Im}\right\|_{\mathcal{D}^{S}}^{2}+\frac{\lambda_{1}}{2\sigma_{1}^{2}}\left\|1-\varepsilon_{\Theta}-\mathrm{Im}\right\|_{\text{RAI}^{+}}^{2}+\frac{\lambda_{2}}{2\sigma_{2}^{2}}\left\|(\upsilon C_{0})^{-1}\alpha_{\Theta}\frac{\partial\varepsilon_{\Theta}}{\partial t}-C_{\Theta}\right\|_{\text{RAI}}^{2}\\ &+\frac{\lambda_{3}}{2\sigma_{3}^{2}}\left\|\gamma_{\Theta}\frac{\partial C_{\Theta}}{\partial t}-\Delta C_{\Theta}\right\|_{\mathcal{D}^{F}}^{2}+\frac{\lambda_{4}}{2\sigma_{4}^{2}}\left(\|1-C_{\Theta}\|_{\mathcal{D}^{\partial}}^{2}+\|c_{0}-C_{\Theta}\|_{\mathcal{D}^{S}}^{2}\right)\\ &+\frac{\lambda_{5}}{2\sigma_{5}^{2}}\left\|\gamma_{\Theta}\left(\frac{\partial C_{\Theta}}{\partial t}+(\upsilon C_{0})^{-1}\frac{\partial\varepsilon_{\Theta}}{\partial t}\right)-\mathcal{D}_{i}(\varepsilon_{\Theta},C_{\Theta})\right\|_{\text{RAI}^{-}}^{2}+\frac{1}{2\sigma_{\Theta}^{2}}\|\Theta\|^{2}\end{split} \tag{52}\] where the heterogeneous diffusion operator \(\mathcal{D}_{i}(\varepsilon_{\Theta},C_{\Theta})\) is computed in its developed form (37c) with \(\beta=0.5\). This overall potential energy is sequentially reinforced throughout three successive sampling steps, as detailed in Sect. 5.2 and validated in Sect. 6 on the 1D+Time data assimilation problems.

### 7.2 Deep learning framework and computational efficiency

Regarding the deep learning strategy, the framework is kept identical to the 1D+Time validation test cases (see Sect. 6.3). In this sense, we consider two distinct neural network architectures for the micro-porosity and acid concentration surrogate models, respectively composed of 4 and 3 hidden layers with 32 neurons per layer. The number of network parameters is, therefore, \(3297\) for the first sampling step and \(5538\) for the second and third sampling steps with the two additional inverse parameters.

Figure 10: **Initial \(\mu\)CT image defining the 2D porous sample geometry:** Synthetic case with tortuosity index \(\beta=0.5\). The \(\mu\)CT measurements are normalized, corrupted with noise, and provide the observation dataset before the dissolution process. The cylindrical calcite core has a radius equal to \(0.05\) mm.

The settings of the AW-HMC sampler hyperparameters are also summarized in Table 3 for the successive sampling steps. In addition, we investigate the impact of the problem dimensionality by analyzing the computational efficiency of the present data assimilation approach with its sequential reinforcement process. In this sense, we compare the computational costs of the three successive sampling steps on the 1D+Time and 2D+Time reactive inverse problems. The results of these computational time measurements are presented in Table 4 for both configurations. The first columns compare the sampling times, that is, the time required to provide the overall samples of the posterior distributions using the AW-HMC sampler. This training phase is performed on the \(N_{\mathrm{obs}}\) observation data, which are randomly selected and non-uniformly distributed on the whole Cartesian grids -- respectively \(N_{x}\times N_{t}=200\times 240\) in 1D+Time and \(N_{x}\times N_{y}\times N_{t}=100\times 100\times 350\) in 2D+Time. All the successive sampling steps are performed on GPU devices, and the computational times are expressed in hours, minutes and seconds (hh:mm:ss). In these sampling/training phases, the present methodology does not suffer from the curse of dimensionality. The 1D+Time and 2D+Time data assimilation problems present similar computational times, although the number of training observations is two times larger in 2D+Time.
This establishes that most of the computational cost of the problem is correlated with the number of neural network parameters -- as already confirmed in Sect. 6.3 and more specifically in Fig 4 -- rather than with the number of training points. Since the same neural network architecture as in 1D+Time proves sufficient to describe the 2D+Time inverse problem, the computational efficiency of this 2D+Time data assimilation is significantly improved.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Sampling step & Number of & Number of & Number of & Leapfrog time \\ & adaptive steps \(N\) & samples \(N_{s}\) & leapfrog steps \(L\) & step \(\delta t\) \\ \hline 1) Preconditioning \(\varepsilon_{\Theta}\) & 50 & 200 & 150 & \(1\times 10^{-3}\) \\ \hline 2) Preconditioning \(C_{\Theta}\) + Inference \(\alpha_{\Theta}\) & 40 & 200 & 150 & \(5\times 10^{-4}\) \\ \hline 3) Full data assimilation & 10 & 200 & 150 & \(3\times 10^{-4}\) \\ \hline \end{tabular} \end{table}

Table 3: **AW-HMC hyperparameters on the 2D+Time reactive inverse problem:** Setting of the sampler hyperparameters for the three sequential sampling steps defined in Fig 2. The number of adaptive steps \(N\) along with the leapfrog parameters \(L\) and \(\delta t\) are AW-HMC sampler-specific parameters.

\begin{table} \begin{tabular}{|c|c|c|} \hline & Sampling time \(T_{\mathrm{GPU}}\) (in hh:mm:ss) & Prediction time \(T_{\mathrm{CPU}}\) (in hh:mm:ss) \\ \hline Sequential step 1 & 00:16:26 & 00:04:59 \\ \hline Sequential step 2 & 01:21:01 & 00:34:34 \\ \hline Sequential step 3 & 02:41:10 & 00:41:18 \\ \hline \end{tabular} \end{table}

Table 4: **Computational times of the successive sequential steps on the 1D+Time and 2D+Time inverse problem:** Comparison of the sampling times on GPU devices (first columns), and the prediction times on CPU devices (second columns) between 1D+Time and 2D+Time configurations. All the computational times are expressed under the form hh:mm:ss to ease readability.

The second columns of Table 4 then compare the prediction times on CPU devices. This corresponds to the computational time necessary for the potential energy estimation and output field predictions on the whole domain \(\Omega^{*}\times(0,T_{f}^{*})\), along with the computation of the main differential operators required as additional outputs. Among the additional outputs, one finds the porosity time derivative at the end of step 1, and all the first-order derivatives and Laplacian operators of the porosity and concentration fields at the end of step 2 -- which are used to evaluate the initial a-priori on \(\gamma_{\Theta}\). The first-order derivatives and Laplacian operators are also considered as outputs in step 3 to perform the a-posteriori analysis based on the BMA-CE diagnostics. In contrast to the sampling phase, the computational time devoted to these predictions increases with the dimensionality. This comes from the observation that one needs to evaluate the output fields and additional differential operators on a Cartesian grid that is about 72 times larger in 2D+Time. The predictions of all these differential operators on the overall domain \(\Omega^{*}\times(0,T_{f}^{*})\) are achieved through automatic differentiation and hence are consistent with their evaluations along the training phase.
The automatic differentiation could straightforwardly be replaced by other standard differential schemes -- such as finite differences or PSE schemes -- to evaluate these operators on the Cartesian grid, using merely the predicted porosity and acid concentration fields. However, the computational improvement of the process may not be that significant, since one needs to evaluate these operators for all the \(N_{s}\) sampling steps -- basically 600 times -- to take into account their intrinsic uncertainties. This means that it only takes a few seconds per sample to evaluate these differential operators through automatic differentiation, which is thus comparable to other usual schemes. On top of that, the prediction phases here occur on CPU devices due to non-negligible memory usage. Typically, the micro-porosity field prediction on its own requires the storage of \(N_{s}\times N_{x}\times N_{y}\times N_{t}\) floats in 2D+Time, which is equivalent to about \(1.9\) GB. In this sense, both the memory usage and computational efficiency of this prediction phase could be improved, and future efforts should be directed at this. Looking ahead, we would like to benefit from the parallel architecture of GPU devices by investigating and using appropriate domain decompositions of \(\Omega^{*}\times(0,T_{f}^{*})\). ### Results and discussion We apply our data assimilation approach with sequential reinforcement of the multi-potential energy on this 2D+Time reactive inverse problem of calcite dissolution, based on synthetic dynamical \(\mu\)CT observations generated by DNS. We guarantee the positivity of the inverse parameter inference by selecting log-normal prior distributions and applying the same change of variable \(\bullet_{\Theta}=e^{\widetilde{\bullet_{\Theta}}}\) as in the validation Sect. 6. This is combined with an independent normal distribution on the neural network parameters, such that we assume the overall prior distribution \(P(\Theta)\sim\mathcal{N}(0,\sigma_{\Theta}^{2}I_{p+d})\). We also impose weakly informative priors on the inverse parameters since we do not assume a-priori information on their respective scaling. Figure 11: **Uncertainty Quantification on the micro-porosity field at the initial state (\(t^{*}=0\)) for 2D+Time reactive inverse problem: Corrupted \(\mu\)CT image before dissolution, groundtruth on \(\varepsilon\), BMA prediction, and uncertainty on \(\varepsilon_{\Theta}\). The results are plotted along the horizontal white dashed lines from Fig 12, at spatial coordinates a) \(y^{*}=-0.192\) and b) \(y^{*}=0.212\).** Figure 12: **Uncertainty Quantification on 2D+Time reactive inverse problem with data assimilation:** Bayesian Model Average (BMA) predictions on the micro-porosity field \(\varepsilon_{\Theta}\) with their local uncertainties and mean squared errors (MSE). Comparison with the \(\mu\)CT dynamical images at several dissolution times, in the dimensionless formulation: a) Initial condition at \(t^{*}=0\). Intermediate dissolution times at b) \(t^{*}=0.43\), c) \(t^{*}=0.602\) and d) \(t^{*}=0.837\). At the end of the sequential sampling, one gets the Bayesian Model Average prediction on the porosity field \(\varepsilon_{\Theta}\), approximated as in equation (42), along with its local uncertainties during the whole dissolution process. These uncertainty quantification results are presented in Fig 12 for several dissolution times, including the initial condition for \(t^{*}=0\) in Fig 12a.
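In practice, the BMA estimate of equation (42) and its local uncertainty reduce to simple statistics over the retained posterior samples. The sketch below assumes the per-sample porosity predictions are stacked in a NumPy array; this layout is illustrative, not the authors' implementation.

```python
import numpy as np

def bma_with_uncertainty(sample_preds):
    # sample_preds: posterior predictions of eps_Theta with shape
    # (N_s, N_x, N_y, N_t); one field per retained HMC sample.
    preds = np.asarray(sample_preds)
    bma = preds.mean(axis=0)                 # BMA field, cf. equation (42)
    std = preds.std(axis=0, ddof=1)          # local uncertainty
    return bma, std

# A pointwise 95% credible envelope, assuming roughly Gaussian marginals:
# lower, upper = bma - 1.96 * std, bma + 1.96 * std
```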
We compare these BMA predictions with the synthetic \(\mu\)CT images and the mean squared errors computed between the BMA surrogate prediction and the groundtruth \(\varepsilon\). We observe enhanced uncertainties on the calcite core interfaces, including the aperture edges, throughout the dissolution process. The initial state exhibits a heterogeneous uncertainty distribution on the whole calcite, with lower uncertainties on the pure solid region. As this mineral interface decreases due to the dissolution, the uncertainties tend to become more homogeneously distributed (see Fig 12c and 12d for instance). One can, however, notice that the local mean squared errors remain well within the micro-porosity uncertainties, ensuring reliable predictions. In Fig 11, we detail these results on the initial state geometry -- for \(t^{*}=0\) -- by plotting along the two dashed lines from Fig 12 the BMA and uncertainty on \(\varepsilon_{\Theta}\), the groundtruth values, and the \(\mu\)CT observations. Considering the dynamical dissolution process also provides insight into the upper and lower bounds of the residual micro-porosity \(\varepsilon_{0}\) for the initial calcite core geometry. In the porous matrix, we obtain the estimates \(3\%\leqslant\varepsilon_{0}\leqslant 10\%\) for the 95% confidence interval. The validation of the inference is first performed using the Bayesian Model Average Cumulative Error (BMA-CE) on the micro-porosity field, which is computed along the three successive sampling steps of sequential reinforcement. We then introduce \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\), the PDE constraint residuals arising from the reactive model (19) and involved in the multi-potential energy (52): \[\begin{split}\mathcal{F}_{1}(\varepsilon_{\Theta},C_{\Theta})&:=(\upsilon C_{0})^{-1}\alpha_{\Theta}\frac{\partial\varepsilon_{\Theta}}{\partial t}-C_{\Theta}\\ \mathcal{F}_{2}(\varepsilon_{\Theta},C_{\Theta})&:=\gamma_{\Theta}\left(\frac{\partial C_{\Theta}}{\partial t}+(\upsilon C_{0})^{-1}\frac{\partial\varepsilon_{\Theta}}{\partial t}\right)-\mathcal{D}_{i}(\varepsilon_{\Theta},C_{\Theta})\end{split} \tag{53}\] to estimate their BMA-CE\({}^{\mathcal{F}_{\bullet}}\) diagnostics on sampling steps 2 and 3. For the 2D+Time data assimilation problem, the BMA-CE metrics on the micro-porosity field \(\varepsilon\) and the PDE residuals are straightforwardly extended from the formulae (44) and (46). The results are provided in Fig 13 and highlight the convergence of each term toward final BMA errors, at the sampling iteration \(\tau=500\), reaching respectively about BMA-CE\({}^{\varepsilon}(\tau)=1.6\times 10^{-3}\), BMA-CE\({}^{\mathcal{F}_{1}}(\tau)=1.4\times 10^{-2}\) and BMA-CE\({}^{\mathcal{F}_{2}}(\tau)=6.4\times 10^{-2}\). From these convergence diagnostics, we observe a saturation of the PDE constraints that highlights the intrinsic uncertainties of their corresponding tasks in the multi-potential energy (52). In this sense, the PDE constraint \(\mathcal{F}_{2}\) involving the heterogeneous diffusion operator (37c) is the most uncertain term due to its high sensitivity to porosity variations. Nonetheless, we notice, as in the validation test cases from Sect. 6, that the successive introduction of the PDE constraints brings information on the porosity field recovery by preventing overfitting issues.
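For readers implementing similar diagnostics, the BMA-CE metric amounts to tracking the error of the running model average as posterior samples accumulate. A schematic NumPy version, with an illustrative array layout rather than the authors' code, is:

```python
import numpy as np

def bma_cumulative_error(sample_preds, reference):
    # Error of the running Bayesian Model Average after each sampling
    # iteration tau, for a field prediction or a PDE residual (where the
    # reference is simply zero).
    preds = np.asarray(sample_preds)
    taus = np.arange(1, preds.shape[0] + 1)
    running = np.cumsum(preds, axis=0) / taus.reshape(-1, *([1] * (preds.ndim - 1)))
    axes = tuple(range(1, preds.ndim))
    return ((running - reference) ** 2).mean(axis=axes)   # one MSE per tau
```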
Figure 13: **Bayesian Model Average Cumulative Error diagnostics (BMA-CE) for 2D+Time reactive inverse problem: BMA-CE on the micro-porosity field prediction \(\varepsilon\) throughout the sampling iterations, and BMA-CE on the PDE constraint residuals \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) defined by equations (53) and introduced successively. The dotted vertical lines mark the sequential reinforcement of the multi-potential energy (52).** For the inverse parameter inference, we represent in Fig 14 the histogram of the marginal posterior distribution of \(\alpha_{\Theta}:=(\mathrm{Da}_{\mathrm{II}}^{*})^{-1}\) and its trajectory in the phase space, illustrating the convergence toward its mode during the adaptive steps -- represented in blue in Fig 14b. The latter shows that our data assimilation approach combined with the AW-HMC sampler from [53] makes it possible to capture the correct parameter range without prior knowledge of its scaling. Once the adaptive process ends, we effectively start sampling the inverse parameter mode neighborhood, represented by the phase space trajectories for \(\tau>N\) in red. We then follow the same process as in Sect. 6.4 to estimate the prior distribution on the inverse parameter \(\gamma_{\Theta}\), and we obtain \(\overline{\gamma_{\Theta}}=3.9\times 10^{-2}\), which is used as an initial a-priori in sampling step 3 (see Fig 15a and Fig 15b). The posterior distribution on the inverse parameter \(\gamma_{\Theta}\), estimated throughout step 3, is represented in Fig 15c and provides the following predictive interval: \(\gamma_{\Theta}\in[2.6\times 10^{-2},5.4\times 10^{-2}]\). One can, finally, estimate the posterior distribution on the catalytic Damköhler number \(\mathrm{Da}_{\mathrm{II}}\) according to the log-normal distribution obtained by the relation (47). This results in the posterior distribution represented in Fig 14c, and we obtain the uncertainty range \(\mathrm{Da}_{\mathrm{II}}\in[2.12,10.16]\) for this data assimilation problem of calcite dissolution, which is consistent with the theoretical value. Figure 14: **Posterior distributions of 2D+Time reactive inverse problem:** a) Histogram of the marginal posterior distribution for the inverse parameter \(\alpha_{\Theta}\). b) Phase diagram of its trajectory throughout the sampling, with the adaptive steps trajectories (in blue) and effective sampling (in red). The groundtruth value of \(\alpha\) is represented by the black dot. c) Resulting posterior distribution of the catalytic Damköhler number \(\mathrm{Da}_{\mathrm{II}}\), determined through the log-normal distribution from relation (47). ## 8 Concluding remarks This work intended to address two major challenges related to uncertainty quantification in pore-scale modeling of reactive flows, which plays a crucial role in the long-term management of CO\({}_{2}\) capture and storage. Providing reliable macro-property changes due to geochemical processes, such as CO\({}_{2}\) mineral trapping and dissolution within the porous environment, is essential to assess reservoir safety. In this sense, we aim to ensure that the evolving petrophysical properties provide meaningful characterizations of these chemical processes instead of intrinsic deviations arising from imaging limitations. Some intrinsic limiting factors remain when modeling pore-scale dynamical systems based on \(\mu\)CT scans and lead to several trade-offs that can bias the predictions.
In particular, this results in unresolved micro-porosity, especially when the scan resolution does not fully capture morphological features of the pore space under the constraint of having a representative elementary volume (REV) of the sample. Quantifying sub-resolution porosity, which is a prevalent imaging artifact, was the first challenge we identified, and therefore we focused on quantifying morphological uncertainties in the micro-porosity field. Our second concern was to investigate the reliability of kinetic parameters, such as mineral reactivity, in the context of reactive processes. Indeed, these are critical parameters to account for in pore-scale modeling, though their experimental estimations can suffer from wide discrepancies. Estimating proper orders of magnitude and uncertainty ranges appears essential to ensure reliable calibration of pore-scale models, and subsequently trustworthy management of CO\({}_{2}\) mineral storage. The present article investigated both these issues by integrating uncertainty quantification concerns in the workflow of pore-scale modeling. Current methodologies investigating these problems regard them independently and fall into pure image-processing analyses of experimental static or dynamical \(\mu\)CT images. Deep learning methodologies, for instance, are used to address sub-resolution porosity quantification. Multi-scale image reconstruction, which extrapolates the latent information of the porous structure, is obtained with Generative Adversarial Networks [77]. One also gets super-resolved segmented images from static \(\mu\)CT scans through Convolutional Neural Networks [7], which compensate for the unresolved morphological features. Regarding the mineral reactivity assessment, experimental works have been conducted on dynamical 4D \(\mu\)CT of carbonate dissolution. This provides, through differential imaging techniques, insight into local reaction rates at the mineral interfaces [50, 41]. However, none of these approaches incorporates uncertainty quantification in the estimates. In contrast, alternatives accounting for deviations in the petrophysical properties due to imaging limitations are mainly purely model-related approaches [63, 54]. The main novelty of our work, therefore, lies in its ability to address both morphological uncertainty and reaction rate quantification from the perspective of coupling physics-based models with data-driven techniques. Figure 15: **Prior and posterior distributions on the inverse parameter \(\gamma_{\Theta}\) for 2D+Time reactive inverse problem:** a) Global distribution on the estimated parameter \(D_{m}^{*}\) arising from a-posteriori analysis on the sampling step 2. We discard the distribution tail after the vertical dashed line corresponding to the \(80^{\text{th}}\) percentile. b) Resulting prior distribution on the inverse parameter \(\gamma_{\Theta}\), used as an a-priori in step 3. c) Posterior distribution on \(\gamma_{\Theta}\) obtained from the overall data assimilation problem throughout sampling step 3. In this sense, we have developed a data assimilation approach for pore-scale imaging problems that combines dynamical microtomography and physical regularization induced by PDE models of reactive processes. We integrated this novel data assimilation strategy into the Bayesian inference context through our efficient AW-HMC framework for BPINNs [53].
This also confirmed the great potential of this adaptive and self-balancing methodology and rendered BPINNs a promising approach to address complex data assimilation. In the pore-scale imaging context, in particular, we have focused on multitask inverse problems of calcite dissolution based on dynamical \(\mu\)CT images, along with two dimensionless inverse parameters and a latent concentration field. We have also assumed the scaling of the different tasks to be unknown, without informative priors, and relied on automatic adjustment of the uncertainties, including noise-related estimations and model adequacy. In this sense, we provided reliable uncertainty quantification on the micro-porosity field description and the reactive parameters. We also built our data assimilation upon a sequential reinforcement strategy of the multi-potential energy, and thus of the target posterior distribution. This involved successively integrating additional PDE constraints into the overall data assimilation process through dedicated sampling steps. Finally, we have also addressed computational concerns and have shown that a suitable formulation of complex non-linear differential operators, especially the heterogeneous diffusion arising from Archie's law, can significantly reduce the computational costs of these operators. Taken together, we presented an intrinsic data assimilation strategy for pore-scale imaging inverse problems and demonstrated its efficiency on several 1D+Time and 2D+Time calcite dissolution problems. Overall, our results confirmed enhanced morphological uncertainties localized on the calcite core edges throughout the dissolution process. This characterized the challenge of capturing reliable mineral interfaces from the dynamical \(\mu\)CT images, and therefore calls into question the confidence of mineral reactivity assessments based merely on differential imaging techniques applied to the \(\mu\)CT scans. Combining data-driven and physics-based approaches thus offers a promising alternative to overcome the limitations of each approach individually, and to mitigate biased predictions. We also obtained reliable insight into the upper and lower bounds for the residual, potentially unresolved, micro-porosity \(\varepsilon_{0}\) in the porous matrix. These estimations can then be incorporated into direct numerical simulation solvers to measure the impact of these micro-porosity variations on the other petrophysical properties, such as permeability. This can also ensure that the macro-scale porosity evolutions due to the reactive processes are significant compared to these intrinsic morphological uncertainties at the pore scale. Finally, we have obtained posterior distributions on the dimensionless reactive parameters characterizing the dissolution inverse problems. We have shown that our data assimilation approach combined with the AW-HMC sampler made it possible to capture the correct parameter ranges without prior knowledge of their scaling, which confirmed the robustness and reliability of the inferences. Last but not least, we have identified uncertainty ranges on the usual catalytic Damköhler number \(\mathrm{Da}_{\mathrm{II}}\) resulting from the prescribed PDE model and dynamical observations of the dissolution process. This is of great interest to aggregate experimental investigations and direct numerical simulations, and therefore to guarantee the reliability of pore-scale modeling and simulation of reactive flows.
We now have the potential to effectively address robust and reliable uncertainty quantification in pore-scale imaging and to manage the impact of \(\mu\)CT limitations on the petrophysical properties and reactive parameters. As future prospects, it would be interesting to apply the present data assimilation approach to real \(\mu\)CT dissolution scans and to extend the inference to different kinds of dissolution regimes. This would bring further insights into the relationship between experiments and mathematical modeling theory, which would dramatically improve the trust in computational approaches to real-life reactive materials.
2301.11696
SLCNN: Sentence-Level Convolutional Neural Network for Text Classification
Text classification is a fundamental task in natural language processing (NLP). Several recent studies show the success of deep learning on text processing. Convolutional neural network (CNN), as a popular deep learning model, has shown remarkable success in the task of text classification. In this paper, new baseline models have been studied for text classification using CNN. In these models, documents are fed to the network as a three-dimensional tensor representation to provide sentence-level analysis. Applying such a method enables the models to take advantage of the positional information of the sentences in the text. Besides, analysing adjacent sentences allows extracting additional features. The proposed models have been compared with the state-of-the-art models using several datasets. The results have shown that the proposed models have better performance, particularly in the longer documents.
Ali Jarrahi, Ramin Mousa, Leila Safari
2023-01-27T13:16:02Z
http://arxiv.org/abs/2301.11696v1
# SLCNN: Sentence-Level Convolutional Neural Network for Text Classification ###### Abstract Text classification is a fundamental task in natural language processing (NLP). Several recent studies show the success of deep learning on text processing. Convolutional neural network (CNN), as a popular deep learning model, has shown remarkable success in the task of text classification. In this paper, new baseline models have been studied for text classification using CNN. In these models, documents are fed to the network as a three-dimensional tensor representation to provide sentence-level analysis. Applying such a method enables the models to take advantage of the positional information of the sentences in the text. Besides, analysing adjacent sentences allows extracting additional features. The proposed models have been compared with the state-of-the-art models using several datasets. The results have shown that the proposed models have better performance, particularly in the longer documents. Text Classification, Deep Learning, Convolutional Neural Network, Natural Language Processing ## 1 Introduction In recent years, the production of unstructured texts (documents) has grown exponentially. Unstructured texts can be found everywhere, e.g., emails, social media, chat conversations, comments and websites. Although text data can be a rich source of information, it is hard to extract value from this type of unstructured data. Text classification is a fundamental task in natural language processing (NLP). The task is the process of assigning a class label from a set of predefined classes to a given text according to its content, and has many applications such as sentiment analysis [15], spam detection [16] and topic categorization [1]. Text classification can be done manually or automatically. Although the manual method is more accurate, it is very costly and time consuming. Therefore, to provide scalability, several machine learning, NLP and other techniques are used for automatic text classification. Supervised learning is a machine learning task of learning a function (classifier) using pre-labeled samples as a training dataset [17]. A key step in supervised learning is feature extraction. Traditional machine learning methods represent text with hand-crafted features, e.g., n-grams [14]. Recently, deep learning methods have been used for automatic feature extraction, including convolutional neural networks (CNNs) [12], recurrent neural networks (RNNs) [15] and particularly long short-term memory (LSTM) [16]. In this paper, we present a new baseline model for text classification using CNN. In this model, documents are fed to the network as a three-dimensional tensor representation to provide sentence-level analysis. The paper is structured as follows. The previous works have been summarized in the next section. The details of the proposed methods are described in section 3. We have evaluated our approach on several benchmark datasets. The experimental results are presented in section 4. Finally, the paper concludes with future research directions in section 5. ## 2 Related works Different approaches have been proposed for text classification. Initial approaches were based on classical machine learning techniques, which followed two stages, i.e., extracting hand-crafted features and classifying the documents. Typical features include bag-of-words (BoW), n-grams, and their TF-IDF.
Alternatively, several recent studies show the success of deep learning on text classification. As neural networks receive their inputs numerically, word embeddings, e.g., word2vec [16] or GloVe [14], are usually used to represent words as numerical vectors by capturing the similarities/regularities between words. There are a variety of deep learning models for text classification. Due to the sequential nature of textual data, recurrent neural networks (RNN), including long short-term memory (LSTM) and gated recurrent units (GRU) [15], have been widely used in text processing. For example, the authors of [17] examined generative and discriminative LSTM models for text classification. They found that although the generative models perform better than BoW, they have higher asymptotic error rates than discriminative RNN-based models. Another popular model is CNN, which was originally invented for computer vision [13]. Subsequently, CNN models have been applied in NLP and have achieved excellent results [12]. Many researchers have worked on the effective use of CNNs in text classification since a single-layer word-level CNN was successfully used in sentence classification with pre-trained word embeddings [10]. The proposed method in [11] was the first attempt to perform text classification entirely at the character level, and reported competitive results. Their models one-hot encode 70 characters, including 26 English letters, 10 digits, 33 other characters and the new line character. [1] adapted very deep convolutional networks, i.e., ResNet [1], to character-level text classification. Some researchers tried to improve the performance of the models by applying extra mechanisms. Attention is one of the most effective mechanisms, selecting significant information to achieve superior results [15]. Deep neural networks with an attention mechanism can yield better results. Some of the remarkable examples include source-target attention and self-attention [10]. In particular, a two-level attention mechanism, i.e., word attention and sentence attention, was developed on GRU by [18] for document classification. In [18], the authors used dense connections with multi-scale feature attention in order to produce variable n-gram features. Since this paper aims to present a new baseline model, employing such mechanisms has been avoided. ## 3 Method In this section, we describe the architecture of the proposed Sentence-Level Convolutional Neural Network (SLCNN) for classifying documents. The key idea of the model is that using positional information of each sentence in the document may improve the performance of the classifier. Furthermore, analysing adjacent sentences allows extracting some extra features, e.g., writing style features, which can be useful in some applications, such as spam review detection and fake news detection. Hence, we present two baseline models based on the CNN architecture for the text classification task. For this purpose, we introduce a three-dimensional representation of documents to enable sentence-level analysis. The pre-processing phase and the architecture of the SLCNN and its variant SLCNN+V are explained in the following subsections. ### 3-1. Pre-processing During the pre-processing phase, the documents are cleaned by removing some unimportant characters, such as HTML tags and punctuation. Then all words are normalized by converting them to lowercase.
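As a rough illustration of this cleaning step, a minimal sketch follows; NLTK is what the experiments in Section 4 rely on, while the regular expression and the alphanumeric token filter are our assumptions rather than the authors' exact rules.

```python
import re
from nltk.tokenize import sent_tokenize, word_tokenize

def preprocess(doc):
    # Remove HTML tags, split into sentences, then keep lowercase
    # alphanumeric tokens (dropping punctuation).
    doc = re.sub(r"<[^>]+>", " ", doc)
    sentences = []
    for sent in sent_tokenize(doc):
        tokens = [w.lower() for w in word_tokenize(sent) if w.isalnum()]
        if tokens:
            sentences.append(tokens)
    return sentences  # list of token lists, one per sentence
```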
After that, as the most important step, each document is transformed into a three-dimensional tensor, illustrated in Figure 1. As shown in the figure, the sentences of the document form the first dimension of the tensor. In the same way, the words of the sentences shape the second dimension, while the third dimension represents the word vectors of the words. Pre-trained word embeddings, e.g., word2vec and GloVe, could be used for representing the word vectors. Since the input size of the network must be fixed, and the sizes of both documents and sentences vary, we consider two thresholds: one for the number of sentences in the documents, T\({}_{d}\), and another for the number of words in the sentences, T\({}_{s}\). Documents and sentences longer than the thresholds are cropped, and shorter ones are padded with zeros. After some statistical analysis on the datasets in our experiments, as well as considering the structure of the SLCNN, we chose T\({}_{s}\)=46. In the same way, the threshold for the number of sentences in the documents is calculated by the following equation: \[T_{d}=\left[\mu+1.5\;\sigma\right] \tag{1}\] where \(\mu\) is the average number of sentences in the documents, and \(\sigma\) is the standard deviation. As a result, the outlier sizes are ignored to prevent the model from constructing very large and sparse tensors. The relevant statistical data is provided in section 4. ### 3-2. The Architecture The architecture of the proposed models is illustrated in Figure 2. Overall, in the input layer, the documents are provided in the form of the 3D tensor introduced in section 3-1. After that, using four horizontal convolutional blocks (HCB), one feature per filter is extracted for each sentence individually. In other words, one feature vector for each sentence is provided just before the fully-connected layers, with its size equal to the number of filters. In this way, in addition to the word-level features, the positional information of the sentences is also used in the learning process. Moreover, as mentioned before, analysing adjacent sentences can extract some useful features. For this purpose, the second model (SLCNN+V) is created by adding a vertical convolutional block (VCB) before the fully-connected layers. Finally, there are two fully-connected (dense) layers which lead to the output layer. Figure 1: Shape of the converted documents. Figure 2: The architecture of the proposed models. The dashed block (VCB) is used only in SLCNN+V. Looking at the details of the convolutional blocks, as shown in Figure 3, there are two sequential convolution layers, each one followed by a Rectified Linear Unit (ReLU) activation function, _f(x)= max (0, x)_. A convolution operation consists of a filter \(w\in\mathbb{R}^{s\times t\times d}\), which is applied to each possible window of \(s\times t\) features from its input _feature map_, \(X\), to produce a new feature map by equation (2): \[X=\begin{bmatrix}x_{1,1}&x_{1,2}&\ldots&x_{1,n}\\ x_{2,1}&x_{2,2}&\ldots&x_{2,n}\\ \vdots&\vdots&\ddots&\vdots\\ x_{m,1}&x_{m,2}&\ldots&x_{m,n}\end{bmatrix},\qquad\tilde{x}_{i,j}=f(w\cdot x_{i:i+s-1,\,j:j+t-1}+b) \tag{2}\] where \(x_{i:i+s-1,\,j:j+t-1}\) denotes the window of features within the specified interval, \(b\in\mathbb{R}\) is a bias term and \(f\) is a non-linear function such as the ReLU. For the HCB, we consider \(s\)=1 and \(t\)=2, and for the VCB \(s\)=2 and \(t\)=1.
It should be noted that, in the first convolution layer of the first HCB, \(d\) (the third dimension of the filters) is equal to the size of the word vectors, and in the other cases \(d\)=1. At the end of the blocks, there is a max-pooling operation, with pooling size = 2, that is applied over the generated intermediate feature map to select the maximum value from any two adjacent features as the more important feature. The new feature map is calculated by the following equation: \[\tilde{x}_{i,j}=\begin{cases}\max\{x_{i,2j-1},\,x_{i,2j}\}&\text{for the HCB}\\ \max\{x_{2i-1,j},\,x_{2i,j}\}&\text{for the VCB}\end{cases} \tag{3}\] This describes the process of extracting one feature with one filter. The model uses multiple filters to obtain multiple features. The final extracted features are passed to the fully-connected layers, which end in a softmax output layer producing the probability distribution over labels. For regularization, a dropout module (Hinton et al. 2012) is employed after each fully-connected layer. ## 4 Experiments ### Experimental settings The Natural Language Toolkit (NLTK) was used to tokenize words and sentences. In the input layer, as mentioned before, pre-trained word embeddings are used to convert the words into the corresponding word vectors. We used 100-dimensional GloVe in our experiments. Out-Of-Vocabulary (OOV) words were initialized from a uniform distribution with range [-0.01, 0.01]. We set the number of filters to 128 for all the convolutional blocks. Also, we considered two different sizes for the fully-connected layers, shown in Table 1. Both dropout rates were set to 0.5. The model's parameters were trained by the Adam Optimizer (Kingma and Ba 2014), with an initial learning rate of 0.001. The model has been implemented using Keras and run for 50 epochs. Figure 3: The convolutional blocks. k is the number of filters. (a) HCB and (b) VCB. ### Benchmark Datasets We utilized six datasets covering different classification tasks compiled by [22]. General specifications are presented in Table 2. All data are evenly distributed across class labels. AG and DBPedia are news and ontology classification datasets, respectively. Yelp and Amazon are sentiment classification datasets, where '.P' (Polarity) in the dataset names indicates that the labels are binary while '.F' (Full) means that the labels refer to the number of stars. Some of the statistical information extracted from the datasets, after the pre-processing step, is summarized in Table 3. As presented in the table, with T\({}_{s}\)=46, the proportions of cropped sentences are between 2 and 2.9 percent, which shows that sentence lengths are similar across the datasets. By contrast, the numbers of sentences per document differ considerably across the datasets. Using Equation 1, T\({}_{d}\) for AG News, DBPedia, Amazon and Yelp is equal to 4, 6, 10 and 20, respectively. Also, the proportions of cropped documents, using the relevant T\({}_{d}\), are 0.4, 3, 3.6 and 6 percent for AG News, Amazon, DBPedia and Yelp respectively, which means that the variance of the number of sentences in the documents of Yelp is greater than in the others.
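Before turning to the comparisons, the pipeline described above can be summarized in a compact Keras sketch of the SLCNN variant (without the VCB of SLCNN+V). Two caveats: standard `Conv2D` layers mix all input channels, whereas the paper sets \(d\)=1 after the first layer; and the flatten step before the dense layers is our assumption where the text is not explicit.

```python
from tensorflow import keras
from tensorflow.keras import layers

def hcb(x, filters=128):
    # Horizontal convolutional block (Fig. 3a): two 1x2 convolutions with
    # ReLU, then 1x2 max-pooling along the word axis. With T_s = 46, four
    # such blocks reduce each sentence to one feature per filter
    # (46 -> 22 -> 10 -> 4 -> 1), matching the choice of T_s.
    x = layers.Conv2D(filters, (1, 2), activation="relu")(x)
    x = layers.Conv2D(filters, (1, 2), activation="relu")(x)
    return layers.MaxPooling2D(pool_size=(1, 2))(x)

def build_slcnn(T_d, n_classes, T_s=46, dim=100, fc=512):
    inp = keras.Input(shape=(T_d, T_s, dim))
    x = inp
    for _ in range(4):
        x = hcb(x)
    x = layers.Flatten()(x)                  # T_d x 128 sentence features
    for _ in range(2):                       # the two fully-connected layers
        x = layers.Dense(fc, activation="relu")(x)
        x = layers.Dropout(0.5)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return keras.Model(inp, out)

model = build_slcnn(T_d=20, n_classes=2)     # e.g., Yelp.P with T_d = 20
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```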
### Results We compared our models with several popular base models, e.g., linear models [22], an RNN-based model, i.e., Discriminative-LSTM [30], and CNN-based models including the classical word-level CNN [22], character-level CNN [22], very deep CNN [15] and CNN with fastText embedding (Joulin et al. 2017). \begin{table} \begin{tabular}{l|c|c} \hline \hline Layers & Small & Large \\ \hline Fully-connected 1 & 512 & 1024 \\ \hline Fully-connected 2 & 512 & 1024 \\ Output & Depends on the problem \\ \hline \hline \end{tabular} \end{table} Table 1: Fully-connected layers in our experiments. \begin{table} \begin{tabular}{l|c c c c c c} \hline \hline Datasets & AG News & DBPedia & Yelp.P & Yelp.F & Amazon.P & Amazon.F \\ \hline \# of training samples & 120k & 560k & 560k & 650k & 3600k & 3000k \\ \hline \# of test samples & 7.6k & 70k & 38k & 50k & 400k & 650k \\ \hline \# of classes & 4 & 14 & 2 & 5 & 2 & 5 \\ \hline \hline \end{tabular} \end{table} Table 2: Datasets in our experiments. \begin{table} \begin{tabular}{l|c c c c c c} \hline \hline Statistics & AG & DBPedia & Yelp.P & Yelp.F & Amazon.P & Amazon.F \\ \hline \# of sentences & 164k & 1505k & 5082k & 5958k & 18654k & 16986k \\ \hline Cropped sentences (\%) & 2 & 2.9 & 2.6 & 2.6 & 2.4 & 2.5 \\ \hline Cropped documents (\%) & 0.4 & 3.6 & 6 & 6 & 3.1 & 3 \\ \hline Documents that contain cropped sentences (\%) & 2.5 & 6.9 & 16.1 & 16.4 & 10.3 & 10.9 \\ \hline \# of sentences in the longest text & 15 & 25 & 141 & 151 & 85 & 99 \\ \hline \# of words in the longest sentence & 135 & 1302 & 1104 & 1175 & 522 & 520 \\ \hline Vocab size & 62k & 786k & 283k & 311k & 1546k & 1464k \\ \hline T\({}_{d}\) & 4 & 6 & 20 & 20 & 10 & 10 \\ \hline \# of trainable parameters in SLCNN _small_ & 783k & 920k & 1831k & 1832k & 1176k & 1177k \\ \hline \# of trainable parameters in SLCNN _large_ & 1835k & 2107k & 3930k & 3933k & 2619k & 2622k \\ \hline \# of trainable parameters in SLCNN+V _small_ & 653k & 723k & 1176k & 1177k & 848k & 850k \\ \hline \# of trainable parameters in SLCNN+V _large_ & 1508k & 1649k & 2554k & 2557k & 1899k & 1902k \\ \hline Training time for a single epoch (s) & 10 & 51 & 150 & 170 & 510 & 440 \\ \hline \hline \end{tabular} \end{table} Table 3: The statistical information of the datasets. Since our aim was to provide new baseline models, and employing other mechanisms, such as attention, has been avoided, such models have been excluded from the comparison. The results are listed in Table 4 based on accuracy. Overall, it can be seen that the proposed models outperformed all the other models on half of the datasets: DBPedia, Yelp.P and Yelp.F. In particular, the improvement is significant on the Yelp datasets, i.e., around 2 percent on Yelp.P and around 5 percent on Yelp.F compared to the character-level and word-level CNNs. On the Amazon datasets, SLCNN+V was ranked third after VDCNN and the character-level CNN, with around 94 and 58.1 percent on Amazon.P and Amazon.F, respectively. On AG News, despite competitive results with the other CNN models, n-grams and Discriminative-LSTM achieved better results. One of the main reasons is the number of sentences in the documents: the proposed models perform better on documents with a large number of sentences, i.e., Yelp. Another reason that hinders better performance on the Amazon datasets is the very high vocabulary size (see Table 3), since the word embedding we used covers just over 1M words.
## 5 Conclusion and future works This paper offers new baseline models for text classification using a sentence-level CNN. The key idea is representing the documents as a 3D tensor to enable sentence-level analysis. The proposed models have been compared with state-of-the-art models using several datasets. The results have shown that the proposed models have better performance, particularly on longer documents. As future work, the attention mechanism will be utilized in the proposed models in order to improve the overall performance. Also, we will work on sentence standardization. We believe that applying a standard form of sentences enables the proposed models to use compositional methods (with different 3D filters), due to the 3D structure of the input tensor.
2310.07979
Graph-SCP: Accelerating Set Cover Problems with Graph Neural Networks
Machine learning (ML) approaches are increasingly being used to accelerate combinatorial optimization (CO) problems. We investigate the Set Cover Problem (SCP) and propose Graph-SCP, a graph neural network method that augments existing optimization solvers by learning to identify a much smaller sub-problem that contains the solution space. Graph-SCP uses both supervised learning from prior solved instances and unsupervised learning aimed at minimizing the SCP objective. We evaluate the performance of Graph-SCP on synthetically weighted and unweighted SCP instances with diverse problem characteristics and complexities, and on instances from the OR Library, a canonical benchmark for SCP. We show that Graph-SCP reduces the problem size by 60-80% and achieves runtime speedups of up to 10x on average when compared to Gurobi (a state-of-the-art commercial solver), while maintaining solution quality. This is in contrast to fast greedy solutions that significantly compromise solution quality to achieve guaranteed polynomial runtime. We showcase Graph-SCP's ability to generalize to larger problem sizes, training on SCP instances with up to 3,000 subsets and testing on SCP instances with up to 10,000 subsets.
Zohair Shafi, Benjamin A. Miller, Tina Eliassi-Rad, Rajmonda S. Caceres
2023-10-12T01:57:27Z
http://arxiv.org/abs/2310.07979v2
# Graph-SCP: Accelerating Set Cover Problems with Graph Neural Networks ###### Abstract. Machine learning (ML) approaches are increasingly being used to accelerate combinatorial optimization (CO) problems. We look specifically at the Set Cover Problem (SCP) and propose Graph-SCP, a graph neural network method that can augment existing optimization solvers by learning to identify a much smaller sub-problem that contains the solution space. We evaluate the performance of Graph-SCP on synthetic weighted and unweighted SCP instances with diverse problem characteristics and complexities, and on instances from the OR Library, a canonical benchmark for SCP. We show that Graph-SCP reduces the problem size by \(30\)-\(70\%\) and achieves run time speedups of up to \(25\)x when compared to commercial solvers (Gurobi). Given a desired optimality threshold, Graph-SCP will improve upon it or even achieve \(100\%\) optimality. This is in contrast to fast greedy solutions that significantly compromise solution quality to achieve guaranteed polynomial run time. Graph-SCP can generalize to larger problem sizes and can be used with other conventional or ML-augmented CO solvers for potential additional run time improvement. ## 1. Introduction Machine Learning (ML) algorithms have been widely used in many domains like computer vision and natural language processing, but have only recently been applied to solving and accelerating combinatorial optimization (CO) problems. These applications fall broadly into two categories, namely learning a model in an end-to-end manner (e.g., (Garon et al., 2016)), i.e., using a learned model to generate a feasible solution, or learning a model to aid an existing CO solver (e.g., (Garon et al., 2016)). The survey by Bengio et al. (2016) provides a comprehensive overview of methods used within each category. We look at one particular NP-hard CO problem, the set cover problem (SCP), and use a learned model to speed up the run time while maintaining solution quality. Concretely, we propose Graph-SCP, where we cast an instance of SCP as a graph and learn a graph neural network (GNN) to predict a subgraph that contains the solution. The nodes of this subgraph are then passed into a conventional CO solver, using for illustration Gurobi (Gurobi, 2016), a state-of-the-art approach. Graph-SCP achieves a \(30\)-\(70\%\) reduction of the input problem size, which leads to run time improvements of up to \(25\)x. Given a desired solution optimality threshold, Graph-SCP will improve upon it or even achieve \(100\%\) optimality. We show results for SCP instances ranging across various densities, sizes, and costs (weighted vs. unweighted) and demonstrate the ability of Graph-SCP to generalize across various problem characteristics. Given that Graph-SCP works to reduce the input size to a solver and does not interfere with the workings of the solver, it can be used with other conventional or ML-augmented CO solvers for potential additional run time improvements. ## 2. Problem Definition and Methods ### Set Cover Problem Given a binary matrix \(A\in\mathbb{R}^{m\times n}\), the set cover problem is defined as covering all \(m\) rows by the minimum-cost subset of the \(n\) columns. The costs of the columns are represented in the column vector \(c\in\mathbb{R}^{n\times 1}\). An example of such a covering matrix is shown in Figure 1(B).
More formally, \[x_{j}=\begin{cases}1&\text{if column $j$ is in the solution}\\ 0&\text{otherwise}\end{cases}\quad\forall j\in n \tag{1}\] \[\operatorname*{minimize}\;\sum_{j\in n}c_{j}x_{j} \tag{2}\] \[s.t.\;\sum_{j\in n}A_{ij}x_{j}\geq 1,\;\forall i\in m \tag{3}\] \[x_{j}\in\{0,1\}. \tag{4}\] Given such a matrix, we can then define its density as \[d=\frac{q}{m\times n}, \tag{5}\] where \(q\) is the number of non-zero entries in the matrix \(A\) (Garon et al., 2016). Given an instance of SCP, we represent the SCP instance as a directed graph by treating the covering matrix as an adjacency matrix with elements in the universe as rows and sets as columns (Figure 1(C)). For the remainder of this section, we adopt a slightly different framing of SCP for pedagogy. Concretely, given a set of elements \(\{1,2,\ldots m\}\) (called the universe) and a collection of \(n\) sets whose union equals the universe, the set cover problem is to identify a sub-collection of the \(n\) sets whose union equals the universe (Figure 1(A)). A universe node is added to the graph abstraction discussed above, with directed edges from the universe node to each element node that needs to be covered, as shown in Figure 1(C). Observe how this creates a directed tripartite graph representation. For each node in this graph, we also consider the following features: * Cost of each column node, representing the cost of the corresponding subset, with the costs of universe and element nodes set to 0. * Layer (or category) of each node. We use a binary indicator variable, with the value set to 1 for the universe and element nodes and 0 for nodes representing the subsets. * Cover, i.e., the number of elements of the universe covered by each subset node. In the next section, we discuss our approach to reducing the SCP problem size by framing it as a subgraph selection learning task; a sketch of the graph construction follows the figure captions below. ### Graph-SCP The field of graph representation learning has demonstrated the important role of learned features in improving many inference tasks on graphs. We explore whether learned features can similarly reveal important structure about the SCP solution space. In particular, we are interested in understanding whether nodes that fall in the optimal solution are easy to separate in some learned feature space. Figure 2 shows a high-level summary of Graph-SCP, which is discussed in detail below. #### 2.2.1. Subgraph Selection Task Given the previously described graph abstraction of the SCP problem, we aim to learn, offline, a GNN model to predict which subset of nodes is relevant to the solution. More formally, given a graph \(G\) with node features \(X\), we learn a GNN function \(f(G,X)\) to predict a classification vector \(y\in\{0,1\}\), where 1 indicates nodes in the subgraph which still contain the SCP solution and 0 otherwise. Figure 1. (A) A simple example of a set cover problem. Here, each subset has an equal cost. (B) The set cover instance represented in the covering matrix format. (C) The set cover instance represented in graph format. This graph is passed into a GNN to learn predictions of a subset of nodes/sets important to solving SCP (highlighted in pink). The prediction values for Layer 0 and Layer 1 nodes (grey-colored nodes) are discarded, since only Layer 2 nodes contribute to the solution. Figure 2. Overall system diagram for Graph-SCP. A GNN is trained offline to predict a subgraph of nodes that contain the solution to the SCP instance. At run time, the GNN is used (only once) to generate predictions. Graph-SCP picks nodes at a set percentile threshold as the subgraph; if the objective criterion is not met, the threshold is decreased, thereby selecting a larger subgraph.
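For concreteness, this tripartite construction with the three node features can be sketched with NetworkX as follows; the node keys are illustrative, not the authors' code.

```python
import networkx as nx
import numpy as np

def scp_to_graph(A, costs):
    # A: m x n binary covering matrix (rows = universe elements,
    # columns = subsets); costs: length-n vector of column costs.
    m, n = A.shape
    G = nx.DiGraph()
    G.add_node("U", cost=0.0, layer=1, cover=0)              # universe node
    for i in range(m):                                        # element nodes
        G.add_node(("elem", i), cost=0.0, layer=1, cover=0)
        G.add_edge("U", ("elem", i))
    for j in range(n):                                        # subset nodes
        G.add_node(("set", j), cost=float(costs[j]), layer=0,
                   cover=int(A[:, j].sum()))
        for i in np.flatnonzero(A[:, j]):
            G.add_edge(("elem", i), ("set", j))               # element -> set
    return G
```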
To generate training data, we solve instances using Gurobi, a state-of-the-art CO solver, and label the subsets (i.e., nodes in our graph abstraction) that are part of the solution as 1, with all other subsets labelled as 0. #### 2.2.2. Thresholding The trained GNN model outputs continuous values between 0 and 1. In order to select a subset of nodes, Graph-SCP selects nodes at a predefined percentile threshold and passes the selected nodes into a CO solver. In our experiments we use Gurobi for illustration; however, other solvers of choice can be used. Graph-SCP checks if the objective value returned by the solver on the reduced problem meets a user-defined optimality criterion _opt_. If not, the percentile threshold is reduced, thereby selecting a larger set of nodes and passing it into the solver. The selection of the initial threshold has an impact on the run time performance as well as the quality of the solution obtained. For our experiments, we illustrate with _opt_=95%; however, depending on the desired trade-off between solution quality and run time, other threshold choices might be more appropriate. The impact of varying initial thresholds is discussed in the Results section in Figure 8. #### 2.2.3. Experimental Setup We experiment with several popular GNN architectures including Graph Attention Networks (GAT) (Garibani et al., 2017), a vanilla Graph Convolutional Network (GCN) (Garibani et al., 2018), GraphSAGE (Garibani et al., 2019) and Chebyshev GCN (Garibani et al., 2019). Details of a comparative study between architectures are discussed in the Results section (Figure 7). For the remainder of this work, we use the best performing architecture, the Graph Attention Network (GAT), with two graph attention layers (64 and 128 attention heads respectively) followed by 3 dense fully connected layers with 128 nodes in each layer. We use batch normalization and dropout layers with a dropout probability of 60% after each layer. A binary cross-entropy loss is used to train the model. Interestingly, we notice that the model generalises better if trained for fewer epochs per SCP instance with repeated passes over the instances. Thus, instead of a single pass with a larger number of epochs, we train every instance for 250 epochs with two passes over the dataset. To generate training data and demonstrate the generalizability of Graph-SCP, we generate instances with various densities and characteristics. We also use the canonical OR Library (Beng et al., 2019) at the testing stage. The instances generated for training reflect the OR Library in range of densities, number of columns, and number of rows, but incorporate additional variation in the distribution of costs. We categorize the instances into 5 instance types and show results for each: * **Instance Type 1**: These instances have costs picked uniformly from [50, 100, 200], with densities around 0.2. * **Instance Type 2**: These instances have equal costs for all sets and have densities between 0.1 and 0.2. * **Instance Type 3**: These instances are similar to Instance Type 1, but with lower densities around 0.1 and costs picked using a Poisson distribution with \(\lambda=20\).
* **Instance Type 4**: These instances have costs picked using a Poisson distribution with \(\lambda=20\) and have a lower density of around 0.04. * **Instance Type 5**: These instances were picked from the OR Library (Beng et al., 2019). We use 10 sets of instances defined in the OR Library, of varying size complexity. Similar to (Bengio et al., 2017), we test against instances with the number of columns ranging from 1000 to 5000, with results shown in Figure 5. Note that we use the same model trained on Instance Types 1-4 (with a maximum of 400 rows). Costs were set using a uniform distribution between \([0,100]\). Graph-SCP achieves the optimal value for all instances in this experiment while maintaining an average of 1.35x run time improvement. Next, we run several ablation and comparative studies to understand the impact of our modelling choices. We show representative results using Instance Type 1, with results for other Instance Types omitted due to space constraints. ### Node Feature Study To arrive at the most predictive set, we train and test 4 models with all combinations of the node features (we always include cost as a feature) on Instance Type 1. Results shown here were found to generalize to other instance types. Results are shown in Figure 6, where the \(x\)-axis shows the speedup in run time compared to Gurobi as a baseline and the \(y\)-axis shows the quality of the solution in terms of the ratio between the objective values obtained by Graph-SCP and Gurobi. Observe that using Costs and Cover leads to the largest speedup, whereas using all 3 node features leads to the best solution quality. In this work, we chose to use Costs and Cover as the two node features in order to achieve the best speedup. ### GNN Architecture Study We compare 4 commonly used GNN architectures: Graph Attention Networks (GAT) (Garvin et al., 2017), Graph Convolutional Network (GCN) (Garvin et al., 2017), GraphSAGE (Garvin et al., 2017) and Chebyshev GCN (Chen et al., 2017). The results for each type of GNN are shown in Figure 7. We observe that every GNN except Chebyshev GCN achieves a faster run time than Gurobi, with GAT achieving the highest speedup at 8x. These architectures were trained on all Instance Types and tested on Instance Type 1 only, with results seen to generalise across test Instance Types (results omitted due to space constraints). ### Threshold Sensitivity Analysis We study the impact of varying the initial threshold on run time and objective performance (Figure 8). Test instances from Instance Type 1 were used, with results generalising across Instance Types. Shown on the left \(y\)-axis are the speedups achieved, with the right \(y\)-axis showing solution quality. Observe that higher initial thresholds lead to faster run times, with lower thresholds leading to better solution quality since a larger portion of the original problem is passed in. Setting the initial threshold too high might lead to multiple calls to the solver after reducing the threshold.
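Before turning to limitations, the percentile-threshold loop of Sect. 2.2.2 can be summarized in a short sketch; `gnn_scores`, `solve_scp`, and `good_enough` are placeholder callables rather than the authors' API, and the feasibility pre-check is our addition.

```python
import numpy as np

def graph_scp(A, costs, gnn_scores, solve_scp, good_enough,
              init_pct=90, step=10):
    # gnn_scores: GNN outputs for the n subset nodes (one forward pass).
    # solve_scp(A_sub, c_sub): wraps a solver such as Gurobi and returns
    # (objective, binary selection vector) for the reduced instance.
    pct = init_pct
    while pct > 0:
        keep = np.flatnonzero(gnn_scores >= np.percentile(gnn_scores, pct))
        if A[:, keep].any(axis=1).all():          # every row still coverable
            obj, x = solve_scp(A[:, keep], costs[keep])
            if good_enough(obj):
                return obj, keep[x.astype(bool)]  # chosen columns
        pct -= step                               # select a larger subgraph
    obj, x = solve_scp(A, costs)                  # fall back to full problem
    return obj, np.flatnonzero(x)
```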
## 4. Discussions and Limitations The performance of Graph-SCP depends on the initial threshold, as discussed earlier (Figure 8). This threshold needs to be adjusted according to the density and characteristics of the instances. For instances with small problem sizes, Graph-SCP can be marginally slower than directly solving the problem with a solver, because the overhead of creating and processing the graph is larger than the benefit of reducing the problem size. However, Graph-SCP still achieves a significant reduction in the number of nodes. For larger instances, Graph-SCP is faster than directly solving the problem, because the reduction in the problem size is more substantial. However, when the number of columns is very high, the forward pass through the GNN can be time-consuming and negate the advantage of Graph-SCP. A possible way to improve Graph-SCP is to use a warm-starting technique, where the solution obtained from the current step is used as an initial guess for the next step, instead of solving each sub-problem from scratch. This could speed up the overall system and reduce the number of iterations. Throughout our experiments, setting the right initial threshold per instance type allowed Graph-SCP to call Gurobi only once. However, implementing the ability to warm start could allow one to set higher thresholds, allowing Graph-SCP to make multiple calls to Gurobi with progressively larger problem sizes without loss in run-time performance. ## 5. Related Work We have recently seen a growth of literature that explores the use of ML to replace or augment traditional solvers for various CO problems, with the objective of accelerating run times and, where appropriate, improving solution quality. Surveys by Bengio et al. (Bengio et al., 2017) and Cappart et al. (Coppart et al., 2018) provide a detailed summary of this ongoing research. Under the approach category that uses ML to replace a CO solver, Numeroso et al. (Numeroso et al., 2018) apply the Neural Algorithmic Reasoning framework proposed by Velickovic et al. (Velickovic et al., 2019) to jointly optimize for the primal and dual of a given problem. Specifically, they look at training models to find joint solutions for both the min-cut and max-flow formulations and show that it leads to better overall performance. For NP-hard problems, Khalil et al. (Khalil et al., 2018) propose a reinforcement learning approach that utilizes graph embeddings to learn a greedy policy that incrementally builds a solution. \begin{table} \begin{tabular}{c|c|c|c|c|c} Instance Type & \(m\) & \(n\) & \(z\) & \(d\) & Cost \\ \hline Instance Type 1 & 100–400 & 100–1000 & 50–250 & 0.22–0.29 & Uniform Random [50, 100, 200] \\ Instance Type 2 & 100–300 & 100–500 & 20–60 & 0.16–0.28 & Equal \\ Instance Type 3 & 200–350 & 300–350 & 45–55 & 0.13–0.18 & Poisson (\(\lambda=20\)) \\ Instance Type 4 & 200 & 1000–3000 & 4–5 & 0.04–0.05 & Poisson (\(\lambda=20\)) \\ Instance Type 5 (OR Library) & 100–500 & 1000–5000 & 10–140 & 0.02–0.2 & 1–100 \\ \end{tabular} \end{table} Table 1. Characteristics of SCP instances used during training and testing of Graph-SCP. Here, \(m\) is the number of rows, \(n\) the number of columns, \(z\) the number of non-zero entries per column and \(d\) is the density of the instance. Instance Types 1, 2 and 3 have densities in the range of 10–20%, with Instance Type 4 having low densities in the range of 4–5%. Instance Type 5 instances were picked from the OR Library (Beng et al., 2019). The aim of our work was to focus on augmenting and accelerating an existing solver, allowing us to carry over any of the performance guarantees that come along with such solvers. As part of approaches that use ML to aid conventional solvers, Gasse et al. (Gasse et al., 2017) use a GNN to learn expensive-to-compute branch-and-bound heuristics for Mixed Integer Program (MIP) solvers.
The learning is done offline and the learned models are then used at run time in place of the heuristic to achieve faster run times while maintaining the quality of the solution. This work is similar to our approach in that it uses ML to aid a CO solver; however, it does so by speeding up the internal workings (the branching heuristic) of the solver. Graph-SCP differs in that it reduces the input problem size complexity, and can thereby be used in conjunction with the method proposed by Gasse et al. to potentially achieve further improvements in run time. Kruber et al. (Kruber et al., 2017) use supervised learning (e.g., KNN, RBF) to determine when a Dantzig-Wolfe decomposition should be applied to a MIP. ## 6. Conclusion We present Graph-SCP, a GNN method that can augment conventional or ML-augmented optimization solvers by learning to identify a much smaller sub-problem containing the solution space. In doing so, Graph-SCP achieves faster run times while still taking advantage of the performance guarantees that come along with conventional solutions. Given a desired optimality threshold, Graph-SCP will improve upon it or even achieve 100% optimality with up to 25x faster run times. Unlike most ML solutions for CO, the GNN prediction module within Graph-SCP needs to be run only once. We show that Graph-SCP can generalize across SCP instances with diverse characteristics and complexities and show that it performs well on instances from the canonical OR Library. Through various ablation studies, we provide insights on how the different modeling choices affect Graph-SCP performance. Figure 3. (A) Instance Type 1 (B) Instance Type 2 (C) Instance Type 3 (D) Instance Type 4 (E) Instance Type 5. Across all 5 instances, we see that Graph-SCP achieves over 95% of the optimal objective, reaching 100% for Instance Type 2 (B) and over 98% for Instance Types 3 and 5 ((C) and (D)). In terms of speedup, we see the largest speedup of about 25x for Instance Type 1 (A) and the smallest speedup of about 1.4x for Instance Type 5 (E). Instance Type 5 are instances from the OR Library (Gasse et al., 2017) that are split into different sets based on their characteristics. Shown in (E) are results averaged across all sets. A detailed breakdown of these sets is shown in Table 2. Note that the greedy algorithm runs significantly faster across all 5 instances, albeit with poor objective values. In each subplot, the area of the plot above the red line (random sample) corresponds to the benefits of learning. Also, observe that Graph-SCP has tight variances compared to the baselines.
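For reference, the greedy baseline appearing in these comparisons (fast and polynomial-time, but with noticeably worse objective values) admits a compact implementation. This is the textbook cost-effectiveness rule and not necessarily the exact variant used in the experiments.

```python
import numpy as np

def greedy_set_cover(A, costs):
    # Classical greedy baseline: repeatedly pick the column with the best
    # cost per newly covered element (the standard ln(m)-approximation).
    A = A.astype(bool)
    uncovered = np.ones(A.shape[0], dtype=bool)
    chosen, total = [], 0.0
    while uncovered.any():
        gain = A[uncovered, :].sum(axis=0)          # newly covered per column
        ratio = np.where(gain > 0, costs / np.maximum(gain, 1), np.inf)
        j = int(np.argmin(ratio))
        if gain[j] == 0:                            # nothing covers the rest
            raise ValueError("infeasible SCP instance")
        chosen.append(j)
        total += float(costs[j])
        uncovered &= ~A[:, j]
    return chosen, total
```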
2307.02216
Chimera states in neural networks and power systems
Partial, frustrated synchronization and chimera-like states are expected to occur in Kuramoto-like models if the spectral dimension of the underlying graph is low: $d_s < 4$. We provide numerical evidence that this really happens in case of the high-voltage power grid of Europe ($d_s < 2$), a large human connectome (KKI113) and in case of the largest, exactly known brain network corresponding to the fruit-fly (FF) connectome ($d_s < 4$), even though their graph dimensions are much higher, i.e.: $d^{EU}_g\simeq 2.6(1)$ and $d^{FF}_g\simeq 5.4(1)$, $d^{\mathrm{KKI113}}_g\simeq 3.4(1)$. We provide local synchronization results of the first- and second-order (Shinomoto) Kuramoto models by numerical solutions on the FF and the European power-grid graphs, respectively, and show the emergence of chimera-like patterns on the graph community level as well as by the local order parameters.
Shengfeng Deng, Géza Ódor
2023-07-05T11:46:26Z
http://arxiv.org/abs/2307.02216v2
# Chimera states in neural networks and power systems ###### Abstract Partial, frustrated synchronization and chimera states are expected to occur in Kuramoto-like models if the spectral dimension of the underlying graph is low: \(d_{s}<4\). We provide numerical evidence that this really happens in the case of the high-voltage power grid of Europe (\(d_{s}<2\)), a large human connectome (KKI113) and in the case of the largest, exactly known brain network corresponding to the fruit-fly (FF) connectome (\(d_{s}<4\)), even though their graph dimensions are much higher, i.e.: \(d_{g}^{EU}\simeq 2.6(1)\), \(d_{g}^{FF}\simeq 5.4(1)\) and \(d_{g}^{KKI113}\simeq 3.4(1)\). We provide local synchronization results of the first- and second-order (Shinomoto) Kuramoto models by numerical solutions on the FF and the European power-grid graphs, respectively, and show the emergence of chimera-like patterns on the graph community level as well as by the local order parameters. **We show that Kuramoto oscillator models on the large neural connectome graphs of the fruit fly and of a human brain, as well as on the power grid of Europe, produce chimera states. This is in agreement with the low spectral dimensions that we calculated from the eigenvalue spectra of the Laplacians of these networks. We compare these results with topological dimension measurements and previous simulations, strengthening the expectation that frustrated synchronization should occur, which can generate the slow relaxations obtained in previous studies within the neighborhood of the synchronization transition point.** + Footnote †: preprint: APS/123-QED ## I Introduction Synchronization phenomena are very widespread in nature and the understanding of their behavior is in the focus of interest. In neural systems like the brain, the oscillatory behavior of the building elements has been measured by different techniques, while in the case of power grids, the alternating currents can also be described by coupled oscillators. Both systems are expected to operate close to the synchronization transition point. In the case of the normal brain, self-tuning to the critical point is hypothesized [1] and confirmed by experiments [2] and theoretical considerations [3]. The advantage of criticality is the optimal computational performance and sensitivity, as well as dynamically generated long-range memory and interactions [4]. In the case of power grids, the competition of supply and demand tunes the system close to the synchronization transition point [5]. Synchronization models described by the first Kuramoto equation [6] have recently been investigated on complex networks, and partial synchronization was found if the spectral dimension is below 4, even if generalizations of the Euclidean dimension, the graph and the Hausdorff dimension, are high or diverge [7]. Partial synchronization is more probable in strongly connected modules or communities, which also occur both in biological and technical structures. Modular and most often hierarchical organization is known in general brain networks [8], among others in the case of the fruit-fly (FF) connectome [9; 10], as well as in power grids [11]. Thus synchronization occurs in the strongly coupled modules first, while in the loosely coupled parts, nodes may remain desynchronized under the same conditions; this was called frustrated synchronization [12], reminiscent of the semi-critical Griffiths Phases (GP) of condensed matter [13]. Besides, in these phenomena, fluctuations of the global order parameters diverge in an extended control parameter space. 
Recently this was shown in the case of Kuramoto models on brain connectomes [14; 15; 10; 12; 16] as well as in the case of power grids [17; 18; 11]. One can also relate such structures, emerging in these heterogeneous systems, to chimera states, in which subsets of an ensemble of identical, interacting oscillators exhibit distinct dynamical states, such as one group of synchronized oscillators and one group of desynchronized oscillators [19]. Chimeras were first defined in systems of identical oscillators [19; 20]. In such a case, a non-zero phase lag term is essential for partial synchronization to occur. Realistic models, however, require oscillators to be heterogeneous, and chimeras have been detected on complex [21; 22], brain-like networks [23]. The purpose of the present study is to provide numerical evidence of such structures by solving Kuramoto equations in seemingly different areas of complex systems, on the largest available brain connectome and on the European high-voltage power-grid network. We show they are characterized by a low spectral dimension: \(d_{s}<4\). In these models quenched heterogeneity is present structurally, through the topology of the graphs as well as through the different self-frequencies of the nodes. ## II Methods In this section, we detail the models and the methods we applied to describe synchronization on different networks. ### The first-order Kuramoto model Several oscillator models have been used in biology; the simplest possible one is the Hopf model [24], which has been used frequently in neuroscience, as it can describe a critical point with scale-free avalanches, with sharpened frequency response and enhanced input sensitivity. The local dynamics of each brain area (node) is described by the normal form of a supercritical Hopf bifurcation, also called a Landau-Stuart oscillator, which is the canonical model for studying the transition from noisy to oscillatory dynamics. Another complex model, describing more non-linearity [25], is the Kuramoto model [26; 6], with phases \(\theta_{i}(t)\), located at the \(N\) nodes of a network, evolving according to the dynamical equation \[\dot{\theta}_{i}(t)=\omega_{i}^{0}+K\sum_{j}W_{ij}\sin[\theta_{j}(t)-\theta_{i}(t)]\,. \tag{1}\] The global coupling \(K\) is the control parameter of this model, by which we can tune the system between asynchronous and synchronous states. The summation is performed over the nearest neighboring nodes, with connections described by the weighted/unweighted adjacency matrix \(W_{ij}\), and \(\omega_{i}^{0}\) denotes the intrinsic frequency of the \(i\)-th oscillator. For simplicity, we used a Gaussian distribution with zero mean and unit variance for the self-frequency distribution \(g(\omega_{i}^{0})\) with respect to a rotating frame [14]. Using this model, the resting-state critical behavior on large human connectomes [14; 15] was compared with that of the FF [10] at the global order parameter level, and the topology dependence was pointed out, which suggested an extended fluctuation region and GP-like behavior in the case of the human connectomes, in contrast with the FF network. Very recently we have also investigated an extension of Eq. 
(1) to the Shinomoto-Kuramoto (SK) model, with periodically driven forces [27], to describe the task phase of brain models [16]: \[\dot{\theta}_{j}(t)=\omega_{j}^{0}+K\sum_{k}W_{jk}\sin[\theta_{k}(t)-\theta_{j}(t)]+F\sin(\theta_{j}(t))+\epsilon\eta_{j}(t)\,. \tag{2}\] Here \(\epsilon\) describes an excitation, with a zero-centered, annealed Gaussian random noise \(\eta_{j}(t)\), and a site-dependent periodic force term, proportional to a coupling \(F\), was also added. In fact, a small \(\eta\) proved to be irrelevant for the synchronization transition caused by \(F\) in the presence of the chaotic noise. One of the main conclusions of Ref. [16] was that the community-dependent values of the Hurst exponent \(H\) and the \(\beta\) exponent, measuring the self-similarity of time series, varied more for \(F>0\) than in the resting state of the brain, corresponding to \(F=0\). Now we shall test this community dependence of \(R\) and \(\Omega\) in the steady state. ### The second-order Kuramoto model The time evolution of power-grid synchronization is described by the swing equations [28], set up for mechanical elements (e.g. rotors in generators and motors) with inertia. It is formally equivalent to the second-order Kuramoto equation [29], for a network of \(N\) oscillators with phases \(\theta_{i}(t)\): \[\dot{\theta}_{i}(t)=\omega_{i}(t)\,, \qquad \dot{\omega}_{i}(t)=\omega_{i}^{0}-\alpha\dot{\theta}_{i}(t)+K\sum_{j=1}^{N}A_{ij}\sin[\theta_{j}(t)-\theta_{i}(t)]\,. \tag{3}\] Here \(\alpha\) is the damping parameter, which describes the power dissipation, or an instantaneous feedback [17], \(K\) is the global coupling, related to the maximum transmitted power between nodes; and \(A_{ij}\), which is the adjacency matrix of the network, contains the admittance elements. The quenched external drive, denoted by \(\omega_{i}^{0}\), which is proportional to the self-frequency of the \(i\)-th oscillator and carries a dimension of inverse squared time \([1/s^{2}]\), describes the power in/out of a given node when Eq. (3) is considered to be the swing equation of a coupled AC circuit; but here, similar to the first-order Kuramoto model, we have chosen it to be a zero-centered Gaussian random variable, as the rescaling invariance of the equation allows one to transform it out within a rotating frame. For simplicity, one can assume that \(\omega_{i}(0)\) is drawn from the same distribution as \(\omega_{i}^{0}\) and numerically set \(\omega_{i}(0)=\omega_{i}^{0}\), amounting to taking the time unit \([s]=1\). In our present study the following parameter settings were used: the dissipation factor \(\alpha\) is chosen to be equal to 0.4, to meet expectations for power grids, with the \([1/s]\) inverse-time physical dimension assumption. To characterize the phase transition properties, the phase order parameter \(R(t)\) has been studied for both the first- and second-order Kuramoto models. To ensure the relaxation to the steady states, we measured the Kuramoto phase order parameter \[z(t_{k})=r(t_{k})\exp[i\theta(t_{k})]=1/N\sum_{j}\exp\left[i\theta_{j}(t_{k})\right]\,, \tag{4}\] where \(0\leq r(t_{k})\leq 1\) gauges the overall coherence and \(\theta(t_{k})\) is the average phase at discrete sampling times \(t_{k}\), which were chosen to follow an exponential growth, \(t_{k}=1+1.08^{k}\), to spare memory space. The calculation of derivatives was done adaptively at small time steps via the Bulirsch-Stoer stepper [30]. 
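To make the dynamics concrete, the following is a minimal sketch (our illustration, not the solver used in the paper, which integrates adaptively with a Bulirsch-Stoer stepper) of simple Euler updates for Eq. (1) and Eq. (3), together with the phase order parameter of Eq. (4). The random graph is a stand-in for the FF connectome or the power grid.

```python
import numpy as np

def kuramoto_step(theta, omega0, W, K, dt):
    """One Euler step of the first-order model, Eq. (1), on weighted adjacency W."""
    diff = theta[None, :] - theta[:, None]          # diff[i, j] = theta_j - theta_i
    coupling = (W * np.sin(diff)).sum(axis=1)       # sum_j W_ij sin(theta_j - theta_i)
    return theta + dt * (omega0 + K * coupling)

def swing_step(theta, omega, omega0, A, K, alpha, dt):
    """One Euler step of the second-order (swing) model, Eq. (3)."""
    diff = theta[None, :] - theta[:, None]
    coupling = (A * np.sin(diff)).sum(axis=1)
    domega = omega0 - alpha * omega + K * coupling
    return theta + dt * omega, omega + dt * domega

def order_parameter(theta):
    """r from Eq. (4): modulus of the mean phase factor."""
    return np.abs(np.exp(1j * theta).mean())

# toy run on a sparse random graph (not the FF connectome or the EU grid)
rng = np.random.default_rng(0)
N = 500
A = (rng.random((N, N)) < 0.01).astype(float)
A = np.maximum(A, A.T)                              # symmetrize
omega0 = rng.normal(size=N)                         # zero-mean Gaussian self-frequencies
theta, omega = rng.uniform(0, 2 * np.pi, N), omega0.copy()
for _ in range(5000):
    theta, omega = swing_step(theta, omega, omega0, A, K=10.0, alpha=0.4, dt=0.01)
print(order_parameter(theta))
```

The dense pairwise-difference matrix is fine for a toy system of this size; for the full connectome or grid one would sum only over neighbors of each node.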
The sets of equations (1), (2) and (3) were solved numerically for \(10^{3}-10^{4}\) independent initial conditions in [18], initialized by different \(\omega_{i}^{0}\)-s and different \(\theta_{i}(0)\)-s if disordered initial phases were invoked. Then sample averages over the phases and the frequencies give rise to the Kuramoto order parameter \[R(t_{k})=\langle r(t_{k})\rangle\,, \tag{5}\] and the variance of the frequencies \[\Omega(t)=\frac{1}{N}\sum_{j=1}^{N}\left(\overline{\omega}(t)-\omega_{j}(t)\right)^{2}. \tag{6}\] We do not discuss the time-dependent behavior of the global order parameters, as this has been investigated in detail in [31; 18]. In the steady state, which we determined by visual inspection of \(R(t)\) and \(\Omega(t)\), we measured their mean values and the standard deviations \(\sigma(R(t))\) and \(\sigma(\Omega(t))\) in order to locate the transition points. In the paper we used the \(\sigma(R)\), \(\sigma(\Omega)\) values obtained by sample and time averages in the steady state. ### Topological and spectral dimensions The effective graph (topological) dimension \(d_{g}\) is defined by \[N(r)\sim r^{d_{g}}, \tag{7}\] where we counted the number of nodes \(N(r)\) within chemical distance \(r\) or less from randomly selected seeds and calculated averages over many trials [32; 33]. In most cases, \(d_{g}\) as obtained from this cluster-growing method can serve as an estimate of the more rigorously defined Hausdorff dimension, \(d_{H}\approx d_{g}\) [34]. In Ref. [10] the graph dimension of the FF was estimated to be \(d_{g}^{FF}=5.4(1)\), while in [18] we provided a value for the unweighted European power grid, \(d_{g}^{EU}=2.6(1)\). For regular Euclidean lattices, it has been shown that a true transition to global synchronization in the thermodynamic limit is only possible for \(d>4\) in both the first- and the second-order Kuramoto models [35; 36], while for \(d\leq 4\), there is only a crossover from the asynchronous phase to a partially synchronous phase, characterized by an increasingly broadened variance of \(R\) and a shifting crossover point \(K_{c}^{\prime}\) as the system size increases [37; 36]. The natural question is then whether it is the topological dimension \(d_{g}\) defined in Eq. (7), or equivalently the Hausdorff dimension \(d_{H}\), that dictates the synchronization properties of general networks, which may assume non-integer dimensions. Refs. [37; 7] suggested that the synchronization properties of a general network should be related to the spectral dimension derived from the eigenvalue spectrum of the graph Laplacian matrix, and even more so for the so-called complex network manifolds studied therein, which are constructed out of finite-dimensional simplices but are characterized by an infinite Hausdorff dimension due to their small-world properties [37; 38]. Graph spectral properties of complex networks have been shown to be particularly relevant to the structures of networks [39]. Following Refs. [37; 7], we adopt the normalized Laplacian \(L\) with elements \[L_{ij}=\delta_{ij}-A_{ij}/k_{i} \tag{8}\] for unweighted networks, where \(k_{i}\) denotes the degree of node \(i\). Similarly, for weighted networks, the elements of the normalized Laplacian are given by \[L_{ij}=\delta_{ij}-W_{ij}/k_{i}^{\prime}, \tag{9}\] where \(k_{i}^{\prime}=\sum_{j}W_{ji}\) denotes the weighted in-degree of node \(i\). 
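A sketch (our illustration, not the authors' code) of assembling this Laplacian for a symmetric network and estimating the spectral dimension from its smallest eigenvalues, via the scaling relations Eqs. (10)-(11) below, could look as follows. The symmetrically normalized form used here has the same spectrum as Eq. (8) when the adjacency matrix is symmetric.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def spectral_dimension(A, n_eigs=200):
    """A: symmetric scipy.sparse adjacency (or weight) matrix."""
    k = np.asarray(A.sum(axis=1)).ravel()                 # (weighted) degrees
    d_half = sp.diags(1.0 / np.sqrt(k))
    L = sp.identity(A.shape[0]) - d_half @ A @ d_half     # same spectrum as Eq. (8)
    # shift-invert around a tiny negative sigma to target the smallest eigenvalues
    lam = eigsh(L, k=n_eigs, sigma=-1e-6, which="LM", return_eigenvectors=False)
    lam = np.sort(lam)[1:]                                # drop the trivial lambda_1 = 0
    rho_c = np.arange(1, len(lam) + 1) / A.shape[0]       # cumulative eigenvalue density
    slope, _ = np.polyfit(np.log(lam), np.log(rho_c), 1)  # rho_c ~ lambda^{d_s/2}
    return 2.0 * slope
```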
The normalized Laplacian has real eigenvalues \(0=\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{N}\), the density of which scales as [37; 40] \[\rho(\lambda)\simeq\lambda^{d_{s}/2-1} \tag{10}\] for \(\lambda\ll 1\), where \(d_{s}\) is the spectral dimension. The cumulative density is then given by \[\rho_{c}(\lambda)=\int_{0}^{\lambda}d\lambda^{\prime}\rho(\lambda^{\prime})\simeq\lambda^{d_{s}/2}. \tag{11}\] Since Eqs. (10) and (11) hold for small \(\lambda\) values, for the fruit-fly connectome and the European high-voltage power grid network that are going to be studied in Sec. III, for which \(N\gg 1\), we will only extract the densities of the first 200 smallest eigenvalues, for ease of eigenvalue computation and without loss of generality. As illustrated in Fig. 1, Euclidean lattices in dimension \(d\) have spectral dimension \(d_{s}=d\). Therefore, in this case the spectral dimension is also equal to the Hausdorff dimension of the lattice, \(d_{s}=d_{H}\). However, in general, networks can have a non-integer spectral dimension \(d_{s}\) not equal to their Hausdorff dimension. Ref. [37] demonstrated that at low spectral dimensions, \(d_{s}<4\), there is a parameter regime that exhibits frustrated synchronization with spatio-temporal fluctuations even in the stationary state. Then, similar to the emergence of rare regions in Griffiths phases [13], one should expect to observe states with rare regions, usually called "chimera states", in such frustrated synchronization as well, as we will demonstrate in what follows with the aid of the local order parameter of Kuramoto models. ### Local order parameters of the first- and second-order Kuramoto models To investigate the heterogeneity further, we measured the local Kuramoto order parameter, defined as the partial sum of phases over the neighbors of node \(i\): \[r_{i}(t)=\frac{1}{N_{i,\text{neigh}}}\left|\sum_{j}A_{ij}e^{i\theta_{j}(t)}\right|. \tag{12}\] This local Kuramoto measure was first suggested by Restrepo _et al._ [41; 42] to quantify the local synchronization of nodes. It allows us to visualize regions of synchronized/unsynchronized chimera-like behavior and will be the main quantity of interest of this paper. ## III Chimera states ### Chimera states in the fruit-fly (FF) and in a human connectome Figure 1: Spectral dimensions of \(d\)-dimensional regular lattices (\(d=1,2,3,4\)) of different lateral sizes \(l\), measured through Eq. (11). The real dimension of a lattice is asymptotically approached as \(l\rightarrow\infty\). First, we examine chimera states in the first-order Kuramoto model on the FF connectome. Connectomes are defined as structural networks of neural connections of the brain [8]. For the fruit fly, we used the hemibrain dataset (v1.0.1) from [43], which has \(N_{FF}=21\,662\) nodes and \(E_{FF}=3\,413\,160\) edges, out of which the largest single connected component contains \(N=21\,615\) nodes and \(E=3\,410\,247\) directed and weighted edges, with weights being the number of connections between a pair of nodes. The number of incoming edges varies between 1 and 2708. The weights are integer numbers, varying between 1 and 4299. The average node degree is \(\langle k\rangle=315.129\) (for the in-degrees it is 157.6), while the average weighted degree is \(\langle w\rangle=628\). The adjacency matrix, visualized in [10], shows a weak hierarchical modular structure; however, it is not random. 
For example, the degree distribution is much wider than that of a random graph and exhibits a fat tail. The analysis in [10] found a weight distribution \(p(W_{ij})\) with a heavy tail, and assuming a power-law (PL) form, a decay exponent \(2.9(2)\) could be fitted for the \(W_{ij}>100\) region. The modularity quotient of a network is defined by [44] \[Q=\frac{1}{N\langle k\rangle}\sum_{ij}\left(A_{ij}-\frac{k_{i}k_{j}}{N\langle k\rangle}\right)\delta(g_{i},g_{j})\,. \tag{13}\] The maximum of this value, which corresponds to the optimal community structure, characterizes how modular a network is; here \(\delta(g_{i},g_{j})\) is 1 when nodes \(i\) and \(j\) are in the same community \(g\), and 0 otherwise. Community detection algorithms based on modularity optimization get the closest to the actual modular properties of the network. The modularity was calculated using community structures detected by the Louvain method [45], from which we obtained \(Q_{FF}\approx 0.631\) [16]. The effective graph (topological) dimension, obtained by the breadth-first search algorithm, is \(d_{g}^{FF}=5.4(1)\). To compute the spectral dimensions, we extracted the cumulative density distributions of the first 200 eigenvalues of the Laplacian matrices of both the unweighted and weighted FF connectomes and plot them on a log-log scale, as shown in Fig. 2. For small enough \(\lambda\) values, the distributions indeed display a scaling regime, permitting estimation of the spectral dimensions by Eq. (11), as listed in the plot legend and in Table 1. Now, even though \(d_{g}^{FF}>4\), since \(d_{s}<4\) one should expect frustrated synchronization in some parameter regime where chimera states may be observed. To that end, we solved Eq. (1) and Eq. (2) numerically with respect to different coupling strengths and \(F\) on the weighted FF connectome and observed the respective global phase order parameter \(R(t\rightarrow\infty)\) in the stationary state. The stationary state is typically reached after a few hundred time steps; see, for example, the first panel of Fig. 3. Practically, we followed the dynamics up to \(t=1000\) to ensure stationarity. In Ref. [10], we had estimated the critical coupling \(K_{c}\simeq 1.6\) from the peak of the variance of \(R(t\rightarrow\infty)\) with respect to \(K\). Using this critical coupling, we further calculated the local order parameter of Eq. (12) for each node, averaged over 20 independent simulation runs. In Fig. 3, the local order parameters at three representative time steps are displayed by encoding the respective values in a color map. Since the simulations were started from a fully asynchronous state, we see that the system gradually evolves into a more synchronous state at larger times. However, even in the globally stationary state, characterized by a constant \(R\) value, the local order parameters show rather inhomogeneous patterns, with some parts of the connectome more synchronized (greener regions) while other parts are less synchronized (redder regions), indicating the emergence of chimera states [41; 42]. Note that the disparity of the synchronization levels between different regions is quite large in this case, with greener regions almost fully synchronized and red regions fully unsynchronized. What is more, as partially shown by the second and the third panels at \(t=748\) and \(t=1885\), the distribution of the local order parameters can still evolve in the globally stationary state. 
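For reference, Eq. (12) is straightforward to evaluate per node; the following sketch (ours, not the authors' code, and assuming an unweighted 0/1 adjacency matrix) produces the values that the color maps encode.

```python
import numpy as np

def local_order_parameter(theta, A):
    """theta: (N,) array of phases; A: (N, N) 0/1 adjacency matrix.
    Returns r_i of Eq. (12) for every node."""
    phase = np.exp(1j * theta)
    neigh_sum = A @ phase                      # sum_j A_ij e^{i theta_j}
    n_neigh = A.sum(axis=1)                    # N_i,neigh
    return np.abs(neigh_sum) / np.maximum(n_neigh, 1)  # guard against isolated nodes
```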
Simulation results seemed to suggest quite random temporal behavior of the local order parameters (not shown here), but more careful studies of the long-time behavior are still needed to examine whether it is periodic with a very long period. These results are thus suggestive of strong spatio-temporal fluctuations in chimera states, as is typical for frustrated synchronization [37]. To provide more evidence of the chimera states, we have calculated the order parameters in the steady state at \(K=1.6\) in the nine largest communities, determined by the Louvain method [16]. However, the community dependence is rather weak in the case of the Kuramoto model. We therefore enhanced the local synchronization by adding periodic forces within the framework of the SK model. The transition point shifts in the range \(F_{c}\in[0.05,0.1]\). Even more evident community dependence could be found in the frequency synchronization points, estimated by the peaks of the variances of the order parameter \(\Omega\). As one can see in Fig. 6, frequency entrainment occurs in the range \(F_{c}^{\prime}\in[0.025,0.1]\) in different communities. \begin{table} \begin{tabular}{l c c} \hline \hline & Fruit-fly conn. & European grid \\ \hline \(d_{g}\) & 5.4(1)[10] & 2.6(1)[18] \\ \(d_{s}\) (unwei.) & 3.375 & 1.717 \\ \(d_{s}\) (wei.) & 2.342 & 1.507 \\ \hline \end{tabular} \end{table} Table 1: The graph dimensions and spectral dimensions of the FF connectome and the European grid network. Both spectral dimensions for unweighted and weighted networks are given. Figure 2: The cumulative eigenvalue densities of the unweighted (blue solid square) and the weighted (red dot) FF network, extracted from the first 200 smallest eigenvalues. This means that for certain forces some communities are locally in the super-critical state, while others are in the sub-critical state, suggesting chimeras. As has already been shown in Refs. [10; 16], the FF graph exhibits a weak modular structure. A much higher level of modularity can be observed in human connectomes, albeit on a coarse-grained scale describing the white matter. The large human connectomes obtained by DTI of MRI [10; 16; 46; 47] have node numbers on the order of a million. We have not been able to calculate the local order parameters and the spectral dimensions for such large systems. In [47] we calculated their graph dimensions, which proved to be above the dimension 3 of the embedding space, but lower than 4, due to the long fiber tracts connecting distant regions. We show here that the synchronization of communities of the KKI-113 graph exhibits much more visible differences than that of the FF, suggesting strong chimeras in the case of the Kuramoto model running on it. This small-world graph contains 799 133 nodes, connected via 48 096 500 undirected and weighted edges, and exhibits a hierarchical modular structure, because it was constructed from cerebral regions of the Desikan-Killiany-Tourville parcellation, which is standard in neuroimaging [48]. The modularity quotient is much higher than that of the FF, \(Q_{KKI-113}\approx 0.915\), and the topological dimension is just \(d_{g}=3.4(1)\) [16]. As one can see in Fig. 7, in certain communities the synchronization is high at \(K=3\), while others are still practically unsynchronized at this coupling, suggesting chimera states. Figure 4: The cumulative eigenvalue densities of the unweighted (red solid square) and the weighted (blue dot) European power-grid network, extracted from the first 200 smallest eigenvalues. 
Figure 3: Evolution of \(R(t)\) towards the steady state following a start from a fully unsynchronized state (upper left panel) and local Kuramoto results at five different time steps, encoded by the color map of \(r_{i}\), for the weighted FF connectome. Results were averaged over 20 samples. In the color maps, red corresponds to low local synchronization, and green to high synchronization. ### Chimera states in the European high-voltage power grid network Unlike neural networks, in which the oscillators are massless, power-grid networks are massive and should be described by the second-order Kuramoto model. In this subsection, we attempt to show whether chimera states can emerge in such systems. Power-grid networks are genuinely hierarchical modular networks if the detailed information on the medium- and low-voltage parts of the grids is also incorporated. Practically, it is almost impossible to infer the entire structure of large power-grid networks, but it is feasible to mimic it by adding medium- and low-voltage parts to the high-voltage (HV) skeleton, according to the empirical hierarchical distribution, as was done in Ref. [11]. We downloaded the European HV power grid from the "SciGRID Dataset" [49], encoding the 2016 status, deduced by processing Google street-map data. We have not supplemented this graph with lower-voltage parts, but it already contains 12 kV, 20 kV, ... links, which belong to the middle-voltage category, according to the definition of the 100 kV threshold for HV lines. Figure 5: Local Kuramoto results in the stationary state encoded by the color maps of \(1-r_{i}\) for (a) the unweighted and (b) the weighted European high-voltage networks. Results were averaged over 20 samples. Red corresponds to low local synchronization, and green to high synchronization. The width of the gray edges in (b) is proportional to the logarithm of the weights. Figure 6: Community dependence of \(\sigma(\Omega)\) at \(K=1.6\) in the FF. Peaks, marking the local entrainment point \(F_{c}^{\prime}\), move from 0.025 to 0.1 for communities 8,6,4,3,2,1 and the whole (top to bottom curves). Results were obtained by averaging over 200 samples. Figure 7: Community dependence of \(R\) for different \(K\)-s showing different phase synchronizations, corresponding to chimera states in the KKI-113 connectome. The lowest curve denotes the synchronization of the whole system, which grows the slowest with increasing \(K\). Inset: Fluctuations of the same data, showing peaks at different \(K\)-s. Results were averaged over 100 samples. This graph contains \(N=13478\) nodes, interconnected via \(E=33844\) links. After symmetrizing it, an average degree \(\langle k\rangle=2.51\) was obtained. In Fig. 1 of Ref. [18] the degree distribution is shown. The tail of the degree distribution for \(k\geq 15\) could be well fitted by a stretched exponential \(8.25\times e^{-0.53(5)k}\) function, which renders this network at the threshold of robust/fragile, \(\gamma=3/2\), since according to the definition in [50], networks with a \(P(k>K)=Ce^{-k/\gamma}\) cumulative degree distribution and \(\gamma<3/2\) are robust, based on a mean-field percolation theory under random node removals. The adjacency matrix, visualized in Fig. 2 of Ref. [18], shows that this is a highly modular graph, characterized by \(Q^{EU}=0.963\). Furthermore, it is a small-world network according to the definition of the small-worldness coefficient [18; 51]. 
By calculating the graph dimension using the breadth-first search algorithm, as shown in the inset of Fig. 3 of Ref. [18], \(d_{g}^{EU}=2.6(1)\) was obtained. Since the coupling between a pair of nodes of a power grid is proportional to the maximal power \(P_{ij}\) transmitted between them and inversely proportional to the imaginary part \(X_{ij}\) of the impedance of the transmission line, weights computed from the normalized values of \(P_{ij}/X_{ij}\) had also been considered to construct the weighted network. By again extracting the cumulative density distributions of the first 200 eigenvalues of both the Laplacian matrices of the unweighted and weighted European power-grid networks, Fig. 4 shows quite clean power laws. As listed in the plot legend and in Table 1, the estimated spectral dimensions are both well below the critical dimension \(d_{c}=4\). Hence, one can again expect to observe chimera states, although now the second-order Kuramoto model (3) is going to be inspected. We again tune the system to the verge of criticality. By solving Eq. (3) to obtain the peak of the variance of \(R(t\rightarrow\infty)\), the critical couplings had been estimated as \(K_{c}=80\) for the unweighted network [18] and \(K_{c}=7000\) for the weighted network. The stationary states are typically reached after a few hundred time steps (see Ref. [18]), but we solved the equations up to \(t=20000\) to ensure the stationarity of the system. The local order parameters, calculated in the stationary states after averaging over 100 samples, are then obtained with respect to these critical couplings. Fig. 5 shows that inhomogeneous patterns, encoded again in a color map, indeed overwhelm the system. Due to the higher levels of synchronization, a sizable proportion of oscillators have local order parameters relatively close to 1 as compared to the FF connectome case; hence the color map shows the quantity \(1-r_{i}\) instead. Since the differences in the local order parameters between greener regions and redder regions are still quite apparent, we see that, as suggested by the low spectral dimension, chimera states can indeed be observed in this case. Note that even though the weighted network is a bit less synchronous globally at \(K=7000\) [\(R(t\rightarrow\infty)\simeq 0.47\)] than the unweighted network at \(K=80\) [\(R(t\rightarrow\infty)\simeq 0.48\)], the weighted network still seems to be more synchronized locally in many parts of the network. This emphasizes the importance of incorporating edge weights to take into account more realistic couplings between the nodes. Comparing the local order parameter patterns in Fig. 3 and Fig. 5, it is also interesting to note that less synchronous regions are typically also less clustered than regions with higher levels of synchronization. This is in some sense reminiscent of the analysis in Ref. [12], in which it had been shown that chimera states can also be characterized by the order parameters of different modules. To provide more evidence of the chimera states, we have also calculated the steady-state \(R\) in the twelve largest communities, determined via the Louvain method with a modularity score close to the maximum, \(Q\approx 0.795\), in the same way as in the case of the FF [52]. As one can see in Fig. 8, synchronization occurs at different couplings in different communities, such that for small \(K\)-s the small communities are fully ordered, while the larger ones are still desynchronized. 
This is related to the size dependence of \(K_{c}\) in the case of a crossover; however, here the communities are not independent. Note that the fully ordered communities have fewer than 100 nodes. ## IV Summary In this paper, we have demonstrated that chimera states can occur in Kuramoto-type models on large networks if the spectral dimension is low, i.e. \(d_{s}<4\), even if the graph dimension is not. This happened in the case of the graph of the FF connectome, which exhibits \(d_{g}=5.4(1)\). This is in agreement with the hypothesis advanced for the first-order Kuramoto model in [7]. But as the modularity is weak for the FF, so are the chimeras. We can show them by a community-level analysis with an applied periodic external field. In contrast, for a large human connectome possessing high modularity, we show strong community dependence of the local synchronization. Figure 8: Community dependence of \(R\) for different \(K\)-s showing different phase synchronizations, corresponding to chimera states. The thick black curve denotes the synchronization of the whole system, which grows the slowest with increasing \(K\). Inset: Fluctuations of the same data, showing different synchronization points. The thick black curve, representing the whole system, has the rightmost peak. Results were obtained by averaging over 100 samples. Power grids can be described by the second-order Kuramoto model, which possesses inertia. We found that the European HV power grid has a graph dimension \(d_{g}=2.6(1)\), but the spectral dimensions seem to be below \(d_{s}=2\). Still, the occurrence of chimera-like patterns can be observed via the order parameters and confirmed by a community-level synchronization study. We demonstrated the level of local synchronization by showing the local Kuramoto order parameter, but similar results have been found by calculating the local frequency spreads. ###### Acknowledgements. We thank Kristof Benedek and Balint Hartmann for providing weight calculations of the European network, Istvan Papp for exploring the communities, Jeffrey Kelling for developing the GPU solver code, and Robert Juhasz for the helpful discussions. This research was funded by ELKH grant SA-44/2021, and the Hungarian National Research, Development, and Innovation Office NKFIH grant K128989. Most of the numerical work was done on KIFU supercomputers of Hungary. ## Data Availability Statement Data are available on request from the corresponding author.
2308.02493
Body Fat Estimation from Surface Meshes using Graph Neural Networks
Body fat volume and distribution can be a strong indication for a person's overall health and the risk for developing diseases like type 2 diabetes and cardiovascular diseases. Frequently used measures for fat estimation are the body mass index (BMI), waist circumference, or the waist-hip-ratio. However, those are rather imprecise measures that do not allow for a discrimination between different types of fat or between fat and muscle tissue. The estimation of visceral (VAT) and abdominal subcutaneous (ASAT) adipose tissue volume has shown to be a more accurate measure for named risk factors. In this work, we show that triangulated body surface meshes can be used to accurately predict VAT and ASAT volumes using graph neural networks. Our methods achieve high performance while reducing training time and required resources compared to state-of-the-art convolutional neural networks in this area. We furthermore envision this method to be applicable to cheaper and easily accessible medical surface scans instead of expensive medical images.
Tamara T. Mueller, Siyu Zhou, Sophie Starck, Friederike Jungmann, Alexander Ziller, Orhun Aksoy, Danylo Movchan, Rickmer Braren, Georgios Kaissis, Daniel Rueckert
2023-07-13T10:21:34Z
http://arxiv.org/abs/2308.02493v3
# Body Fat Estimation from Surface Meshes using Graph Neural Networks ###### Abstract Body fat volume and distribution can be a strong indication for a person's overall health and the risk for developing diseases like type 2 diabetes and cardiovascular diseases. Frequently used measures for fat estimation are the body mass index (BMI), waist circumference, or the waist-hip-ratio. However, those are rather imprecise measures that do not allow for a discrimination between different types of fat or between fat and muscle tissue. The estimation of visceral (VAT) and abdominal subcutaneous (ASAT) adipose tissue volume has shown to be a more accurate measure for named risk factors. In this work, we show that triangulated body surface meshes can be used to accurately predict VAT and ASAT volumes using graph neural networks. Our methods achieve high performance while reducing training time and required resources compared to state-of-the-art convolutional neural networks in this area. We furthermore envision this method to be applicable to cheaper and easily accessible medical surface scans instead of expensive medical images. ## 1 Introduction The estimation of body composition measures refers to the qualification and quantification of different tissue types in the body as well as the estimation of their distribution throughout the body. These measures can serve as individual risk factors and as indicators of health and mortality risk [1, 12]. One component of body composition analysis is the estimation of fatty tissue volume in the body. The strong correlation between body composition and disease risk has led to a routine examination of measures indicating body composition in medical exams. The body mass index (BMI), for example, measures the ratio between a person's weight and height and has been shown to be an indicator for developing cardiovascular diseases, type 2 diabetes, as well as overall mortality [28; 12; 3; 32]. Additionally, the waist circumference and waist-hip-ratio can be used as an indication for body fat distribution [42; 48; 25; 6]. These metrics are easy, fast, and cheap to assess. However, they have strong limitations. They are imprecise, as they neither allow for a more accurate assessment of the distribution of body fat nor differentiate between weight that stems from muscle and weight that stems from fat tissue. Understanding the specific differences between different types of fatty tissue and their impact on health risks is crucial for accurately assessing an individual's risk factors and enabling personalised medical care. Towards this goal, several works have investigated methods to identify variations of fat distribution in the body and the quantification of fatty tissues [54; 29]. Body fat can be divided into different types of fat. Two commonly investigated types are _visceral fat_ (VAT), which surrounds the abdominal organs, and _abdominal subcutaneous fat_ (ASAT), which is located beneath the skin. Studies have shown that especially visceral fat can have a negative impact on a person's health [40; 8; 47]. Therefore, a separate analysis of VAT and ASAT is an important step towards gaining accurate insights into body composition. Several works have investigated a precise estimation of VAT and ASAT volumes from medical images, like magnetic resonance (MR) [29] and computed tomography (CT) images [23], dual-energy X-ray absorptiometry (DXA) assessment [41], or ultrasound imaging [7]. 
Deep learning techniques have shown promising results in analysing these medical images in order to estimate body composition values [29; 23; 53; 43]. In this work, we perform VAT and ASAT volume prediction from full-body triangulated surface meshes using graph neural networks (GNNs). We show that GNNs allow utilising the full 3D data at hand, thereby achieving better results than state-of-the-art convolutional neural networks (CNNs) on 2D silhouettes, while requiring significantly less training time and therefore resources. Both ours and related work, such as [29], use data extracted from MR images. Figure 1: Visualisation of body surface meshes at different decimation rates; the leftmost mesh shows the original mesh, then left to right are visualisations of decimated meshes with ten thousand, one thousand, five hundred and two hundred faces. However, MR imaging is a very expensive technique, which is highly unequally distributed around the globe. Access to MR scanners in lower-income countries is much more limited [18]. Furthermore, the acquisition of MR images is time-consuming and very unlikely to be used for routine exams. Given the light computational weight and fast nature of our method, we envision it to be applied to data acquired from much simpler surface scans in the future and to enable incorporation into routine medical examinations. ## 2 Background and Related Work In the following, we summarise related works on body fat estimation from medical (and non-medical) images, define triangulated meshes and the concept of graph neural networks, and show some of their applications to medical data, with a focus on surface meshes. ### Body Fat Estimation from Medical Imaging Body fat estimation has been part of routine medical assessments for decades through the analysis of simple measurements such as BMI or waist circumference [17]. However, more elaborate ways, such as using proxy variables derived from medical images, like dual-energy X-ray absorptiometry (DXA), CT or MR images, have achieved more accurate results. Multiple studies have successfully assessed patient body composition based upon DXA [22, 15, 41]. Hemke et al. [23] and Nowak et al. [43] show successful utilisation of CT images for body composition assessment. Works like [31] use segmentation algorithms to identify fatty tissue in MR scans, from which body composition values can be derived. Tian et al. [50] estimate body composition measures based on 2D photography, not even requiring medical imaging techniques. Many of these approaches focus on predicting specific types of adipose tissue [36, 39, 29, 31]. One idea that has been followed by several works is the utilisation of silhouettes, binary 2D projections of the outline of the body extracted from images. Xie et al. [54] use silhouettes generated from DXA whole-body scans to estimate shape variations, and Klarqvist et al. [29] use silhouettes derived from MR images for VAT and ASAT volume estimation using CNNs. The latter use two-dimensional coronal and sagittal silhouettes of the body outline and predict VAT and ASAT volume using convolutional neural networks. The silhouettes are extracted from the full-body magnetic resonance (MR) scans of the UK Biobank dataset [49]. In our work, we propose to switch from full medical images or binary silhouettes to surface meshes for fat volume prediction, which allows integrating the full 3D surface into deep learning methods, while using the light-weight and fast method of graph neural networks (GNNs). 
### Triangulated Meshes In this work, we use triangulated surface meshes of the body outline. A mesh structure can be interpreted as a specific 3D representation of a graph. A graph \(G:=(\mathit{V},\mathit{E})\) is defined by a set of nodes \(\mathit{V}\) and a set of edges \(\mathit{E}\), connecting pairs of nodes. The nodes usually contain node features, which can be summarised in a node feature matrix \(\mathbf{X}\). A triangulated mesh \(\mathit{M}\) has the same structure, commonly holding the 3D coordinates of the nodes as node features. All edges form triangular faces that define the surface of the object of interest, in our case body surfaces. A visualisation of such meshes can be found in Figure 1. ### Graph Neural Networks Graph neural networks have opened the field of deep learning to non-Euclidean data structures such as graphs and meshes [11]. Since their introduction by [20] and [46], they have been utilised in various domains, including medical research [2, 14]. Graphs are, for example, frequently used for representations of brain networks [9], research in drug discovery [10], or bioinformatics [55, 56]. One native data structure that benefits from the utilisation of graph neural networks is the surface mesh [11]. GNNs on mesh datasets have also advanced research in the medical domain, such as brain morphology estimation [5], which can be used for Alzheimer's disease classification, or the prediction of soft tissue deformation in image-guided neurosurgery [45]. In general, GNNs follow a so-called message passing scheme, where node features are aggregated among neighbourhoods, following the underlying graph structure [27, 13, 24, 30]. This way, after each iteration, a new embedding for the node features is learned. In this work, we use GraphSAGE [21] convolutions, which were designed for applications on large graphs. The mean aggregator architecture for a node \(v\in\mathcal{V}\) at step \(k\) is defined as follows: \[h_{v}^{k}=\sigma\left(\mathbf{W}\cdot\mathrm{MEAN}(\{h_{v}^{k-1}\}\cup\{h_{u}^{k-1},\forall u\in\mathcal{N}_{v}\})\right). \tag{1}\] \(\mathcal{N}_{v}\) is the neighbourhood of node \(v\), \(\mathbf{W}\) is a learnable weight matrix, and MEAN the mean aggregator, which combines the node features of \(v\) at the previous step and the node features of \(v\)'s neighbours. ## 3 Methods We construct three different model architectures: (a) a graph neural network, (b) a simple convolutional neural network (CNN), and (c) a DenseNet, and compare their performance. All models are trained using the Adam optimiser [26] and Shrinkage loss [38], and all results reported are cross-validated based on a 5-fold data split. We use a Quadro RTX 8000 GPU for our experiments, and all models predict both targets (VAT and ASAT) with the same network, following the approach from [29]. #### GNN Architecture We perform a whole-graph regression task on the input meshes. The model architecture consists of a three-layer GNN with SAGE graph convolutions [21] and batch normalisation layers, followed by a max aggregation and a three-layer multi-layer perceptron (MLP). Hyperparameters such as the learning rate and the number of GNN layers are selected by manual tuning. All GNNs are trained for 150 epochs; a code sketch of this architecture is given below. #### CNN Architecture In order to compare our results to the work by Klarqvist et al. [29], we also train a DenseNet and a simpler CNN on the silhouette data. DenseNet is a CNN which is more densely connected, where each layer takes all previous outputs as an input. 
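The following is the promised sketch of the GNN architecture, a minimal reconstruction in PyTorch Geometric rather than the authors' released code; the hidden width of 64 and the ReLU activations are assumptions not stated in the text.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import SAGEConv, global_max_pool

class FatVolumeGNN(nn.Module):
    def __init__(self, in_dim=3, hidden=64, out_dim=2):  # 3D coords in; VAT + ASAT out
        super().__init__()
        self.convs = nn.ModuleList([
            SAGEConv(in_dim, hidden), SAGEConv(hidden, hidden), SAGEConv(hidden, hidden)
        ])
        self.norms = nn.ModuleList([nn.BatchNorm1d(hidden) for _ in range(3)])
        self.mlp = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x, edge_index, batch):
        # three SAGE convolutions with batch normalisation
        for conv, norm in zip(self.convs, self.norms):
            x = torch.relu(norm(conv(x, edge_index)))
        x = global_max_pool(x, batch)   # max aggregation over each mesh in the batch
        return self.mlp(x)              # joint VAT and ASAT regression head
```

Here `global_max_pool` implements the max aggregation over all nodes of a mesh, so the MLP head sees one feature vector per subject.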
For our DenseNet implementation, we follow the architecture in [29]. We additionally construct a simpler CNN architecture that consists of three 2D convolutions, followed by a three-layer MLP, matching the design of the graph neural networks. Both convolutional networks are trained for 20 epochs on a 2D input image that consists of a sagittal and a coronal view of the binary silhouette masks of the MR images, following the pipeline in [29]. ## 4 Experiments and Results We use a subset of the UK Biobank dataset [49], which is a large-scale medical database. It contains a variety of imaging data, genetics, and life-style information from almost 65 000 subjects and was acquired in the United Kingdom. In this work, we use the neck-to-knee magnetic resonance images of a subset of 25 298 subjects for which the labels are available (12 210 male and 13 088 female). The mean age of this cohort is 62.95 years. The VAT and ASAT distributions of male and female subjects are visualised in Figure 2. We can see that female subjects tend to have a higher ASAT volume, whereas male subjects tend to have more VAT. As labels, we used the reported VAT and ASAT volumes in the UK Biobank (field IDs: 22407 and 22408). ### Data Processing The experiments in this work are performed on triangulated body surface meshes that are extracted from the neck-to-knee MR images of the UK Biobank [44]. Figure 2: Distribution of VAT (left) and ASAT (right) volume of male and female subjects in the cohort. Male subjects tend to have more VAT volume, whereas female subjects tend to have more ASAT volume. These were acquired in stations and merged through stitching [33]. In order to extract the surface meshes, we first perform an algorithmic whole-body segmentation by a succession of morphological operations on the stitched MR scans. We then convert these segmentations into surface meshes using the marching cubes algorithm [37] and the open3d library [57]. In order to investigate how much the surface meshes can be simplified, we decimate them into meshes consisting of different numbers of faces. We use meshes with 10 000, 5 000, 1 000, 500, 200, and 100 faces. The number of nodes is always half the number of faces, following Euler's formula for triangular meshes [16]. Subsequently, the meshes are registered into a common coordinate system using the iterative closest point algorithm [4]. As a reference subject, the most average subject in the dataset was selected based on height, weight, and age. The resulting decimated and registered surface meshes are then used for graph learning. Figure 1 shows an example of a body surface mesh at different decimation rates. ### Results Table 1 summarises the results of the GNNs and CNNs for ASAT and VAT volume prediction. We report the 5-fold cross-validation results on the test set of the best-performing models, selected on the validation loss. We compare the results of our graph neural networks (GNNs) with the results achieved by the DenseNet from [29] and the results of a simpler CNN (which we call _CNN_ in the tables). We furthermore report the training times of all models, measured over the full training process of 150 and 20 epochs for GNNs and CNNs, respectively. All GNNs are trained on the body surface meshes, whereas the CNNs are trained on the silhouettes, following the approach proposed in [29]. 
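The decimation step just described can be sketched with open3d. The paper names the library but not the decimation algorithm, so the use of quadric decimation here is an assumption, as is the placeholder filename `body.ply`.

```python
# A sketch of decimating one body surface mesh to the face counts used above.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("body.ply")  # placeholder path, not from the paper
for n_faces in (10_000, 5_000, 1_000, 500, 200, 100):
    dec = mesh.simplify_quadric_decimation(target_number_of_triangles=n_faces)
    # for closed triangular meshes the vertex count is roughly n_faces / 2
    print(n_faces, len(dec.triangles), len(dec.vertices))
    o3d.io.write_triangle_mesh(f"body_{n_faces}.ply", dec)
```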
We evaluate the GNNs on body surface meshes at different decimation rates of ten thousand, five thousand, one thousand, 500, 200, and 100 faces per mesh (see Figure 1 for a visualisation of some of these decimated meshes). The best test performances are highlighted in bold, as are the shortest training times. Figure 3: R2 score results of VAT (left) and ASAT (right) predictions for all subjects, only males, and only females. We can see that the simpler CNN architecture almost matches the performance of the DenseNet proposed by [29], while requiring less training time. The GNNs outperform the CNN and the DenseNet when the utilised meshes are not heavily decimated. Even highly decimated surface meshes with one hundred faces result in only a minor performance loss, while requiring more than ten times less training time than the DenseNet. We envision the utilisation of surface meshes and graph neural networks to allow for more efficient model training and the utilisation of the full 3D structure of the body, while keeping resource requirements low. Male and female subjects show different distributions of VAT and ASAT volume. While male subjects tend to have more VAT, females tend to have more ASAT. Figure 2 shows the distributions of the fat volumes of the two sex groups. We therefore compare the results of our method for female and male subjects separately. Table 2 summarises the results of all GNNs and CNNs for VAT and ASAT volume prediction split by sex. The best-performing model for each fat type and sex is highlighted in bold. We can see that the predictions of VAT volume tend to be better for male subjects, whereas the predictions of ASAT volume achieve slightly higher scores for female subjects. The GNNs, however, seem to show a slightly smaller gap in performance between the sex groups. We attribute the difference in performance on the different fatty tissue types to the varying distributions of fat volume between the sex groups. \begin{table} \begin{tabular}{l l l l l} \hline **Tissue** & **Model** & **Decim.** & **Test R2** & **Time (min)** \\ \hline VAT & GNN (ours) & 100 & 0.858 \(\pm\) 0.001 & **8.36** \\ & & 200 & 0.872 \(\pm\) 0.001 & 8.63 \\ & & 500 & 0.882 \(\pm\) 0.001 & 9.01 \\ & & 1k & 0.888 \(\pm\) 0.001 & 10.11 \\ & & 5k & **0.893 \(\pm\) 0.002** & 22.36 \\ & & 10k & 0.893 \(\pm\) 0.003 & 37.75 \\ \cline{2-5} & \multicolumn{1}{l}{CNN (ours)} & - & 0.874 \(\pm\) 0.001 & 16.20 \\ \cline{2-5} & \multicolumn{1}{l}{DenseNet} & - & 0.878 \(\pm\) 0.004 & 95.79 \\ \hline ASAT & GNN (ours) & 100 & 0.909 \(\pm\) 0.001 & **8.36** \\ & & 200 & 0.921 \(\pm\) 0.002 & 8.63 \\ & & 500 & 0.931 \(\pm\) 0.001 & 9.01 \\ & & 1k & 0.935 \(\pm\) 0.002 & 10.11 \\ & & 5k & 0.938 \(\pm\) 0.000 & 22.36 \\ & & 10k & **0.941 \(\pm\) 0.002** & 37.75 \\ \cline{2-5} & \multicolumn{1}{l}{CNN (ours)} & - & 0.921 \(\pm\) 0.002 & 16.20 \\ \cline{2-5} & \multicolumn{1}{l}{DenseNet} & - & 0.934 \(\pm\) 0.002 & 95.79 \\ \hline \end{tabular} \end{table} Table 1: Results for **VAT** and **ASAT** volume estimation; we report the R2 scores on the test set with standard deviations based on 5-fold cross-validation, as well as the training times of the full training in minutes. ## 5 Discussion and Conclusion In this work, we introduce a graph neural network-based method that enables adipose tissue volume prediction for visceral (VAT) and abdominal subcutaneous (ASAT) fat from triangulated surface meshes. 
The assessment of fatty tissue has high clinical relevance, since it has been shown to be a strong risk factor for diseases like type 2 diabetes and cardiovascular diseases [28; 32]. Especially a separate estimation of the two different fat tissues, VAT and ASAT, has been shown to be a relevant medical assessment, since VAT is known to have a higher correlation with disease development compared to ASAT [40; 8; 47]. Here, we use graph neural networks and triangulated surface meshes, extracted from full-body MR scans, and show that they achieve accurate VAT and ASAT volume predictions. We investigate how different decimation rates impact model performance and training times. Figure 4 visualises this correlation. The bars in the left figure show the average ASAT volume prediction R2 scores on the test set of the GNNs trained on the differently decimated meshes. The overlaid line plot notes the corresponding training times. We can see that at one thousand faces, we reach an optimal trade-off between training time and performance. Training the GNN on the meshes with one thousand faces only takes about 10 minutes and achieves high results of 0.888 R2 on VAT and 0.935 on ASAT volume prediction. On the right in Figure 4, we visualise the linear relation between the training time and the number of faces in the meshes. Training time also corresponds linearly to energy consumption in kWh. \begin{table} \begin{tabular}{l l l l l} \hline \hline **Fat tissue** & **Model** & **Decimation** & **Female R2** & **Male R2** \\ \hline VAT & GNN (ours) & 100 & 0.782 \(\pm\) 0.004 & 0.824 \(\pm\) 0.003 \\ & & 200 & 0.804 \(\pm\) 0.006 & 0.840 \(\pm\) 0.003 \\ & & 500 & 0.815 \(\pm\) 0.008 & 0.854 \(\pm\) 0.003 \\ & & 1k & 0.827 \(\pm\) 0.004 & 0.861 \(\pm\) 0.001 \\ & & 5k & 0.831 \(\pm\) 0.006 & **0.868 \(\pm\) 0.002** \\ & & 10k & **0.837 \(\pm\) 0.002** & 0.867 \(\pm\) 0.004 \\ \cline{2-5} & CNN (ours) & - & 0.804 \(\pm\) 0.003 & 0.845 \(\pm\) 0.002 \\ \cline{2-5} & DenseNet & - & 0.811 \(\pm\) 0.006 & 0.849 \(\pm\) 0.006 \\ \hline ASAT & GNN (ours) & 100 & 0.923 \(\pm\) 0.003 & 0.852 \(\pm\) 0.004 \\ & & 200 & 0.934 \(\pm\) 0.001 & 0.870 \(\pm\) 0.006 \\ & & 500 & 0.940 \(\pm\) 0.002 & 0.890 \(\pm\) 0.002 \\ & & 1k & 0.945 \(\pm\) 0.001 & 0.895 \(\pm\) 0.004 \\ & & 5k & 0.945 \(\pm\) 0.000 & 0.903 \(\pm\) 0.002 \\ & & 10k & **0.948 \(\pm\) 0.001** & **0.906 \(\pm\) 0.005** \\ \cline{2-5} & CNN (ours) & - & 0.934 \(\pm\) 0.002 & 0.870 \(\pm\) 0.002 \\ \cline{2-5} & DenseNet & - & 0.944 \(\pm\) 0.001 & 0.891 \(\pm\) 0.003 \\ \hline \hline \end{tabular} \end{table} Table 2: Results of **VAT** and **ASAT** volume prediction split by subject sex; all reported values are R2 scores on the test set, cross-validated across 5 folds. We attribute the comparably high performance of the strongly decimated meshes to the fact that the outermost coordinates/nodes still remain in the meshes, which carry a lot of information about the outline of a body. The light-weight nature of GNNs allows for the usage of the full 3D data, while significantly reducing resource requirements and run time compared to 3D image-based methods. This shows great promise in the effort of bridging the gap between cheap, fast, but imprecise measures (such as BMI and waist circumference) and time-consuming, costly, but accurate methods such as medical imaging (CT, MR, or DXA). 
## 6 Limitations and Future Work We see high potential in the utilisation of surface meshes and graph neural networks, given that the full 3D data can be utilised, compared to only using binary silhouette projections as in [29]. The low training times as well as the high scores of the GNNs show the successful application to fat volume prediction. We note that we compare the run time of the training loops only. This does not include any pre-processing that is required for both the silhouette-based and surface mesh-based approaches. The GNN architecture is based on SAGE graph convolutions [21], because they achieved the best results in our experiments, compared to graph attention networks [51] and graph convolutional networks [27]. A potential improvement of our method would be the utilisation of other mesh-specific convolutions such as adaptive graph convolution pooling [19] or FeaStNet [52]. Another interesting direction to explore is the utilisation of deeper GNNs. Li et al. [34], for example, introduce a method that enables the utilisation of deeper GNNs without over-smoothing, a commonly known problem with GNNs. Over-smoothing refers to the issue that deep GNNs do not achieve high performance because all node embeddings in the graph converge to the same value [35]. Figure 4: Relationship between training time and decimation rate of the meshes; the left plot shows the ASAT R2 scores (bars) and the corresponding training time, the right plot shows the linear relation between the training time or the energy consumption in kWh and the number of faces of the meshes. Our experiments are performed on surface meshes that were extracted from MR images. However, we envision this method to work equally well on designated surface scans, without requiring expensive and time-consuming MR scans. We intend to investigate this in future work and apply our method to surface scans, which are, for example, acquired for dermatological examinations. This would eliminate the need for expensive MR scans and could lead to an embedding of this technique into routine medical examination. #### Acknowledgements TM and SS were supported by the ERC (Deep4MI - 884622). This work has been conducted under the UK Biobank application 87802. SS has furthermore been supported by BMBF and the NextGenerationEU of the European Union.
2310.00541
Robust Nonparametric Hypothesis Testing to Understand Variability in Training Neural Networks
Training a deep neural network (DNN) often involves stochastic optimization, which means each run will produce a different model. Several works suggest this variability is negligible when models have the same performance, which in the case of classification is test accuracy. However, models with similar test accuracy may not be computing the same function. We propose a new measure of closeness between classification models based on the output of the network before thresholding. Our measure is based on a robust hypothesis-testing framework and can be adapted to other quantities derived from trained models.
Sinjini Banerjee, Reilly Cannon, Tim Marrinan, Tony Chiang, Anand D. Sarwate
2023-10-01T01:44:35Z
http://arxiv.org/abs/2310.00541v1
# Robust Nonparametric Hypothesis Testing to Understand Variability in Training Neural Networks ###### Abstract Training a deep neural network (DNN) often involves stochastic optimization, which means each run will produce a different model. Several works suggest this variability is negligible when models have the same performance, which in the case of classification is test accuracy. However, models with similar test accuracy may not be computing the same function. We propose a new measure of closeness between classification models based on the output of the network before thresholding. Our measure is based on a robust hypothesis-testing framework and can be adapted to other quantities derived from trained models. Sinjini Banerjee\({}^{\star}\) Reilly Cannon\({}^{\dagger}\) Tim Marrinan\({}^{\dagger}\) Tony Chiang\({}^{\dagger}\) Anand D. Sarwate\({}^{\star}\)\({}^{\star}\)Rutgers University \({}^{\dagger}\)Pacific Northwest National Lab + Footnote †: The work of S.B. and A.D.S. was supported in part by a Pacific Northwest National Laboratory Program (PNNL) under contract DR00022921. R.C., T.M., and T.C. were partially supported by the Mathematics for Artificial Reasoning in Science (MARS) initiative via the Laboratory Directed Research and Development (LDRD) at PNNL. T.C. and A.D.S. were also partially supported by the Statistical Inference Generates kNowledge for Artificial Learners (SIGNAL) program at PNNL. DNN variability, Nonparametric hypothesis testing, Robust Kolmogorov-Smirnov test ## 1 Introduction Deep learning models have been remarkably successful in achieving state-of-the-art performance on complex tasks in the fields of healthcare, education, cyber-security, and other important domains. Training these models takes significant time, energy, and financial resources. Because training is done with stochastic algorithms for nonconvex optimization, models produced by different training runs in general converge to different solutions. These solutions can be very different, even if they have a similar objective value and test loss. Two models may differ in their predictions on the same test point: this phenomenon is known as churn [1] and is a cause of model irreproducibility. Deep learning models are often continuously retrained as new data arrives. This necessitates algorithmic and architectural changes in state-of-the-art models to improve their performance on new data. The run-to-run variability in training models makes it difficult to conclude whether a certain initialization or hyperparameter tuning made a meaningful difference in model performance or whether it just "got lucky" due to the (unavoidable) presence of randomness in the optimization. Without this knowledge, comparing training configurations to assess if one is better than another becomes a difficult task. Reproducibility in training is an active area of recent work. Gundersen et al. [2] identified model initialization, random batch shuffling during mini-batch stochastic gradient descent (SGD), data sampling, and parallel execution over GPUs as some of the major sources of randomness present in the training procedure. Fort et al. [3], and Bouthillier et al. [4] showed random data ordering has a smaller effect than random initialization on model performance. Somepalli et al. [5] used decision boundaries to characterize the reproducibility of models. Summers and Dinneen [6], and Jordan [7] argue that variance matters only during initial conditions of the training procedure.
Most work assesses the impact randomness has on the _test accuracy_ of models or the _churn_ between models, both of which focus on the _decisions_ made by predictive (classification) models. Closeness of decisions does not imply closeness of the trained models, and high test accuracy does not imply the stability of the learned features or their meaningful contribution to class differentiation [8]. In this paper we try to quantify the variability caused by training in terms of the _network outputs used to make the decision_. Figure 1 illustrates the difference: the solid and dashed lines represent two decision boundaries between red (circle) and blue (cross). Test accuracy measures incorrect decisions and the churn is given by the region between the two curves. We are interested in the shading, which shows the network output that is thresholded to make decisions. If we think of the training algorithm as generating a random sample from a function space, we can use other tools to understand model variability. As a first step, instead of looking at whether models make the same number of correct decisions, we can examine the distribution of "confidence" values (the _logit gap_) from functions learned by different runs of a fixed deep neural network architecture. In particular, we use measures derived from nonparametric hypothesis testing [9]. We can frame model similarity as asking whether two models generate similar distributions of logit gaps or, with an appropriately defined null, use a goodness-of-fit test to assess whether a particular model is close to the null.

Figure 1: Illustration of two models that have the same test accuracy but different decision boundaries.

Nonparametric goodness-of-fit tests are often sensitive in the large sample regime. To remedy this we use concepts derived from robust statistics [10]. ## 2 Problem Setup We focus on a binary classification problem1 in which data is generated from an (unknown) probability distribution \(\pi\) on the space of feature-label pairs \(\mathcal{X}\times\mathcal{Y}=\mathbb{R}^{d}\times\{0,1\}\). We interpret a predictive model (e.g. a neural network) as a function \(m\colon\mathcal{X}\to\mathbb{R}\), where we interpret \(m(x)\) as a log posterior probability ratio \(m(x)=\log\frac{\mathbb{P}(y=1\mid x)}{\mathbb{P}(y=0\mid x)}\). The _prediction_ produced by the model is \(\hat{y}(x)=\mathbb{1}(m(x)\geq 0)\). We can also think of the model as computing a pair of outputs \(m^{+}(x)\) and \(m^{-}(x)\) where the posteriors are given by the softmax function: \(\mathbb{P}(y=1\mid x)=\frac{\exp(m^{+}(x))}{\exp(m^{+}(x))+\exp(m^{-}(x))}\). This makes \(m(x)=m^{+}(x)-m^{-}(x)\), so we refer to \(m^{+}(x)\) and \(m^{-}(x)\) as _logits_ and \(m(x)\) as the _logit gap_. Footnote 1: While we focus here on binary classification, our framework can be extended to more general prediction problems. A neural network with a given architecture is parameterized by a space of parameters (e.g. the weights) \(\Theta\). Thus for every parameter setting \(\theta\in\Theta\) the network computes a function \(m(x;\theta)\) and the neural network defines a family of functions \(\mathcal{M}=\{m(\cdot\,;\theta)\colon\mathcal{X}\to\mathbb{R}\ \colon\ \theta\in\Theta\}\). Let \([n]=\{1,2,\ldots,n\}\).
A training algorithm takes a _training set_ \(\mathcal{D}_{\mathrm{train}}=\{(x_{i},y_{i}):i\in[N]\}\stackrel{{\mathrm{i.i.d.}}}{{\sim}}\pi\) and "learns" a parameter setting \(\theta\) by approximately minimizing an empirical risk \(\hat{R}(\theta;\mathcal{D}_{\mathrm{train}})\) computed over the training data. The output of the training algorithm is a model \(m(x;\theta)\) with predictions \(\hat{y}(x;\theta)=\mathbb{1}(m(x;\theta)\geq 0)\). The optimization algorithms used in NN training are approximate in two ways. First, the risk minimization problem is in general nonconvex, so they will typically converge to a local minimum. Second, they are usually stochastic, as in SGD, which means the estimated parameters \(\theta\) are themselves random variables. We can think of an NN training algorithm as sampling from a distribution on \(\Theta\) and therefore sampling from the space of functions \(\mathcal{M}\). Since two runs of the training algorithm can produce different functions, it is natural to ask how different these functions are. We can try to answer this using a _test set_ \(\mathcal{D}_{\mathrm{test}}=\{(x_{j},y_{j})\colon j\in[N^{\prime}]\}\) which we assume is also sampled i.i.d. from \(\pi\). The _test accuracy_ of a model is \[A(\theta)=\frac{1}{N^{\prime}}\sum_{j=1}^{N^{\prime}}\mathbb{1}(\hat{y}(x_{j};\theta)=y_{j}). \tag{1}\] Two models \(m(\cdot;\theta_{1})\) and \(m(\cdot;\theta_{2})\) which have similar test accuracy make approximately the same number of mistakes. The _churn_ is defined by \[C(\theta_{1},\theta_{2})=\frac{1}{N^{\prime}}\sum_{j=1}^{N^{\prime}}\mathbb{1}\left(\hat{y}(x_{j};\theta_{1})\neq\hat{y}(x_{j};\theta_{2})\right), \tag{2}\] which is the fraction of test points where the models disagree. Two models with low churn make almost the same decisions (regardless of whether they are correct or not). Both test accuracy and churn focus on the _predictions_ made by models and do not use information about the logit gap function \(m(x)\) beyond its sign. Looking at \(m(x)\) directly gives us other approaches to assess whether models are similar or not: two models may have similar accuracy and low churn but can have very different logit gaps. Given the test set, our central object of inquiry is the empirical distribution of logit gaps, which we can write as a cumulative distribution function (CDF): \[F(x;\theta)=\frac{1}{N^{\prime}}\sum_{j=1}^{N^{\prime}}\mathbb{1}(m(x_{j};\theta)\leq x). \tag{3}\] With some abuse of notation, we consider \(\mathcal{P}\) to be the set of probability measures on \(\mathbb{R}\) and \(F\) as an element of \(\mathcal{P}\). We consider the following statistical setup. The training algorithm takes the training set \(\mathcal{D}_{\mathrm{train}}\) and uses randomization in several different ways to produce a model. Each run of the training algorithm uses independent randomness: training is therefore sampling parameters \(\theta_{1},\theta_{2},\ldots,\theta_{M}\) i.i.d. from an (unknown) distribution on \(\Theta\) induced by the training algorithm. These parameters correspond to \(M\) i.i.d. samples \(\{m_{k}(x)=m(x;\theta_{k}):k\in[M]\}\) taking values in \(\mathcal{M}\). When applied to the test set, these produce \(M\) CDFs \(\{F_{k}(x)=F(x;\theta_{k}):k\in[M]\}\). We can control which sources of randomness are used in the training by altering the training procedure. For example, we can use deterministic initialization or fixed batch ordering.
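As a concrete reading of Eqs. (1)-(3), the short sketch below computes test accuracy, churn, and the empirical logit-gap CDF from arrays of logit gaps; the array names are illustrative.

```python
import numpy as np

def test_accuracy(gaps, y):
    # Eq. (1): fraction of test points where sign(m(x)) matches the label.
    return np.mean((gaps >= 0).astype(int) == y)

def churn(gaps_1, gaps_2):
    # Eq. (2): fraction of test points where two models' decisions disagree.
    return np.mean((gaps_1 >= 0) != (gaps_2 >= 0))

def logit_gap_cdf(gaps):
    # Eq. (3): empirical CDF of the logit gaps, returned as a callable.
    sorted_gaps = np.sort(gaps)
    return lambda x: np.searchsorted(sorted_gaps, x, side="right") / len(sorted_gaps)
```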
Under these scenarios, we can generate \(M\) models and ask if the models are more different from each other when we turn on or off the sources of randomness. To measure how different the models are, we use a framework from nonparametric hypothesis testing. ## 3 Goodness-of-fit Testing for Logit Gaps We can define an expected logit gap function \(\bar{m}(x)\) by integrating over the distribution on \(\mathcal{M}\) induced by the training algorithm. Given a test set \(\mathcal{D}_{\mathrm{test}}\), this produces an expected CDF which we will call \(F_{0}\). If we knew what \(F_{0}\) was, we could assess whether or not a given model \(m(x)\) is a representative sample from the training algorithm by testing if the model's CDF \(F(x)\) is close to the expected \(F_{0}(x)\). This corresponds to the following one-sided hypothesis test: \[\mathcal{H}_{0}\colon\{m(x_{j})\colon j\in[N^{\prime}]\}\sim F_{0} \tag{4}\] \[\mathcal{H}_{1}\colon\{m(x_{j})\colon j\in[N^{\prime}]\}\not\sim F_{0} \tag{5}\] This is a classical nonparametric goodness-of-fit testing problem which can be solved using the Kolmogorov-Smirnov (KS) test [11, 12, 13]: \(d_{k}(F_{0},F)=\lVert F_{0}-F\rVert_{\infty}\underset{\mathcal{H}_{0}}{\overset{\mathcal{H}_{1}}{\gtrless}}\tau\), where \(F(x)\) is the empirical CDF from \(m(x)\) and we can set the threshold \(\tau\) to achieve the desired error tradeoff. In large sample settings, the KS test often rejects the null because even small changes in the sample can result in a significant shift in the \(L_{\infty}\) norm. Note that we do not expect our empirical samples to look exactly like they were drawn from \(F_{0}\). Ideally, we want a test which allows for some outliers. Furthermore, we do not know what the null hypothesis \(F_{0}\) is, so even computing the test statistic is not possible. To address these issues, we use ideas from _robust statistics_ [10]. Given a distribution \(P\in\mathcal{P}\) and \(\alpha\in[0,1]\), we can define the set of \(\alpha\)-contaminated distributions as \(\mathcal{R}_{\alpha}(P)=\{(1-\alpha)P+\alpha Q\colon Q\in\mathcal{P}\}\). Alvarez-Esteban et al. [14] interpret \(\mathcal{R}_{\alpha}(P)\) as the set of so-called \(\alpha\)-trimmings of \(P\): \[\mathcal{R}_{\alpha}(P)=\bigg\{Q\in\mathcal{P}\colon Q\ll P,\ \frac{dQ}{dP}\leq\frac{1}{1-\alpha}\ \ P\text{-a.s.}\bigg\}. \tag{6}\] The advantage of this definition, as shown by del Barrio et al. [15], is that we can compute, for a given \(F\), the closest \(L_{\infty}\) approximation to \(F_{0}\) in the set of \(\alpha\)-trimmings of \(F\): \[d_{k}(F_{0},\mathcal{R}_{\alpha}(F))=\min_{\tilde{F}\in\mathcal{R}_{\alpha}(F)}\lVert F_{0}-\tilde{F}\rVert_{\infty}. \tag{7}\] The core idea behind trimming is to ask if trimming a small fraction of samples (from the test set) would allow a KS test to accept the null hypothesis. The set \(\mathcal{R}_{\alpha}(F)\) of \(\alpha\)-trimmings of \(F\) can be characterized in terms of a "trimming function" \(h\), and we can efficiently find the optimizing \(\tilde{F}\in\mathcal{R}_{\alpha}(F)\) in (7) by optimizing over these trimming functions. This technique led to the recent work of del Barrio et al. [15] that proposes a _robust KS test_ based on trimming. ## 4 Using Trimming to Estimate Variability Our main contribution is a new way to measure the dissimilarity of trained models. To do this we first develop a two-sample version of the robust KS test discussed in the previous section.
**Using a deep ensemble as a base model.** While we do not know the expected CDF \(F_{0}\), note that for a given test point \((x_{j},y_{j})\), we can use the empirical mean \(\frac{1}{M}\sum_{k=1}^{M}m_{k}(x)\) as an estimate of the expected logit gap function \(\bar{m}(x)=\mathbb{E}[m(x)]\) at test point \(x\). To evaluate whether a particular model with CDF \(F_{\ell}\) is close to \(F_{0}\) we can instead measure its closeness to the _leave-one-out ensemble_ (LOOE): \[m_{-\ell}(x) =\frac{1}{M-1}\sum_{k\neq\ell}m_{k}(x) \tag{8}\] \[F_{-\ell}(x) =\frac{1}{N^{\prime}}\sum_{j=1}^{N^{\prime}}\mathbb{1}(m_{-\ell}(x_{j})\leq x). \tag{9}\] The LOOE corresponds to a deep ensemble predictor [16, 17, 18], which has been used to reduce variability in DNN models. Taking the average of model "confidences" across independent training runs makes them closer to their expected values, lowering variability. Figure 2 shows how the logit gap samples obtained from averaging over all candidate models compare with candidate models in the pool. The ensemble model produces fewer samples with small logit gaps (samples with higher uncertainty) and fewer with large logit gaps (overconfident samples). **Proposed algorithm.** In our robust two-sample KS test, we are comparing two empirical CDFs, a candidate \(F_{\ell}\) and the LOOE \(F_{-\ell}\), and asking if they were generated by a common underlying distribution: \[\mathcal{H}_{0}\colon F_{\ell},F_{-\ell}\text{ are from the same distribution} \tag{10}\] \[\mathcal{H}_{1}\colon F_{\ell},F_{-\ell}\text{ are not from the same distribution} \tag{11}\] We use bootstrap sampling to resample \(N^{\prime}\) independent test points to compute \(F_{\ell}\) and \(F_{-\ell}\). For a fixed \(\alpha\), we calculate the \(\alpha\)-trimming of \(F_{\ell}\) that minimizes the \(L_{\infty}\) distance to \(F_{-\ell}\): \[d_{k}(F_{-\ell},\mathcal{R}_{\alpha}(F_{\ell}))=\min_{F\in\mathcal{R}_{\alpha}(F_{\ell})}\lVert F_{-\ell}-F\rVert_{\infty}. \tag{12}\] Our hypothesis test is then \[d_{k}(F_{-\ell},\mathcal{R}_{\alpha}(F_{\ell}))\underset{\mathcal{H}_{0}}{\overset{\mathcal{H}_{1}}{\gtrless}}\tau. \tag{13}\] To set the threshold \(\tau\), we observe that if \(\tilde{F}\) is the minimizer in (12), the two-sample version of the Dvoretzky-Kiefer-Wolfowitz inequality due to Wei and Dudley [19] implies that \[\mathbb{P}\bigg(\sup_{x}\Bigl|\tilde{F}(x)-F_{-\ell}(x)\Bigr|>\tau\bigg)\leq Ce^{-2N^{\prime}\tau^{2}}. \tag{14}\] For \(N^{\prime}>458\) the leading constant \(C\) can be replaced with \(2\). In that case, to set the probability of falsely rejecting \(\mathcal{H}_{0}\) to \(\delta\), we can set the right hand side equal to \(\delta\) and solve for \(\tau=\sqrt{\frac{1}{2N^{\prime}}\ln\frac{2}{\delta}}\). We use this test in Algorithm 1 to compute the smallest \(\alpha\) for which the test fails to reject \(\mathcal{H}_{0}\). The output \(\hat{\alpha}\) is our proposed measure of discrepancy between model \(m_{\ell}\) and the LOOE model.
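Ahead of the formal listing in Algorithm 1 below, the following NumPy sketch outlines the estimate. The `trimmed_ks_distance` here measures the distance from \(F_{-\ell}\) to the pointwise envelope of \(\alpha\)-trimmed CDFs, which we use as a simplified surrogate for the exact \(L_{\infty}\) optimization in Eq. (12) (del Barrio et al. [15] solve that optimization over trimming functions); function names and defaults are illustrative.

```python
import numpy as np

def ecdf_on_grid(samples, grid):
    # Empirical CDF of `samples` evaluated at each point of `grid`.
    return np.searchsorted(np.sort(samples), grid, side="right") / len(samples)

def trimmed_ks_distance(gaps_model, gaps_ref, alpha):
    # Distance from the reference CDF to the pointwise envelope of
    # alpha-trimmings of the model CDF: every trimming F~ of F satisfies
    # max(0, (F - alpha) / (1 - alpha)) <= F~ <= min(1, F / (1 - alpha)).
    grid = np.sort(np.concatenate([gaps_model, gaps_ref]))
    F = ecdf_on_grid(gaps_model, grid)
    G = ecdf_on_grid(gaps_ref, grid)
    lower = np.maximum(0.0, (F - alpha) / (1.0 - alpha))
    upper = np.minimum(1.0, F / (1.0 - alpha))
    return float(np.maximum(lower - G, G - upper).clip(min=0.0).max())

def estimate_alpha(gaps_model, gaps_looe, alphas, delta=0.05, B=100, seed=0):
    # Algorithm 1: for each bootstrap resample, record the smallest trimming
    # level at which the robust KS test fails to reject H0, then average.
    rng = np.random.default_rng(seed)
    n = len(gaps_looe)
    tau = np.sqrt(np.log(2.0 / delta) / (2.0 * n))  # threshold from Eq. (14)
    alpha_hats = []
    for _ in range(B):
        f = rng.choice(gaps_model, size=n, replace=True)
        g = rng.choice(gaps_looe, size=n, replace=True)
        accepted = next((a for a in alphas
                         if trimmed_ks_distance(f, g, a) <= tau), alphas[-1])
        alpha_hats.append(accepted)
    return float(np.mean(alpha_hats))
```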
```
Data: \(\mathcal{D}_{\mathrm{test}}\) with \(N^{\prime}>458\) samples, trained models \(\{m_{k}\colon k\in[M]\}\), model index \(\ell\in[M]\), threshold \(\tau\), resampling number \(B\), trimming levels \((\alpha_{1},\ldots,\alpha_{T})\)
Result: trimming level estimate \(\hat{\alpha}\)
for \(b=1\) to \(B\) do
    Compute \(F_{\ell}\) using \(N^{\prime}\) points resampled from \(\mathcal{D}_{\mathrm{test}}\);
    Compute \(F_{-\ell}\) in (9) using \(N^{\prime}\) points resampled from \(\mathcal{D}_{\mathrm{test}}\);
    \(\mathrm{Reject}\gets 1\), \(\hat{\alpha}_{b}\gets 0\), \(t\gets 1\);
    while \(\mathrm{Reject}=1\) and \(t\leq T\) do
        \(\alpha\leftarrow\alpha_{t}\);
        \(\mathcal{H}\leftarrow\) output of (13);
        if \(\mathcal{H}=\mathcal{H}_{0}\) then \(\mathrm{Reject}\gets 0\), \(\hat{\alpha}_{b}\leftarrow\alpha\) else \(t\gets t+1\);
\(\hat{\alpha}\leftarrow\frac{1}{B}\sum_{b=1}^{B}\hat{\alpha}_{b}\);
```
**Algorithm 1** Estimate the \(\hat{\alpha}\) measure

## 5 Experiments To illustrate our approach, we used a small convolutional neural network on two classes from the CIFAR-10 dataset [20]: \(N=12000\) and \(N^{\prime}=4000\). Our network had two convolutional layers (having \(32\) and \(16\) features, respectively, with a \(3\times 3\) kernel size) followed by one hidden layer of 64 units and a final layer of 2 units that output the raw logits of the network. The small size allows us to train many more models to explore model training in different scenarios: \(\mathsf{S}_{\mathrm{init}}\) with only random initialization, \(\mathsf{S}_{\mathrm{batch}}\) with only random batch selection in SGD, \(\mathsf{S}_{\mathrm{train}}\) with only randomly resampled training data, and \(\mathsf{S}_{\mathrm{all}}\) with all sources of randomness. For each scenario we trained \(M=100\) models for \(50\) epochs. **Similar accuracy/churn do not imply small \(\hat{\alpha}\).** To show that our measure \(\hat{\alpha}\) provides a different understanding of model closeness, Figure 2 shows the logit gap distribution for two models with respect to their LOOE. The churn is computed w.r.t. the LOOE. In general, if models need a small trimming level to be accepted under the null hypothesis they are likely also to have a low churn and high test accuracy. However, the converse does not always hold. As seen in Figure 2, Model 2 produces more uncertain samples with a higher probability when compared to another candidate model with a similar test accuracy and churn. This could make Model 2 more prone to adversarial attacks. Based on this we observe that a high test accuracy or a low churn is not enough to conclude that one model is better than all other candidate models belonging to the same pool. Thus, the recommendation is to choose models that not only have achieved good test accuracy and low churn but are also better representatives of the LOOE. Relying on \(\hat{\alpha}\) avoids models that somehow "got lucky" and lets us choose better models that have performed meaningfully well. **Comparing the evolution of \(\hat{\alpha}\) and test accuracy.** To understand the difference between \(\hat{\alpha}\), test accuracy, and churn, we considered the baseline scenario \(\mathsf{S}_{\mathrm{all}}\). We trained \(M=100\) models for \(50\) epochs and examined the evolution of each model over those 50 epochs compared to the LOOE at 50 epochs. We have only included test accuracy plots due to space restrictions, but the churn of these models w.r.t. their LOOE follows a similar behavior.
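For reference, the small classifier described at the start of this section could be written as follows; the pooling and flattening choices are our assumptions, since the text only fixes the convolution widths, the kernel size, and the 64-unit hidden layer.

```python
import torch.nn as nn

class SmallCNN(nn.Module):
    # Two conv layers (32 and 16 features, 3x3 kernels), one 64-unit hidden
    # layer, and a 2-unit output producing the raw logits m^+(x) and m^-(x).
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=3), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),  # infers the flattened input size
            nn.Linear(64, 2),
        )

    def forward(self, x):                  # x: (batch, 3, 32, 32) CIFAR images
        logits = self.classifier(self.features(x))
        return logits                      # logit gap: logits[:, 1] - logits[:, 0]
```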
Figure 3 shows four snapshots of the relationship between the test accuracy of a model and \(\hat{\alpha}\). At early epochs, although test accuracy improves, \(\hat{\alpha}\) remains large for most models. After training the models long enough, we stop seeing a large improvement in test accuracy, but candidate models keep moving closer in distribution toward their LOOE model at higher epochs. This suggests that candidate models evolve to become better representatives of the training algorithm long after the variability in test accuracy has been reduced, and \(\hat{\alpha}\) is indicative of this evolution. **Using \(\hat{\alpha}\) to examine the impact of randomization.** Our focus is on two epochs, one very early in the training (epoch 2) and another after training for a long period of time (epoch 50). At epoch 2, models differing according to \(\mathsf{S}_{\mathrm{init}}\) had a higher percentage of models that do not reject the null hypothesis, with a low \(\hat{\alpha}\), than \(\mathsf{S}_{\mathrm{train}}\) or \(\mathsf{S}_{\mathrm{batch}}\). This would suggest that \(\mathsf{S}_{\mathrm{init}}\) contributes the least among individual sources of variance to \(\mathsf{S}_{\mathrm{all}}\), by a significantly large margin. However, both \(\mathsf{S}_{\mathrm{init}}\) and \(\mathsf{S}_{\mathrm{batch}}\) seem to contribute less than \(\mathsf{S}_{\mathrm{train}}\) to test accuracy variability at epoch 2. This sensitivity of \(\hat{\alpha}\) to model similarity/dissimilarity is lost at higher epochs. At epoch 50, we see a reduction in variability both in test accuracy and \(\hat{\alpha}\). This points to the finding of [7] that individual sources stop contributing independently to the total variance observed from changing all sources of randomness if the models are trained long enough. ## 6 Conclusion and Future Work In this work, we highlighted a different approach to analyzing variability in deep neural networks. Our proposed framework is based on a robust two-sample hypothesis testing problem that uses impartial trimming of the empirical CDF of samples obtained from the logit gap function. The purpose of this test is to assess the similarity/dissimilarity between candidate models in a pool and their consensus (leave-one-out ensemble) model by down-weighting the part of the data that has a greater influence on the dissimilarity. We also provide some evidence that our new measure, the trimming level \(\hat{\alpha}\), could be a more informative metric to assess model performance than the commonly used test accuracy. While in this paper we describe the methodology and an example on a small model, future extensions of this work include applications to very large deep net models and extensions to multi-class classification. Another direction is to explore two-sample hypothesis testing based on other distance metrics like the Wasserstein metric. In this work, we chose to focus on samples from the logit gap function as our probe to understand deep net variability. We can look at other functions of the trained models such as the eigen-distribution of the Jacobian or the Neural Tangent Kernel of functions learned by these models. The eigen-distribution of these matrices can provide us information on how well individual candidate models can generalize to test data.

Figure 4: Bar plots showing variation in test accuracy and alpha, for Epoch 2 (top row) and Epoch 50 (bottom row).

Figure 3: Scatterplot of \(\hat{\alpha}\) vs. test accuracy for different epochs under \(\mathsf{S}_{\mathrm{all}}\).
Each model is compared to the LOOE model at epoch 50.

Figure 2: (Left) Histogram of logit gaps from the ensemble model, with the upper and lower envelopes representing the maximum and minimum probability attained in each bin among individual candidate models. (Right) Histograms of logit gap samples from two candidate models and their LOOE models at a fixed epoch.
2303.01767
Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems, but they are still trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features. In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs for improving the stability of the training process. We heuristically analyze how ISGD overcomes stiffness in the gradient flow dynamics of PINNs, especially for problems with multi-scale solutions. We theoretically prove that for two-layer fully connected neural networks with a large number of hidden nodes, randomly initialized ISGD converges to a globally optimal solution for the quadratic loss function. Empirical results demonstrate that ISGD works well in practice and compares favorably to other gradient-based optimization methods such as SGD and Adam, while also effectively addressing the numerical stiffness in training dynamics via gradient descent.
Ye Li, Song-Can Chen, Sheng-Jun Huang
2023-03-03T08:17:47Z
http://arxiv.org/abs/2303.01767v1
# Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks ###### Abstract Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems, but they are still trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features. In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs for improving the stability of the training process. We heuristically analyze how ISGD overcomes stiffness in the gradient flow dynamics of PINNs, especially for problems with multi-scale solutions. We theoretically prove that for two-layer fully connected neural networks with a large number of hidden nodes, randomly initialized ISGD converges to a globally optimal solution for the quadratic loss function. Empirical results demonstrate that ISGD works well in practice and compares favorably to other gradient-based optimization methods such as SGD and Adam, while also effectively addressing the numerical stiffness in training dynamics via gradient descent. ## 1 Introduction Gradient descent (GD) and practical stochastic gradient descent with mini-batch gradients (SGD) are widely used optimization algorithms, especially for optimizing deep neural networks. Formally, the goal of optimization is to find a weight vector \(\hat{\theta}\) in parameter space \(\mathbb{R}^{m}\) that minimizes a loss \(L(\theta)\). The GD algorithm updates the model weights in the direction of steepest descent of the loss: \[\theta_{n+1}=\theta_{n}-\alpha\cdot\nabla L(\theta_{n}), \tag{1}\] where \(\alpha\) is the learning rate. SGD replaces the gradient \(\nabla L(\theta)\) with a mini-batch gradient \(\nabla\hat{L}_{i}(\theta)\), where \(\hat{L}_{i}\) is the loss computed on mini-batch data instead of the whole dataset. The continuous gradient flow is defined as a curve \(\theta(t)\) that satisfies the following ordinary differential equation (ODE): \[\frac{d}{dt}\theta(t)=-\nabla_{\theta}L(\theta(t)). \tag{2}\] It is easy to show that when the learning rate is sufficiently small, the discrete updates \(\{\theta_{n}\}_{n=0}^{\infty}\) computed by Eq.(1) stay close to the values \(\{\theta(t_{n})\}_{n=0}^{\infty}\) where \(t_{n}=n\alpha\). Variants based on GD/SGD, such as AdaGrad [2], RMSProp [11], and Adam [12], have been developed in recent years. Despite its numerous successes in practical optimization tasks such as optimizing deep neural networks, GD/SGD may suffer from numerical instability with respect to some key hyper-parameters, such as the learning rate and batch size. For example, if the learning rate is misspecified, GD/SGD may numerically diverge, and the model training fails. The main reason is the _stiffness_ of the gradient flow dynamics. Typically, the gradient flow dynamics is called a _stiff ODE_ when the gap between the maximum and minimum eigenvalues of the Hessian matrix is large [23]. We can simply perform a linearization of the gradient flow (2) and obtain \[\frac{d}{dt}\tilde{\theta}(t)=-\nabla_{\theta}^{2}L(\tilde{\theta}(t))\cdot\tilde{\theta}(t). \tag{3}\] The largest eigenvalue of the Hessian dictates the fastest time-scale of the ODEs. In the language of numerical analysis, to ensure the numerical stability of GD, we need \(\alpha\leq 2/\lambda_{\max}(\nabla_{\theta}^{2}L(\theta))\), where \(\lambda_{\max}(\nabla_{\theta}^{2}L(\theta))\) is the maximum eigenvalue of the Hessian matrix [1].
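The stability constraint \(\alpha\leq 2/\lambda_{\max}\) can be checked directly on a one-dimensional quadratic loss, where GD has the closed-form iteration \(\theta_{n+1}=(1-\alpha\lambda)\theta_{n}\). The following minimal sketch uses illustrative numbers.

```python
import numpy as np

def gd_on_quadratic(lam, alpha, theta0=1.0, steps=100):
    # GD on L(theta) = lam * theta^2 / 2, whose gradient is lam * theta:
    # theta_{n+1} = theta_n - alpha * lam * theta_n = (1 - alpha * lam) * theta_n.
    theta = theta0
    for _ in range(steps):
        theta = (1.0 - alpha * lam) * theta
    return theta

lam = 1e4                                 # Hessian eigenvalue lambda_max
print(gd_on_quadratic(lam, 1.9 / lam))    # |1 - alpha*lam| < 1: converges to 0
print(gd_on_quadratic(lam, 2.1 / lam))    # |1 - alpha*lam| > 1: diverges
```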
From the theory of numerical analysis, GD/SGD is not suitable for stiff ODEs, because a very small learning rate and a very large number of iterations are required to maintain numerical stability. One of the outstanding first-order solvers with strong stability for stiff ODEs is the implicit (backward) Euler method: \[\theta_{n+1}=\theta_{n}-\alpha\cdot\nabla L(\theta_{n+1}), \tag{4}\] where a large learning rate can be used. Eq.(4) is also known as the implicit gradient descent (IGD) or implicit stochastic gradient descent (ISGD) method, as the next iterate \(\theta_{n+1}\) appears implicitly on the right side of Eq.(4) and cannot be computed explicitly. Physics-informed neural networks (PINNs) are neural networks with outputs constrained to approximately satisfy a system of partial differential equations (PDEs) by using a regularization functional \(\mathcal{R}(u_{\theta}(\mathbf{x}))\) that typically represents the residual of the PDEs. A general loss function representation of PINNs takes the form \[L(\theta)=\frac{1}{N_{u}}\sum_{i=1}^{N_{u}}|\mathbf{u}_{i}-u_{\theta}(\mathbf{x}_{i})|^{2}+\mathcal{R}(u_{\theta}(\mathbf{x})), \tag{5}\] where the given input-output pairs \((\mathbf{x}_{i},\mathbf{u}_{i})\) correspond to the initial/boundary conditions of the PDEs. The most popular optimizers for training PINNs are the gradient descent based Adam optimizer and the quasi-Newton based L-BFGS optimizer [13]. However, the additional regularization term \(\mathcal{R}(u_{\theta}(\mathbf{x}))\) has been shown to _increase the stiffness_ of the gradient dynamics [20], causing model training failures especially when the target functions to be approximated exhibit high-frequency or multi-scale features. Typically, for stiff problems, L-BFGS is more likely to be stuck at a bad local minimum, and Adam may need a very small learning rate and a very large number of iterations. We claim that IGD/ISGD is more stable than GD/SGD and L-BFGS in PINN training when fitting multi-scale solutions. As an example, Figure 1 contrasts these approaches. As the solution of the Poisson equation changes from smooth to multi-scale, the maximum eigenvalue of the Hessian increases significantly and the gradient flow dynamics of the PINN becomes stiff: both Adam and L-BFGS become divergent while our IGD/ISGD remains convergent. ### Contributions Our main contributions can be summarized in the following points: * We first propose to employ the IGD/ISGD method to train PINNs. We theoretically and numerically show that IGD/ISGD can overcome the stiffness in the gradient flow dynamics of PINNs, especially for PDEs with multi-scale solutions. * We use the practical L-BFGS and Adam optimizers to deal with the implicit updates in IGD/ISGD, which is effective in practice. The computational cost is comparable to Adam. Furthermore, the method is stable with respect to the learning rate and batch size, making it easier for non-experts to carry out neural network training tasks. * The IGD global convergence property is proven. We theoretically prove that for two-layer fully connected neural networks with a large number of hidden nodes, randomly initialized IGD converges to a globally optimal solution at a linear convergence rate for the quadratic loss function. ### Related Work **Gradient Descent.** Global convergence of gradient descent based methods has been proved when training deep neural networks despite the objective function being non-convex [14, 15, 16, 17].
The dynamics of neural network weights under GD converge to a point that is close to the minimum norm solution under proper conditions [2]. Toulis and his collaborators [18, 19, 20] studied implicit stochastic gradient descent for its robustness and generalization ability. The IGD/ISGD method was also applied to optimize the k-means clustering problem [21] and the matrix factorization loss function that appears in recommendation systems [23], and the convergence time was effectively improved. **PINNs.** With the rapid growth of deep learning, using neural networks to represent PDE solutions has attracted the attention of many researchers. Based on the early studies of [18, 19], Raissi, Perdikaris, and Karniadakis (2019) proposed the pioneering work of PINNs to solve both forward and inverse problems involving nonlinear PDEs. PINNs have demonstrated remarkable power in applications including fluid dynamics [17, 16, 18, 19], biomedical engineering [15], meta-material design [14, 13], software packages (Lu et al., 2021), and numerical simulators (Hennigh et al., 2021; Cai et al., 2021). Adaptive activation functions can be applied to accelerate PINN training (Jagtap, Kawaguchi, and Em Karniadakis, 2020; Jagtap, Kawaguchi, and Karniadakis, 2020; Jagtap et al., 2022). However, despite early empirical success, the original formulations of PINNs often struggle to handle problems exhibiting high-frequency and multi-scale behavior. Recent works by Wang, Teng, and Perdikaris (2021); Wang, Yu, and Perdikaris (2022); Wang, Wang, and Perdikaris (2021) have identified two fundamental weaknesses in conventional PINN formulations. The first is the remarkable discrepancy in convergence rate between the data-based loss term and the physics-based loss term. The second is related to spectral bias, which indeed exists in PINN models and is the leading reason that prevents them from accurately approximating high-frequency or multi-scale functions. In fact, they demonstrated that the gradient flow of PINN models becomes increasingly stiff for PDE solutions exhibiting high-frequency or multi-scale behavior. This result motivates us to use robust implicit numerical schemes such as IGD/ISGD for the numerical solution of the gradient flow of PINN models.

Figure 1: _1D Poisson equation_: Results for the heuristic example in Section 2.2 obtained by training a conventional PINN (5-layer, 200 hidden units, \(\tanh\) activations) with the gradient descent based Adam optimizer, the quasi-Newton based L-BFGS optimizer, and our ISGD optimizer. All eigenvalues of \(\nabla_{\theta}^{2}L(\theta)\) are computed and arranged in increasing order. (a): smooth solution \(u_{L}(x)=\sin(2\pi x)\), the maximum eigenvalue is 1.1e+04; non-stiff, and all three optimizers trained well. (b): solution with multi-scale features \(u_{H}(x)=\sin(2\pi x)+0.1\sin(50\pi x)\), the maximum eigenvalue is 4.6e+08; stiff, and both Adam and L-BFGS failed to train, while our ISGD trained well.

### Organization of the paper In Section 2, we present the methodology of the proposed IGD/ISGD method. The PINNs framework is also introduced briefly for completeness. Two heuristic examples are presented to show the strong stability of the IGD/ISGD method. In Section 3, we analyze the training dynamics of the IGD/ISGD method when applied to neural network training tasks. Some technical proofs are given in the Appendix. In Section 4, we report various computational examples for inferring the solution of ordinary/partial differential equations by PINNs.
Additional computational examples for regression and classification problems are given in the Appendix. Finally, we conclude in Section 5 with a summary. ## 2 Methodology ### Physics-informed neural networks PINNs are neural networks that embed differential equations into neural network training. The initial/boundary condition data of the differential equations are treated as the supervised learning component in the objective loss function, while the residual of the differential equations is applied as an unsupervised regularization factor in the objective loss function. We consider a parametrized PDE system given by: \[\begin{array}{l}\mathcal{F}(\mathbf{x},u,u_{x},\ldots,\lambda)=0,\quad\mathbf{x}\in\Omega,\\ u(\mathbf{x})=g_{0}(\mathbf{x}),\quad\mathbf{x}\in\partial\Omega,\end{array}\] where \(\mathbf{x}\) denotes the spatial and time coordinates, \(u=u(\mathbf{x})\) is the solution to the PDE with boundary/initial data \(g_{0}(\mathbf{x})\), \(\mathcal{F}\) denotes the PDE residual, and \(\lambda\) is the PDE parameter. For example, \(\mathcal{F}=-u_{xx}-f(x)=0\) is the simplest 1D Poisson equation for a given function \(f(x)\). The vanilla PINN uses a fully connected feed-forward neural network \(u_{\theta}(x)\) to approximate the solution \(u(x)\) by minimizing the following loss function: \[L(\theta)=\omega_{d}L_{data}+\omega_{f}L_{PDE}, \tag{6}\] where \[L_{data} = \frac{1}{N_{d}}\sum_{j=1}^{N_{d}}|u_{\theta}(\mathbf{x}_{d}^{j})-g_{0}(\mathbf{x}_{d}^{j})|^{2},\] \[L_{PDE} = \frac{1}{N_{f}}\sum_{i=1}^{N_{f}}|\mathcal{F}(\mathbf{x}_{f}^{i})|^{2}.\] Here, \(\{\mathbf{x}_{d}^{j}\}_{j=1}^{N_{d}}\) represents the training data points on \(\partial\Omega\) while \(\{\mathbf{x}_{f}^{i}\}_{i=1}^{N_{f}}\) represents the set of residual points in \(\Omega\). \(\omega_{f}\) and \(\omega_{d}\) are the user-specified weighting coefficients for the different loss terms. The first term \(L_{data}\) includes the known boundary/initial conditions and experimental data, which is the usual supervised data-driven part of the neural network. To compute the residuals in the loss function, automatic differentiation is applied to compute the derivatives of the solution with respect to the independent variables. This constitutes the physics-informed part of the neural network as given by the second term \(L_{PDE}\). The resulting optimization problem is to find the minimum of the loss function by optimizing the trainable parameters \(\theta\). Gradient descent based first-order optimizers such as SGD and Adam (Kingma and Ba, 2014), or quasi-Newton based optimizers like L-BFGS (Liu and Nocedal, 1989), are widely used in PINN training. However, as Wang, Yu, and Perdikaris (2022) claimed, "...PINNs using fully connected architectures often fail to achieve stable training and produce accurate predictions, especially when the underlying PDE solutions contain high-frequencies or multi-scale features". The gradient flow dynamics of PINNs becomes stiff as multi-scale phenomena appear, so explicit GD based optimizers may be unstable, and L-BFGS is more likely to be stuck at a bad local minimum. As we mentioned in the previous section, implicit schemes like IGD/ISGD are more stable and can overcome the stiffness problems. Two illustrative examples are presented to show the robustness of IGD/ISGD in the next section. ### Heuristic examples with stability In this section, we present two heuristic examples to show the stability of IGD/ISGD and the instability of GD/SGD.
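Before turning to the examples, note that the loss in Eq. (6) can be assembled with automatic differentiation. The sketch below does this for the 1D Poisson residual \(\mathcal{F}=-u_{xx}-f(x)\); the function and variable names are illustrative.

```python
import torch

def pinn_loss(u_net, x_data, u_data, x_res, f, w_d=1.0, w_f=1.0):
    # Data term of Eq. (6): supervised boundary/initial conditions.
    l_data = ((u_net(x_data) - u_data) ** 2).mean()
    # PDE term of Eq. (6): residual of -u''(x) = f(x) at collocation points,
    # with derivatives obtained by automatic differentiation.
    x = x_res.clone().requires_grad_(True)
    u = u_net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    l_pde = ((-d2u - f(x)) ** 2).mean()
    return w_d * l_data + w_f * l_pde
```

Any of the optimizers compared in this paper can then be applied to this loss; the heuristic examples that follow probe how they behave once the loss landscape becomes stiff.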
**Analytical stiff problem.** The first example theoretically analyzes the learning-rate constraint in the gradient flow dynamics of stiff problems. We denote a fabricated loss function by \[L(\theta_{1},\theta_{2})=\frac{K_{1}}{2}(\theta_{1}-\theta_{1}^{*})^{2}+\frac{K_{2}}{2}(\theta_{2}-\theta_{2}^{*})^{2},\] where \(\theta_{i}\in\mathbb{R},\ i=1,2\) are two parameters to be optimized, and \(K_{i}>0,\ i=1,2\) are two constants. The eigenvalues of the Hessian matrix of \(L(\theta_{1},\theta_{2})\) are characterized by \(K_{1}\) and \(K_{2}\). When \(K_{1}\) and \(K_{2}\) differ in scale, for example \(K_{1}=10^{-4}\) and \(K_{2}=10^{4}\), the gradient flow of the loss function suffers from the stiffness phenomenon. A direct computation shows that the loss function update procedure of GD has the following relation: \[\frac{L(\theta_{1}^{n+1},\theta_{2}^{n+1})}{L(\theta_{1}^{n},\theta_{2}^{n})}\leq\max\{(1-\alpha K_{1})^{2},(1-\alpha K_{2})^{2}\}. \tag{7}\] Typically, we need \(D=\max\{(1-\alpha K_{1})^{2},(1-\alpha K_{2})^{2}\}\leq 1\) to guarantee loss decay; this is ensured, for instance, by taking \(\alpha\leq\frac{1}{\max\{K_{1},K_{2}\}}\). When \(K_{1}=10^{-4}\) and \(K_{2}=10^{4}\), we have \(\alpha\leq 10^{-4}\) and \(D\leq 1-10^{-8}\), meaning that the loss decays very slowly, and a very large number of iterations (at least \(\mathcal{O}(10^{8})\)) is needed to converge. For a large learning rate \(\alpha\), the loss decay rate \(D\) may exceed 1, and the loss may increase as the iterations increase, causing numerical instability in the gradient flow dynamics computation. For the IGD method, the loss function update procedure has the following relation: \[\frac{L(\theta_{1}^{n+1},\theta_{2}^{n+1})}{L(\theta_{1}^{n},\theta_{2}^{n})}\leq\max\{\frac{1}{(1+\alpha K_{1})^{2}},\frac{1}{(1+\alpha K_{2})^{2}}\}. \tag{8}\] The loss decay rate \(D=\max\{\frac{1}{(1+\alpha K_{1})^{2}},\frac{1}{(1+\alpha K_{2})^{2}}\}\) satisfies \(D<1\) automatically for all learning rates \(\alpha>0\), regardless of the scales of \(K_{1},K_{2}\), and \(D\) is even smaller for larger \(\alpha\). This shows the strong stability of IGD in dealing with stiffness phenomena. **1D Poisson equation with multi-scale solution.** This heuristic example shows the advantage of IGD/ISGD when the gradient flow dynamics of the PINN is stiff. We consider a simple 1D Poisson equation \[-\Delta u(x)=f(x),\quad x\in(0,1) \tag{9}\] subject to the boundary condition \[u(0)=u(1)=0.\] We consider two fabricated solutions: one is \(u_{L}(x)=\sin(2\pi x)\) exhibiting low frequency on the whole domain, and another is \(u_{H}(x)=\sin(2\pi x)+0.1\sin(50\pi x)\) exhibiting low frequency in the macro-scale and high frequency in the micro-scale. Though this example is simple and pedagogical, it resembles many practical scenarios with multi-scale phenomena. We represent the unknown solution \(u(x)\) by a 5-layer fully-connected neural network \(u_{\theta}(x)\) with 200 units per hidden layer. \(N_{r}=1000\) training points \(\{x_{i},f(x_{i})\}\) are uniformly sampled in the interval \((0,1)\). Figure 1 shows the results obtained by training the PINN with the gradient descent based Adam optimizer (Kingma and Ba 2014) with default settings for a maximum of \(10^{7}\) epochs, the quasi-Newton based L-BFGS optimizer (Liu and Nocedal 1989) with default settings, and our ISGD method with learning rate 0.1 for a maximum of \(10^{4}\) epochs. We observe that all three optimizers can train the PINN well for the smooth solution \(u_{L}(x)\), where the dynamics is non-stiff.
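The decay rates in Eqs. (7) and (8) can be verified numerically. On the quadratic loss of the analytical stiff problem, the implicit update of Eq. (4) has the closed form \(\theta_{n+1}=(\theta_{n}+\alpha K\theta^{*})/(1+\alpha K)\) per coordinate; a minimal sketch with the \(K\) values from the text follows.

```python
import numpy as np

K = np.array([1e-4, 1e4])          # curvatures K1, K2 from the text
theta_star = np.array([0.0, 0.0])  # minimizer of the fabricated loss
alpha = 0.5                        # far above GD's stable range

def loss(theta):
    return 0.5 * np.sum(K * (theta - theta_star) ** 2)

theta_gd = np.array([1.0, 1.0])
theta_igd = np.array([1.0, 1.0])
for _ in range(10):
    # Explicit GD (Eq. 1): blows up along the stiff K2 direction.
    theta_gd = theta_gd - alpha * K * (theta_gd - theta_star)
    # Implicit GD (Eq. 4), solved in closed form for this quadratic loss.
    theta_igd = (theta_igd + alpha * K * theta_star) / (1.0 + alpha * K)

print(f"GD loss:  {loss(theta_gd):.3e}")   # astronomically large (diverged)
print(f"IGD loss: {loss(theta_igd):.3e}")  # decayed monotonically
```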
As the multi-scale solution \(u_{H}(x)\) appears, the maximum eigenvalue of the Hessian rises significantly from 1.1e+04 to 4.6e+08. The gradient flow dynamics of the PINN becomes stiff, and the popular Adam optimizer is incapable of training the PINN to the correct solution even after a million training epochs. The L-BFGS optimizer also failed to train. As a comparison, our ISGD method can train the PINN well both for the smooth \(u_{L}(x)\) and for the multi-scale \(u_{H}(x)\), with a larger learning rate and fewer iterations. ### Loss decay of GD/IGD Wang, Teng, and Perdikaris (2021) show that the loss decay of GD is \[L(\theta_{n+1})-L(\theta_{n})=\alpha\left\|\nabla_{\theta}L(\theta_{n})\right\|_{2}^{2}\left(-1+\frac{1}{2}\alpha\sum_{i=1}^{N}\lambda_{i}y_{i}^{2}\right), \tag{10}\] where \(\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{N}\) are eigenvalues of the Hessian matrix \(\nabla_{\theta}^{2}L(\xi)\), and \(\mathbf{y}=(y_{1},...,y_{N})\) is a normalized vector. When \(\{\theta_{n}\}_{n=0}^{\infty}\) reaches a local or global minimum, the Hessian matrix \(\nabla_{\theta}^{2}L(\xi)\) is semi-positive definite and \(\lambda_{i}\geq 0\) for all \(i=1,...,N\). Moreover, for the multi-scale solution \(u_{H}(x)\), computational results show that many eigenvalues of \(\nabla_{\theta}^{2}L(\xi)\) are very large (see Figure 1), i.e., the gradient flow dynamics is stiff. As a result, it is quite possible that \(L(\theta_{n+1})-L(\theta_{n})>0\), which implies that the GD method fails to decrease the loss. A similar computation (see the Appendix) shows that the loss decay of IGD is \[L(\theta_{n+1})-L(\theta_{n})=\alpha\left\|\nabla_{\theta}L(\theta_{n+1})\right\|_{2}^{2}\left(-1-\frac{1}{2}\alpha\sum_{i=1}^{N}\lambda_{i}y_{i}^{2}\right), \tag{11}\] which means that the loss will always decay regardless of the stiffness of the gradient flow dynamics of PINNs. In addition, the linear convergence rate of IGD is strictly proven in Section 3. ### Implementation of the IGD/ISGD method Although the IGD/ISGD method of Eq.(4) looks simple and is theoretically stable, one difficulty that cannot be ignored is the implicitness of the nonlinear Eq.(4). It can also be expressed as the celebrated proximal point algorithm (Yin et al. 2018; Rockafellar 1976): \[\theta_{n+1}=\arg\min_{\theta}\left\{\frac{1}{2}\left\|\theta-\theta_{n}\right\|^{2}+\alpha\cdot L(\theta)\right\}. \tag{12}\] Hence, when \(\alpha\) is sufficiently small, \(\theta_{n+1}\) stays close to the previous iterate \(\theta_{n}\), with the original loss acting as a regularizer. This sub-optimization task requires additional computation and brings difficulties for the whole optimization process. To reduce the computational burden, we adopt a practical "ISGD,L-BFGS" (or "ISGD,Adam") optimizer for PINN training with multi-scale solutions. Here "ISGD,L-BFGS" means that we first use ISGD with a large learning rate for a certain number of iterations, and then switch to L-BFGS with default settings. In the sub-optimization problem (12), we also apply L-BFGS to compute \(\theta_{n+1}\). The L-BFGS optimizer does not require a learning rate, and the neural network is trained until convergence, so the number of iterations is also omitted for L-BFGS (Liu and Nocedal 1989). The reason L-BFGS works well inside the "ISGD,L-BFGS" optimizer is that both the sub-optimization problem and the subsequent optimization problem have a good initial point \(\theta_{n}\), and are thus easier for L-BFGS to solve with good convergence properties.
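One outer step of Eq. (12) with L-BFGS as the inner solver might look as follows; this is a sketch in which PyTorch's `torch.optim.LBFGS` stands in for the L-BFGS solver with default settings, and `loss_fn` is any callable returning the PINN loss, e.g. the `pinn_loss` sketched earlier.

```python
import torch

def isgd_step(model, loss_fn, alpha, lbfgs_iters=20):
    # One implicit update: theta_{n+1} = argmin_theta
    #   0.5 * ||theta - theta_n||^2 + alpha * L(theta)      (Eq. 12)
    theta_n = [p.detach().clone() for p in model.parameters()]
    inner = torch.optim.LBFGS(model.parameters(), max_iter=lbfgs_iters)

    def closure():
        inner.zero_grad()
        prox = sum(((p - p0) ** 2).sum()
                   for p, p0 in zip(model.parameters(), theta_n))
        objective = 0.5 * prox + alpha * loss_fn(model)
        objective.backward()
        return objective

    inner.step(closure)  # model parameters now hold theta_{n+1}
```

Running this step for \(K_{0}\) outer iterations and then switching to a plain optimizer reproduces the two-phase scheme described above.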
The "ISGD,Adam" optimizer replaces L-BFGS with the Adam optimizer (with default settings) in the "ISGD,L-BFGS" optimizer when the parameters of the PINN are too numerous for the quasi-Hessian matrix computation. The details are illustrated in Algorithm 1. **Input**: initial \(\theta_{0}\); ISGD learning rate \(\alpha\) and maximum iterations \(K_{0}\); the inner Adam learning rate \(\gamma\) and maximum iterations \(K_{1}\); the outer Adam learning rate \(\eta\) and maximum iterations \(K_{2}\) **Output**: the optimized \(\theta^{*}\)
```
1: Let n = 0.
2: while n < K_0 do
3:   Let theta~_0 = theta_n and k = 0.
4:   while k <= K_1 do
5:     Update theta~_{k+1} by one Adam step (learning rate gamma) on the
       proximal objective (1/2)||theta - theta_n||^2 + alpha * L(theta)
       of Eq. (12); let k = k + 1.
6:   Let theta_{n+1} = theta~_{K_1+1} and n = n + 1.
7: Starting from theta_{K_0}, run Adam with learning rate eta for K_2
   iterations to obtain theta*.
```
**Algorithm 1** The "ISGD,Adam" optimizer

### PINN for ordinary differential equations Singularly perturbed ordinary differential equations arise in many fields including gas dynamics, chemical reactions, fluid mechanics, elasticity, etc. Finding their solution is a popular and difficult problem because the equation contains a very small parameter \(\epsilon\). We consider the second-order linear singularly perturbed boundary value differential equation \[\left\{\begin{array}{l}-\epsilon y^{\prime\prime}(x)+y^{\prime}(x)=f(x),\quad x\in(0,1),\\ y(0)=0,\quad y(1)=0.\end{array}\right. \tag{19}\] The true solution is chosen as \(y(x)=\frac{1-e^{\frac{x}{\epsilon}}}{e^{\frac{1}{\epsilon}}-1}+\sin(\frac{\pi}{2}x)\), and \(f(x)\) is given according to Eq.(19). \(\epsilon>0\) is a constant; when \(\epsilon\) is very small, a boundary layer exists near the boundary \(x=1\).
Let \(y_{\theta}(x)\) be the neural network approximation of \(y(x)\); then the PINN loss function can be defined as \[L(\theta) = \frac{1}{2}\left[|y_{\theta}(0)-y(0)|^{2}+|y_{\theta}(1)-y(1)|^{2}\right]+\frac{1}{N}\sum_{i=1}^{N}\left|-\epsilon y_{\theta}^{\prime\prime}(x_{i})+y_{\theta}^{\prime}(x_{i})-f(x_{i})\right|^{2}.\] We choose \(N=400\) randomly sampled points to compute the loss function, a batch size of 40 for a small learning rate \(\alpha=0.001\), and a full batch for a large learning rate \(\alpha=0.5\). A neural network with 4 hidden layers, each with 50 units and _tanh_ activations, is applied in all the computations. The results are shown in Figure 2. For the case \(\epsilon=2\), the true solution is smooth. As shown in Fig. 2(a)(b), we find that the ISGD optimizer can significantly improve training convergence and remain stable for different learning rates. For the case \(\epsilon=0.01\), as shown in Fig. 2(f), the true solution has a boundary layer near \(x=1\), and the large gradient creates difficulties for the optimizers. As shown in Fig. 2(d)(e), more epochs and a smaller learning rate are required for convergence in this singular regime. While the SGD and Adam optimizers do not converge for large learning rates, ISGD still gives stable convergent results, demonstrating the robustness of the proposed method. ### PINN for Poisson equation Poisson's equation is an elliptic partial differential equation of broad utility in theoretical physics. We consider the Poisson equation on the domain \(\Omega=[0,1]\times[0,1]\) \[\left\{\begin{array}{l}-\frac{\partial^{2}u}{\partial x^{2}}-\frac{\partial^{2}u}{\partial y^{2}}=f(x,y),\quad(x,y)\in\Omega,\\ u(x,y)=0,\quad(x,y)\in\partial\Omega.\end{array}\right. \tag{20}\]

\begin{table}
\begin{tabular}{c|l c c} \hline Example & Optimizer & Learning rate & \#Iterations \\ \hline 4.1 & SGD(Adam) & 0.001 & 120,000 \\ \cline{2-4} (\(\epsilon=2\)) & ISGD, Adam & 0.5, 0.001 & 102,000 \\ \hline 4.1 & SGD(Adam) & 0.001 & 400,000 \\ \cline{2-4} (\(\epsilon=0.01\)) & ISGD, Adam & 0.5, 0.001 & 360,000 \\ \hline & SGD(Adam) & 0.0005 & 2,000,000 \\ \cline{2-4} 4.2 & ISGD, Adam & 0.5, 0.0005 & 1,100,000 \\ \hline & SGD(Adam) & 0.0005 & 1,000,000 \\ \cline{2-4} 4.3 & ISGD, Adam & 0.5, 0.0005 & 550,000 \\ \hline \end{tabular}
\end{table} Table 1: Hyper-parameters used in the three optimizers for the following 3 examples. "SGD(Adam)" indicates that SGD shares the same hyper-parameters as Adam. "ISGD, Adam" refers to Algorithm 1.

Figure 2: Optimization training results for the ODE Eq.(19). Top row: \(\epsilon=2.0\). Bottom row: \(\epsilon=0.01\).
We see that neither SGD nor Adam can train well as learning rate increases, but our ISGD trains well for different values of \(\alpha\). The PINN prediction is plotted in Fig. 3(c), and the absolute error is shown in Fig. 3(d), with an absolute error less than \(0.2\%\). We see that the PINN trained by the ISGD optimizer can obtain stable and accurate results for the Poisson equation (20). ### PINN for Helmholtz equation The Helmholtz equation is one of the fundamental equations of mathematical physics arising in many physical problems, such as vibrating membranes, acoustics, and electromagnetism equations. We solve the two-dimensional Helmholtz equation given by \[\left\{\begin{array}{l}\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^ {2}u}{\partial y^{2}}+k^{2}u(x,y)=f(x,y),\quad(x,y)\in\Omega,\\ u(x,y)=0,\quad(x,y)\in\partial\Omega.\end{array}\right. \tag{21}\] The exact solution for \(k=4\) is \(u(x,y)=\sin(\pi x)\sin(4\pi y)\), and the force term \(f(x,y)\) is given by the Eq.(21). We choose \(N_{b}=400\) randomly sampled points on \(\partial\Omega\), and \(N_{f}=4,000\) randomly sampled points in \(\Omega\) to compute the loss function. A neural network with 6 hidden layers, every 100 units with _tanh_ activations, is applied in all the computations. The three optimizer training results for \(\alpha=0.0005\) and \(0.5\) are shown in Fig. 4(a) and Fig. 4(b), respectively. The PINN solution is plotted in Fig. 4(c), and the absolute error is shown in Fig. 4(d), with an absolute error less than \(0.7\%\). We see that the PINN trained by the ISGD optimizer can obtain stable and accurate results for the Helmholtz equation (21). ## 5 Conclusion To overcome the numerical instability of traditional gradient descent methods to some key hyper-parameters, a stable IGD/ISGD method was proposed, analyzed and tested in this paper. The IGD/ISGD method includes implicit updates, and the L-BFGS or Adam optimizer can be combined to forward the updates. The global convergence of IGD/ISGD are theoretically analyzed and proven. We apply the IGD/ISGD method to train deep as well as physics-informed neural networks, showing that the IGD/ISGD method can effectively deal with stiffness phenomenon in the training dynamics via gradient descent. The techniques proposed in this paper stabilize the training of neural network models. This may result in making it easier for non-experts to train such models for beneficial applications, such as solving PDEs. ## Acknowledgments The first author is supported by the National Natural Science Foundation of China (No.62106103), Fundamental Research Funds for the Central Universities (No.ILA22023) and 173 Program Technical Field Fund (No.2021-JCJQ-JJ-0018). Figure 4: PINN training for Helmholtz equation (21). Figure 3: PINN training for the Poisson equation (20).
2307.03487
Learning Theory of Distribution Regression with Neural Networks
In this paper, we aim at establishing an approximation theory and a learning theory of distribution regression via a fully connected neural network (FNN). In contrast to the classical regression methods, the input variables of distribution regression are probability measures. Then we often need to perform a second-stage sampling process to approximate the actual information of the distribution. On the other hand, the classical neural network structure requires the input variable to be a vector. When the input samples are probability distributions, the traditional deep neural network method cannot be directly used, and a difficulty arises for distribution regression. A well-defined neural network structure for distribution inputs is highly desirable. There has been no mathematical model or theoretical analysis of neural network realizations of distribution regression. To overcome technical difficulties and address this issue, we establish a novel fully connected neural network framework to realize an approximation theory of functionals defined on the space of Borel probability measures. Furthermore, based on the established functional approximation results, in the hypothesis space induced by the novel FNN structure with distribution inputs, almost optimal learning rates for the proposed distribution regression model up to logarithmic terms are derived via a novel two-stage error decomposition technique.
Zhongjie Shi, Zhan Yu, Ding-Xuan Zhou
2023-07-07T09:49:11Z
http://arxiv.org/abs/2307.03487v1
# Learning Theory of Distribution Regression with Neural Networks

###### Abstract

In this paper, we aim at establishing an approximation theory and a learning theory of distribution regression via a fully connected neural network (FNN). In contrast to the classical regression methods, the input variables of distribution regression are probability measures. Then we often need to perform a second-stage sampling process to approximate the actual information of the distribution. On the other hand, the classical neural network structure requires the input variable to be a vector. When the input samples are probability distributions, the traditional deep neural network method cannot be used directly, and a difficulty arises for distribution regression. A well-defined neural network structure for distribution inputs is highly desirable. So far, there has been no mathematical model or theoretical analysis of a neural-network realization of distribution regression. To overcome technical difficulties and address this issue, we establish a novel fully connected neural network framework to realize an approximation theory of functionals defined on the space of Borel probability measures. Furthermore, based on the established functional approximation results, in the hypothesis space induced by the novel FNN structure with distribution inputs, almost optimal learning rates for the proposed distribution regression model up to logarithmic terms are derived via a novel two-stage error decomposition technique.

_Keywords_: Learning theory, distribution regression, neural networks, approximation rates, learning rates.

## 1 Introduction

Recent years have witnessed a vast number of applications of deep learning in various fields of science and engineering. Deep learning based on deep neural networks has become a powerful tool to provide various models and structures for many complex learning tasks, taking advantage of the huge improvement of computing power in the era of big data [15]. The classical fully connected neural network for learning functions of an input vector \(x=(x_{i})_{i=1}^{d}\in\mathbb{R}^{d}\) with \(J\) layers of neurons \(\{H^{(k)}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d_{k}}\}_{k=1}^{J}\) with width \(\{d_{k}\}_{k=1}^{J}\) is defined iteratively by \[H^{(k)}(x)=\sigma\left(F^{(k)}H^{(k-1)}(x)-b^{(k)}\right),\quad k=1,2,...,J, \tag{1.1}\] where \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) is an activation function acting on vectors component-wise, \(F^{(k)}\) is a \(d_{k}\times d_{k-1}\) matrix, \(b^{(k)}\) is a bias vector, and \(H^{(0)}(x)=x\) with width \(d_{0}=d\). Starting from the 1980s, the approximation theory of neural networks has been well developed in many studies, such as [8, 18, 1, 24, 4, 36, 2, 41, 21, 9]. The classical theory of deep neural networks centered on generalization with FNNs has matured over the past few decades, and regression with neural networks remains a popular topic. There have been several studies on learning rates of classical least squares regression with deep neural networks (DNNs), and satisfactory learning rates have been derived [5, 28]. However, in the existing work related to regression analysis with DNNs, the input data are still limited to Euclidean vectors. A natural question arises when the data we need to handle are probability measures, since the neural network structure defined in the aforementioned works is not suitable for handling such distribution data directly.
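For concreteness, the vector-input iteration (1.1) amounts to the following few lines; this is a sketch assuming NumPy, with ReLU activation, widths, and random parameters chosen purely for illustration.

```python
# A minimal sketch of the classical vector-input FNN (1.1), assuming NumPy.
import numpy as np

def fnn_forward(x, Fs, bs, sigma=lambda u: np.maximum(u, 0.0)):
    """Iterate H^(k)(x) = sigma(F^(k) H^(k-1)(x) - b^(k)), starting from H^(0)(x) = x."""
    H = x
    for F, b in zip(Fs, bs):
        H = sigma(F @ H - b)
    return H

rng = np.random.default_rng(0)
dims = [3, 8, 8]                 # d_0 = d = 3 and two hidden widths (illustrative)
Fs = [rng.standard_normal((dims[k + 1], dims[k])) for k in range(len(dims) - 1)]
bs = [rng.standard_normal(dims[k + 1]) for k in range(len(dims) - 1)]
print(fnn_forward(rng.standard_normal(dims[0]), Fs, bs))  # a vector in R^8
```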
How, then, can we utilize deep neural networks to handle such distribution data? Solving this problem is desirable, since in recent years many applications deal with distribution data [27, 33]. One of the most popular objects in statistical learning theory is distribution regression. However, it is still difficult to perform distribution regression with FNNs, since even an initial model realizing distribution regression with neural networks is lacking. One of the main difficulties in realizing distribution regression with neural networks is to establish an appropriate network structure, induced by FNNs, that is fit for distribution inputs. To overcome the limitation of existing neural network techniques and answer the above question, in this paper we provide a novel FNN structure to realize distribution regression, establish a meaningful regression model, and finally derive satisfactory learning rates. Recently, there has been some progress in distribution regression, which aims at regressing from probability measures to real-valued responses [26, 32, 33, 13, 37]. In such problems, in contrast to classical regression problems with vector inputs, we need to handle inputs consisting of probability measures. Hence the sampling process often involves two stages. In the first stage, the distribution samples are selected from some meta distribution. To further get access to the actual information of the probability measure samples, we need to draw samples from the distribution samples; this is the so-called second-stage process. In the literature of distribution regression, most of the existing works and their related regression models are based on a kernel mean embedding technique that transforms probability measure samples to the space of their mean embeddings, which is compact. These works choose the hypothesis space to be the reproducing kernel Hilbert space induced by some Mercer kernel \(K\) and, using regularization and integral operator techniques (e.g. [29, 30]), derive some nice learning rates. However, to realize the learning theory of distribution regression with FNNs, the aforementioned traditional kernel-based techniques break down. One of the main reasons is that the architecture of the neural network is essentially different from that of the kernel-based hypothesis space. Furthermore, as we need to handle distribution data instead of classical Euclidean vectors in neural networks, the classical neural network structure as in (1.1) cannot be utilized directly. Hence, some new neural network structures must be established for handling distributions. To this end, we will first define a novel neural network structure with the variable of probability measures, given in Definition 1 below. In the hypothesis space induced by the novel network structure, we establish a distribution regression framework with probability samples drawn from a class of Wasserstein space \((\mathcal{P}(\Omega),W_{p})\), where \(\mathcal{P}(\Omega)\) denotes a collection of Borel probability measures defined on a compact space \(\Omega\subset\mathbb{R}^{d}\), \(\mathcal{P}(\Omega)\) is naturally a subset of the space \(\mathcal{B}(\Omega)\) consisting of all Borel measures on \(\Omega\), and \(W_{p}\) denotes the well-known Wasserstein metric that will be defined later. We first introduce our two-stage distribution regression model, where the inputs are distribution samples on the Wasserstein space \((\mathcal{P}(\Omega),W_{p})\).
In our distribution regression model, we assume that the distribution samples cannot be observed directly; the data are generated through a two-stage sampling process. In the first stage of sampling, the dataset \(D=\{(\mu_{i},y_{i})\}_{i=1}^{m}\) is i.i.d. sampled from a meta Borel distribution \(\rho\) on \(\mathcal{U}\times\mathcal{Y}\), where \(\mathcal{U}=\mathcal{P}(\Omega)\), and \(\mathcal{Y}=\mathbb{R}\) is the output space. In the second stage of sampling, the dataset is \(\hat{D}=\{(\{x_{i,j}\}_{j=1}^{n_{i}},y_{i})\}_{i=1}^{m}\), where \(\{x_{i,j}\in\Omega\}_{j=1}^{n_{i}}\) are i.i.d. sampled from the probability distribution \(\mu_{i}\), which is a first-stage sample. If we use the empirical terminology \(\hat{\mu}_{i}^{n_{i}}=\frac{1}{n_{i}}\sum_{j=1}^{n_{i}}\delta_{x_{i,j}}\), then we can also denote the second-stage sample set by \[\hat{D}=\{(\hat{\mu}_{i}^{n_{i}},y_{i})\}_{i=1}^{m}.\] The classical fully connected neural network takes a vector as input, which is not suitable for distribution regression. Since the second-stage sample size \(n_{i}\) may differ across the sample distributions \(\mu_{i}\), vectorizing all \(n_{i}\) second-stage samples into one input vector yields input dimensions that vary from one first-stage sample \((\mu_{i},y_{i})\) to another, which makes the traditional FNN with vector inputs fail in the distribution regression problem. Therefore, we propose a novel general FNN structure which takes a distribution rather than a vector as the input for the learning of distribution regression. In practical settings, this network structure is able to utilize the empirical distribution \[\hat{\mu}_{i}^{n_{i}}=\frac{1}{n_{i}}\sum_{j=1}^{n_{i}}\delta_{x_{i,j}} \tag{1.2}\] of the second-stage sample rather than the vector as the input. For a Borel measure \(\mu\) and an integrable function vector \(h:\Omega\rightarrow\mathbb{R}^{D}\), if we use the vector integral notation \[\int_{\Omega}h(x)d\mu(x)=\left[\begin{array}{c}\int_{\Omega}(h(x))_{1}d\mu( x)\\ \int_{\Omega}(h(x))_{2}d\mu(x)\\ \vdots\\ \int_{\Omega}(h(x))_{D}d\mu(x)\end{array}\right],\] then the FNN scheme for distribution regression is given in the following definition.

**Definition 1** (FNN for distribution regression).: _Let \(\mu\in\mathcal{P}(\Omega)\) be an input distribution. The FNN for distribution regression \(\{h^{(j)}:\mathcal{P}(\Omega)\rightarrow\mathbb{R}^{d_{j}}\}_{j=J_{1}}^{J}\) of type \((J_{1},J)\), with depth \(J\in\mathbb{N}\), realizing level \(J_{1}\in\{0,1,...,J\}\), and widths \(\{d_{j}\}_{j=1}^{J}\), is defined iteratively by_ \[h^{(j)}(\mu)=\left\{\begin{array}{ll}\int_{\Omega}\sigma\left(F^{(J_{1})} \sigma\left(\cdots\sigma\left(F^{(1)}x-b^{(1)}\right)\cdots\right)-b^{(J_{1}) }\right)d\mu(x),&\mbox{if }j=J_{1},\\ \sigma\left(F^{(j)}h^{(j-1)}(\mu)-b^{(j)}\right),&\mbox{if }j=J_{1}+1,\ldots,J, \end{array}\right. \tag{1.3}\] _where \(\sigma\) is the ReLU activation function given by \(\sigma(u)=\max\{u,0\}\), \(F^{(j)}\in\mathbb{R}^{d_{j}\times d_{j-1}}\) is a full connection matrix with \(d_{0}=d\), and \(b^{(j)}\in\mathbb{R}^{d_{j}}\) is a bias vector._

An output of the above FNN is just a linear combination of the last layer \[c\cdot h^{(J)}(\mu),\] where \(c\in\mathbb{R}^{d_{J}}\) is a coefficient vector. In practical settings, an input probability measure always appears as an empirical distribution.
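A minimal sketch, assuming NumPy, of how the forward pass of Definition 1 acts on an empirical measure: the first \(J_{1}\) layers are applied to every atom, the integral at the realizing level reduces to a sample average, and the remaining layers act on the averaged vector (this matches the explicit form recalled next). Widths and random parameters are illustrative.

```python
# Forward pass of the distribution-input FNN (Definition 1) on an empirical
# measure mu_hat = (1/n) sum_j delta_{x_j}, assuming NumPy.
import numpy as np

relu = lambda u: np.maximum(u, 0.0)

def dist_fnn_forward(X, Fs, bs, J1):
    """X: (n, d) atoms of the empirical measure; Fs, bs: J layers of parameters."""
    H = X.T                                   # columns are the atoms x_j
    for j in range(J1):                       # inner layers, applied atom-wise
        H = relu(Fs[j] @ H - bs[j][:, None])
    h = H.mean(axis=1)                        # the integral w.r.t. mu_hat
    for j in range(J1, len(Fs)):              # outer layers on the averaged vector
        h = relu(Fs[j] @ h - bs[j])
    return h

rng = np.random.default_rng(0)
dims, J1 = [2, 16, 16, 8], 2                  # d_0 = d = 2; realizing level J_1 = 2
Fs = [rng.standard_normal((dims[k + 1], dims[k])) for k in range(3)]
bs = [rng.standard_normal(dims[k + 1]) for k in range(3)]
X = rng.standard_normal((50, dims[0]))        # n = 50 atoms
c = rng.standard_normal(dims[-1])
print(c @ dist_fnn_forward(X, Fs, bs, J1))    # scalar output c . h^(J)(mu_hat)
```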
For example, in practice, the actual input always has the form \(\hat{\mu}_{i}^{n_{i}}=\frac{1}{n_{i}}\sum_{j=1}^{n_{i}}\delta_{x_{i,j}}\), \(i=1,2,...,m\), where \(x_{i,j}\in\Omega\subset\mathbb{R}^{d}\). Then the explicit form of \(h^{(J)}(\hat{\mu}_{i}^{n_{i}})\) can be represented layer by layer from \[h^{(j)}(\hat{\mu}_{i}^{n_{i}})=\left\{\begin{array}{ll}\frac{1}{n_{i}}\sum_{s=1}^{n_{i}}\sigma\left(F^{(J_{1})}\sigma\left(\cdots\sigma\left(F^{(1)}x_{i,s}-b^{(1)}\right)\cdots\right)-b^{(J_{1})}\right),&\mbox{if }j=J_{1},\\ \sigma\left(F^{(j)}h^{(j-1)}(\hat{\mu}_{i}^{n_{i}})-b^{(j)}\right),&\mbox{if }j=J_{1}+1,\ldots,J.\end{array}\right.\tag{1.3*}\] With this representation, the FNN structure becomes flexible enough to handle such empirical inputs. If we use \(\mathcal{H}_{\mathrm{FNN}}\) to denote some hypothesis space induced by the above FNN structure for probability measures in Definition 1 (the rigorous definition will be given in Section 2.3), then a distribution regression scheme can be described as \[f_{\hat{D},\mathcal{H}_{\mathrm{FNN}}}=\arg\min_{f\in\mathcal{H}_{\mathrm{FNN}}} \frac{1}{m}\sum_{i=1}^{m}\left(f(\hat{\mu}_{i}^{n_{i}})-y_{i}\right)^{2}. \tag{1.4}\] In this paper, we first derive rates of approximating by FNNs two classes of composite functionals with distribution variables. The functional induced by ridge functions, which we call a ridge functional, will be considered first. Then the functional induced by a composite function associated with a polynomial \(Q\) will be considered. We will construct the approximation for the former functional with a FNN of \(J=2\), \(J_{1}=1\), and for the latter one with a FNN of \(J=3\), \(J_{1}=2\), both within the FNN structure given in Definition 1. Then, with the help of the functional approximation results and a covering number bound developed for the hypothesis space induced by the FNN structure defined above, we derive learning rates for the FNN distribution regression scheme. The covering number argument is in fact based on the crucial fact of the compactness of the hypothesis space, which is also the foundation of the learning theory framework. In this paper, we rigorously show that the newly defined hypothesis space with distribution inputs from the Wasserstein space is compact. In fact, for a general distribution regression scheme without kernel regularization, developing appropriate theoretical results on the learning ability in some hypothesis space is still an open question in itself. In the course of investigating the new distribution regression scheme and deriving learning rates, a novel two-stage error decomposition technique is proposed in this paper. To the best of our knowledge, such a two-stage error decomposition technique appears for the first time in the literature of the learning theory of distribution regression. It overcomes the difficulty of deriving learning rates for the non-kernel-based distribution regression scheme, which the existing literature has not yet considered. Almost optimal learning rates for distribution regression up to logarithmic terms are derived by utilizing neural networks. Our method of deriving almost optimal learning rates also highlights the importance of the FNN structure we construct in the two-stage sampling distribution regression scheme.

**Notations**: we fix some notations and terminologies related to the \(\|\cdot\|_{\infty}\) norm of functions or functionals in the different settings of this paper.
In addition to the notation \(\|\cdot\|_{\infty}\) for common vectors and matrices, without specification, for a continuous function (or say a "functional") \(f\) defined on the compact space \((\mathcal{P}(\Omega),W_{p})\), we use the notation \(\|f\|_{\infty}=\sup_{\mu\in\mathcal{P}(\Omega)}|f(\mu)|\). For a vector of functions \(h:\mathcal{P}(\Omega)\to\mathbb{R}^{D}\) defined on \(\mathcal{P}(\Omega)\), we use \(\|h\|_{\infty}=\max_{1\leq i\leq D}\sup_{\mu\in\mathcal{P}(\Omega)}|(h(\mu)) _{i}|\). For a vector of functions \(h:\Omega\to\mathbb{R}^{D}\), we also use \(\|h\|_{\infty}=\max_{1\leq i\leq D}\sup_{x\in\Omega}|(h(x))_{i}|\).

## 2 Main Results

We show the capacity of our proposed FNN for distribution regression by deriving the approximation rate for a class of composite nonlinear functionals \(f\circ L_{G}\) with the variable of probability measures and a univariate function \(f:\mathbb{R}\to\mathbb{R}\) \[f\circ L_{G}:\mu\mapsto f\left(L_{G}(\mu)\right)=f\left(\int_{\Omega}G(x)d\mu( x)\right). \tag{2.1}\] The input \(\mu\) is a probability distribution defined on a compact set \(\Omega\subset\mathbb{B}\), where \(\mathbb{B}:=\{x\in\mathbb{R}^{d}:\|x\|_{2}\leq 1\}\) is the unit ball of the Euclidean space \(\mathbb{R}^{d}\), \(L_{G}\) is the inner functional of the composite nonlinear functional \(f\circ L_{G}\) in the form of \[L_{G}(\mu)=\int_{\Omega}G(x)d\mu(x)\] defined on the space \(\mathcal{P}(\Omega)\) of all Borel probability measures on \(\Omega\), and \(G\) is a function that induces the functional \(L_{G}\). In our setting, since \(\Omega\) is compact, the Riesz-Markov-Kakutani representation theorem asserts that the space \(\mathcal{B}(\Omega)\) of all finite Borel measures on \(\Omega\) is the dual of the space \(C(\Omega)\) of continuous functions on \(\Omega\). So \(C(\Omega)\) is isometrically embedded into a closed subspace of the dual space of \(\mathcal{B}(\Omega)\). Hence, by considering a continuous function \(G\) on \(\Omega\) as the inducing function of the functional \(L_{G}\), the formulation of \(L_{G}(\mu)\) is naturally meaningful as a continuous functional defined on \(\mathcal{P}(\Omega)\subset\mathcal{B}(\Omega)\). Furthermore, since \(\Omega\) is a compact metric space equipped with the standard Euclidean metric, the space \(\mathcal{P}(\Omega)\) is also a compact metric space under the Wasserstein metric, which ensures that the learning theory architecture of the proposed neural networks for distribution regression is well-defined. In the following, we will consider two classes of functionals: one is induced by a ridge function of the form \(G(x)=g(\xi\cdot x)\), which is extremely popular in applications related to neural networks; the other is induced by a more general composite function of the form \(G(x)=g(Q(x))\) with a degree-\(q\) polynomial \(Q\).

### Rates of approximating ridge functionals

We first consider the case where \(G(x)=g(\xi\cdot x)\) is actually a ridge function, with a feature vector \(\xi\in\mathbb{R}^{d}\).
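Before stating the assumptions, it may help to note that a ridge-type target of the form (2.3) is trivial to evaluate on an empirical measure: the inner integral \(L_{G}^{\xi}(\mu)\) becomes a sample average. A sketch assuming NumPy, with \(f=\tanh\), \(g=\sin\), and \(\xi\) chosen purely for illustration.

```python
# Monte Carlo evaluation of the ridge functional f(L_G^xi(mu)) in (2.3),
# assuming NumPy; f, g, xi, and the sampling of mu are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
xi = np.array([0.6, -0.8])                     # feature vector with ||xi||_2 = 1
f, g = np.tanh, np.sin

X = rng.uniform(-0.5, 0.5, size=(10_000, 2))   # atoms of an empirical mu; Omega lies in the unit ball
print(f(g(X @ xi).mean()))                     # f( integral of g(xi . x) d mu )
```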
We set \(B_{\xi}=\|\xi\cdot x\|_{C(\Omega)}\) and assume \(g\in C^{0,1}[-B_{\xi},B_{\xi}]\), the space of Lipschitz functions on \([-B_{\xi},B_{\xi}]\), with semi-norm \(|g|_{C^{0,1}}:=\sup\limits_{x_{1}\neq x_{2}\in[-B_{\xi},B_{\xi}]}\frac{|g(x_{1 })-g(x_{2})|}{\|x_{1}-x_{2}\|_{2}}\) and \(B_{g}=\|g\|_{C([-B_{\xi},B_{\xi}])}\), and \(f\in C^{0,\beta}[-B_{G},B_{G}]\), the space of Lipschitz-\(\beta\) functions on \([-B_{G},B_{G}]\) with \(0<\beta\leq 1\), with semi-norm \(|f|_{C^{0,\beta}}:=\sup\limits_{x_{1}\neq x_{2}\in[-B_{G},B_{G}]}\frac{|f(x_{ 1})-f(x_{2})|}{\|x_{1}-x_{2}\|_{2}^{\beta}}\), in which \(B_{G}\) is defined as \[B_{G}=B_{g}+2B_{\xi}|g|_{C^{0,1}} \tag{2.2}\] and \(\|f\|_{\infty}=\|f\|_{C[-B_{G},B_{G}]}\). Then the functional to be approximated is of the form \[f\circ L_{G}^{\xi}:\mu\mapsto f\left(L_{G}^{\xi}(\mu)\right)=f\left(\int_{ \Omega}g\left(\xi\cdot x\right)d\mu(x)\right),\ \mu\in\mathcal{P}(\Omega). \tag{2.3}\] During the construction of a FNN realizing an approximation of the functional \(f\left(L_{G}(\mu)\right)\), we first need to approximate the inner functional \(L_{G}^{\xi}\) (we call it a ridge functional) induced by the feature vector \(\xi\in\mathbb{R}^{d}\), defined by \[L_{G}^{\xi}:\mu\mapsto\int_{\Omega}g\left(\xi\cdot x\right)d\mu(x). \tag{2.4}\] Our first result gives rates of approximating the composite functional (2.3) by a two-layer FNN for distribution regression as defined in Definition 1. The _free parameters_ are defined as the implicit training parameters in our approximator construction, which differ from the total training parameters in the classical FNN structure due to the special structure of our construction.

**Theorem 1**.: _Let the functional \(f\circ L_{G}^{\xi}\) be in the ridge form (2.3), where \(\xi\in\mathbb{R}^{d}\) is a feature vector, \(g\in C^{0,1}[-B_{\xi},B_{\xi}]\), and \(f\in C^{0,\beta}[-B_{G},B_{G}]\) for some \(0<\beta\leq 1\). Then for any \(N\in\mathbb{N}\), there exists a FNN of type \((1,2)\) and width \(2N+3\) having the structure of Definition 1 with \(F^{(j)}\), \(b^{(j)}\), \(j=1,2\), explicitly constructed such that the following approximation rates hold,_ \[\inf_{\|c\|_{\infty}\leq\frac{4\|f\|_{\infty}N}{B_{G}}}\left\{\sup_{\mu\in \mathcal{P}(\Omega)}\left|c\cdot h^{(2)}(\mu)-f(L_{G}^{\xi}(\mu))\right|\right\} \leq\frac{2B_{G}^{\beta}\left|f\right|_{C^{0,\beta}}+\left(2B_{\xi}|g|_{C^{0, 1}}\right)^{\beta}\left|f\right|_{C^{0,\beta}}}{N^{\beta}}. \tag{2.5}\] _The total number of free parameters in this network is \(\mathcal{N}=8N+d+12\)._

The architecture of the FNN of type \((1,2)\) we construct actually depends upon the formulation of the ridge functional \(f\left(L_{G}^{\xi}(\mu)\right)\). The functional \(L_{G}^{\xi}(\mu)\) includes the procedure of taking the integration w.r.t. the input distribution \(\mu\) of the ridge function \(g(\xi\cdot x)\), which can be approximated by a shallow net with one hidden layer. Therefore, the FNN we construct also takes the integration after the first hidden layer, corresponding to the number of hidden layers used for approximating ridge functions. Then we utilize another hidden layer outside the integration layer, because the function \(f\) composed with the functional \(L_{G}^{\xi}\) can be approximated by just one hidden layer. We end this subsection by presenting an interesting example.
Recall that the Laplace transform of a probability measure \(\mu\in\mathcal{P}(\Omega)\) is defined as \[\mathcal{L}(\mu)(\xi)=\int_{\Omega}e^{-x\cdot\xi}d\mu(x),\quad\xi\in\mathbb{R }^{d}. \tag{2.6}\] Then the FNN structure can be used to approximate the class of functionals \(\mathcal{L}_{\xi}\) induced by the Laplace transform of the elements in \(\mathcal{P}(\Omega)\) at \(\xi\): \[\mathcal{L}_{\xi}:\mu\mapsto\mathcal{L}(\mu)(\xi). \tag{2.7}\] The following corollary describes rates of this approximation.

**Corollary 1**.: _Let the functional \(\mathcal{L}_{\xi}\) be defined by (2.6) and (2.7), where \(\xi\in\mathbb{R}^{d}\) is a feature vector and \(B_{\xi}=\|\xi\cdot x\|_{C(\Omega)}\). Then for any \(N\in\mathbb{N}\), there exists a FNN of type \((1,2)\) and width \(2N+3\) following the structure of Definition 1 with \(F^{(j)}\), \(b^{(j)}\), \(j=1,2\), explicitly constructed such that the following approximation rates hold,_ \[\inf_{\|c\|_{\infty}\leq 4N}\left\{\sup_{\mu\in\mathcal{P}(\Omega)}\Big{|}c \cdot h^{(2)}(\mu)-\mathcal{L}_{\xi}(\mu)\Big{|}\right\}\leq\frac{2e^{B_{\xi} }+6B_{\xi}e^{B_{\xi}}}{N}. \tag{2.8}\] _The total number of free parameters in this network is \(\mathcal{N}=8N+d+12\). Furthermore, for any \(\xi\in\Omega\), the above bound can be further improved to_ \[\inf_{\|c\|_{\infty}\leq 4N}\left\{\sup_{\mu\in\mathcal{P}(\Omega)}\Big{|}c \cdot h^{(2)}(\mu)-\mathcal{L}_{\xi}(\mu)\Big{|}\right\}\leq\frac{8e}{N}. \tag{2.9}\]

### Rates of approximating composite functionals with polynomial features

Recall from [20] and [22] an important fact on the space \(\mathcal{P}_{q}^{h}(\mathbb{R}^{d})\) of homogeneous polynomials of degree \(q\) on \(\mathbb{R}^{d}\): \(\mathcal{P}_{q}^{h}(\mathbb{R}^{d})\) has a basis \(\left\{(\xi_{k}\cdot x)^{q}\right\}_{k=1}^{n_{q}}\) for some vector set \(\left\{\xi_{k}\right\}_{k=1}^{n_{q}}\subset\mathbb{R}^{d}\setminus\{0\}\), and this vector set can even be chosen in such a way that the homogeneous polynomial set \(\left\{(\xi_{k}\cdot x)^{\ell}\right\}_{k=1}^{n_{q}}\) spans the space \(\mathcal{P}_{\ell}^{h}(\mathbb{R}^{d})\) for every \(\ell\in\{1,\ldots,q-1\}\), where \(n_{q}=\left(\begin{array}{c}d-1+q\\ q\end{array}\right)\) is the dimension of \(\mathcal{P}_{q}^{h}(\mathbb{R}^{d})\). Applying this fact to a polynomial \(Q\) of degree \(q\) yields the following lemma stated in [40].

**Lemma 1**.: _Let \(d\in\mathbb{N}\) and \(q\in\mathbb{N}\). Then there exists a set \(\left\{\xi_{k}\right\}_{k=1}^{n_{q}}\subset\left\{\xi\in\mathbb{R}^{d}:|\xi|=1\right\}\) of vectors with \(\ell_{2}\)-norm \(1\) such that for any \(Q\in\mathcal{P}_{q}(\mathbb{R}^{d})\) we can find a set of coefficients \(\left\{\gamma_{k,\ell}^{Q}:k=1,\ldots,n_{q},\ell=1,\ldots,q\right\}\subset \mathbb{R}\) such that_ \[Q(x)=Q(0)+\sum_{k=1}^{n_{q}}\sum_{\ell=1}^{q}\gamma_{k,\ell}^{Q}(\xi_{k}\cdot x )^{\ell},\qquad x\in\mathbb{R}^{d}. \tag{2.10}\]

Now we consider a general class of functionals \(f\left(L_{G}^{Q}(\mu)\right)\) to be approximated, where \(G\) is of the composite form \(G(x)=g\left(Q(x)\right)\) and \(Q\) is a polynomial of degree \(q\) on \(\Omega\) with the constant \(\widehat{B}_{Q}=\|Q\|_{C(\Omega)}\).
For the polynomial \(Q\), Lemma 1 indicates that there is a vector \[\gamma_{Q}=\left[\gamma_{1,1}^{Q}\ \gamma_{1,2}^{Q}\ \ldots\ \gamma_{1,q}^{Q}\ \gamma_{2,1}^{Q}\ \gamma_{2,2}^{Q}\ \ldots\ \gamma_{2,q}^{Q}\ \ldots\ \gamma_{n_{q},1}^{Q}\ \gamma_{n_{q},2}^{Q}\ \ldots\ \gamma_{n_{q},q}^{Q}\right],\] such that \[Q(x)=Q(0)+\sum_{k=1}^{n_{q}}\sum_{\ell=1}^{q}\gamma_{k,\ell}^{Q}(\xi_{k}\cdot x )^{\ell},\qquad x\in\mathbb{R}^{d}. \tag{2.11}\] We are now ready to define the following constant \[B_{Q}=\widehat{B}_{Q}+2q\|\gamma_{Q}\|_{1}. \tag{2.12}\] For the function \(g\), we assume \(g\in C^{0,1}[-B_{Q},B_{Q}]\), the space of Lipschitz-1 functions on \([-B_{Q},B_{Q}]\), with semi-norm \(|g|_{C^{0,1}}\), and \(B_{g}=\|g\|_{C[-B_{Q},B_{Q}]}\). We can then define the second constant \[B_{G}=B_{g}+3B_{Q}|g|_{C^{0,1}} \tag{2.13}\] for use in Theorem 2 below. For the function \(f\), we assume \(f\in C^{0,\beta}[-B_{G},B_{G}]\), with semi-norm \(\left|f\right|_{C^{0,\beta}}\), and \(\|f\|_{\infty}=\|f\|_{C[-B_{G},B_{G}]}\). In this subsection, we aim at approximating the functional \[f\circ L_{G}^{Q}:\ \mu\mapsto f\left(L_{G}^{Q}(\mu)\right)=f\left(\int_{\Omega}g \left(Q(x)\right)d\mu(x)\right),\ \mu\in\mathcal{P}(\Omega). \tag{2.14}\] As in subsection 2.1, we need to first approximate the inner functional \(L_{G}^{Q}(\mu)\) on \(\mathcal{P}(\Omega)\) defined as \[L_{G}^{Q}:\mu\mapsto\int_{\Omega}g\left(Q(x)\right)d\mu(x), \tag{2.15}\] and derive the corresponding approximation rates of the composite nonlinear functional \(\mu\mapsto f\left(L_{G}^{Q}(\mu)\right)\) defined on \(\mathcal{P}(\Omega)\), with one more hidden layer used in the integral part of the FNN compared with that for the ridge functional in subsection 2.1.

**Theorem 2**.: _Let the functional \(f\circ L_{G}^{Q}\) be in the composite form (2.14), where \(Q\) is a polynomial of degree \(q\) on \(\Omega\), \(g\in C^{0,1}[-B_{Q},B_{Q}]\), and \(f\in C^{0,\beta}[-B_{G},B_{G}]\) for some \(0<\beta\leq 1\). Then for any \(N\in\mathbb{N}\), there exists a FNN of type \((2,3)\) and widths \(d_{1}=n_{q}(2N+3)\), \(d_{2}=d_{3}=2N+3\) following the structure of Definition 1 with \(F^{(j)}\), \(b^{(j)}\), \(j=1,2,3\) explicitly constructed, such that the following approximation rates hold,_ \[\inf_{\|c\|_{\infty}\leq\frac{4\|f\|_{\infty}N}{B_{G}}}\left\{\sup_{\mu\in \mathcal{P}(\Omega)}\left|c\cdot h^{(3)}(\mu)-f(L_{G}^{Q}(\mu))\right|\right\} \leq\frac{2B_{G}^{\beta}\left|f\right|_{C^{0,\beta}}+(3B_{Q}|g|_{C^{0,1}})^{ \beta}\left|f\right|_{C^{0,\beta}}}{N^{\beta}}. \tag{2.16}\] _The total number of free parameters in this network is \(\mathcal{N}=(2q+10)N+(d+q)n_{q}+3q+15\)._

The reason we construct a FNN of type \((2,3)\) again lies in the formulation of the composite functional \(f\left(L_{G}^{Q}(\mu)\right)\). The number of hidden layers in the integration part is chosen to be two since we actually need a net with two hidden layers to approximate the composite function \(g\left(Q(x)\right)\). In fact, the position of the integration w.r.t. the input distribution \(\mu\) in the FNN we construct should be consistent with that in the true functional; otherwise the derivation of an approximation rate might be unsuccessful.
Specifically, when the polynomial \(Q\) in the functional-inducing function \(g(Q(x))\) takes the special form \(Q(x)=\|x\|_{2}^{2}=x_{1}^{2}+x_{2}^{2}+\cdots+x_{d}^{2}\), the composite functional-inducing function \(g\left(Q(x)\right)\) becomes the radial function \(g\left(\|x\|_{2}^{2}\right)\) and the functional has the form \[f\circ L_{G}^{\|\cdot\|_{2}}:\mu\mapsto f\left(L_{G}^{\|\cdot\|_{2}}(\mu) \right)=f\left(\int_{\Omega}g\left(\|x\|_{2}^{2}\right)d\mu(x)\right). \tag{2.17}\] Since we can write \[Q(x)=\|x\|_{2}^{2}=\sum_{k=1}^{d}\left(e_{k}\cdot x\right)^{2}\] in terms of the standard basis \(\{e_{k}\}_{k=1}^{d}\) of \(\mathbb{R}^{d}\), and also \(\widehat{B}_{Q}=\|Q\|_{C(\Omega)}\leq 1\), we have \(B_{Q}=\widehat{B}_{Q}+2q\|\gamma_{Q}\|_{1}\leq 1+4d\). We are then able to derive the following rates of approximating \(f\circ L_{G}^{\|\cdot\|_{2}}\) by utilizing Theorem 2 directly.

**Corollary 2**.: _Let the functional \(f\circ L_{G}^{\|\cdot\|_{2}}\) be in the composite form (2.17), where \(g\in C^{0,1}[-B_{Q},B_{Q}]\), and \(f\in C^{0,\beta}[-B_{G},B_{G}]\), for some \(0<\beta\leq 1\). Then for any \(N\in\mathbb{N}\), there exists a FNN of type \((2,3)\) and widths \(d_{1}=(2N+3)d\), \(d_{2}=d_{3}=2N+3\) following the structure of Definition 1, such that the following approximation rates hold,_ \[\inf_{\|c\|_{\infty}\leq\frac{4\|f\|_{\infty}N}{B_{G}}}\left\{\sup_{\mu\in \mathcal{P}(\Omega)}\left|c\cdot h^{(3)}(\mu)-f\left(L_{G}^{\|\cdot\|_{2}}( \mu)\right)\right|\right\}\leq\frac{2B_{G}^{\beta}\left|f\right|_{C^{0,\beta} }+(12d+3)^{\beta}|g|_{C^{0,1}}^{\beta}\left|f\right|_{C^{0,\beta}}}{N^{\beta}}. \tag{2.18}\] _The total number of free parameters in this network is \(\mathcal{N}=12N+d^{2}+d+18\)._

### Distribution regression with FNN

In this subsection, we conduct generalization analysis of the commonly used empirical risk minimization (ERM) algorithm for distribution regression based on FNNs, which measures the generalization ability of the truncated empirical target function of the ERM algorithm. We start with a formulation of the distribution regression model in this paper. First we recall some concepts on the Wasserstein metric and the corresponding Wasserstein space. Such a space will serve as the basic underlying space for our regression analysis. Now let \((\Omega,d)\) be a complete, separable metric (Polish) space. Let \(p\in[1,\infty)\). For two probability measures \(\mu\) and \(\nu\) on \(\Omega\), the Wasserstein metric of order \(p\) is defined as \[W_{p}(\mu,\nu) = \left(\inf_{\pi\in\Pi(\mu,\nu)}\int_{\Omega\times\Omega}d(x,y)^{p }d\pi(x,y)\right)^{1/p}\] \[= \inf\left\{\left[\mathbb{E}d(X,Y)^{p}\right]^{\frac{1}{p}},\quad \mathrm{law}(X)=\mu,\quad\mathrm{law}(Y)=\nu\right\},\] where the infimum is taken over all pairs of random vectors \(X\) and \(Y\) marginally distributed as \(\mu\) and \(\nu\) (\(X\sim\mu\), \(Y\sim\nu\)). In the above definition, \(\pi\) is also called a transference plan from \(\mu\) to \(\nu\) satisfying \(\int_{\Omega}d\pi(x,\cdot)=d\mu(x),\ \int_{\Omega}d\pi(\cdot,y)=d\nu(y)\), and \(\Pi(\mu,\nu)\) is used to denote all such transference plans. For \(W_{p}\) defined above, the earlier literature has shown that it satisfies the basic axioms of a metric [34]. As a classical example, the distance \(W_{1}\), often called the Kantorovich-Rubinstein metric, has played an important role in recent statistical learning problems [39, 38]. Now we consider a working space that is convenient for the study of our distribution regression model with FNNs.
The space is the so-called Wasserstein space. Denote the collection of all Borel probability measures on \(\Omega\) by \(\mathcal{P}(\Omega)\). Let \(x_{0}\) be a fixed but arbitrarily chosen point in \(\Omega\); the Wasserstein space of order \(p\) is defined as \[\mathcal{P}_{p}(\Omega)=\left\{\mu\in\mathcal{P}(\Omega):\quad\int_{\Omega}d \left(x_{0},x\right)^{p}d\mu(x)<+\infty\right\}.\] This space does not depend on the choice of the point \(x_{0}\), and \(W_{p}\) defines a (finite) metric on \(\mathcal{P}_{p}(\Omega)\). It can be found in [34] that, for any fixed \(p\in[1,\infty)\), the metric space \((\mathcal{P}_{p}(\Omega),W_{p})\) is a compact metric space when \((\Omega,d)\) is a compact subset of \(\mathbb{R}^{d}\) equipped with the standard Euclidean metric \(d(x,y)=\|x-y\|_{2}\) as in the aforementioned subsection. On the other hand, it is easy to see that when \(d(x,y)=\|x-y\|_{2}\), for any \(\mu\in\mathcal{P}(\Omega)\), there holds \[\int_{\Omega}\|x_{0}-x\|_{2}^{p}\,d\mu(x)<+\infty\] when \(\Omega\) is compact. Hence, under this setting, we have \(\mathcal{P}_{p}(\Omega)=\mathcal{P}(\Omega)\) and the metric space \((\mathcal{P}(\Omega),W_{p})\) is compact. In the rest of the paper, \(\Omega\) denotes a compact space equipped with the standard Euclidean metric. For fixed \(p\in[1,\infty)\), which will be selected depending on the learning situation, the distribution regression framework will be considered in such a compact metric space \((\mathcal{P}(\Omega),W_{p})\). Inspired by the two-stage regression scheme for distribution regression, we propose our distribution regression model in the Wasserstein space \((\mathcal{P}(\Omega),W_{p})\). The first-stage data set \(D=\{(\mu_{i},y_{i})\}_{i=1}^{m}\) is i.i.d. sampled from an unknown meta distribution that is a Borel probability measure \(\rho\) on \(\mathcal{Z}=\mathcal{U}\times\mathcal{Y}\), where \(\mathcal{U}=(\mathcal{P}(\Omega),W_{p})\) is the input metric space of Borel probability measures on \(\Omega\) with the Wasserstein metric, and \(\mathcal{Y}=[-M,M]\) is the output space with some \(M>0\). The regression function \(f_{\rho}\) on \(\mathcal{U}\) is defined as \[f_{\rho}(\mu)=\int_{\mathcal{Y}}yd\rho(y|\mu), \tag{2.19}\] where \(\rho(y|\mu)\) is the conditional distribution at \(\mu\) induced by \(\rho\), and it minimizes the mean squared error \[\mathcal{E}(f)=\int_{\mathcal{Z}}(f(\mu)-y)^{2}d\rho.\] We denote \(\rho_{\mathcal{U}}\) as the marginal distribution of \(\rho\) on \(\mathcal{U}\), and \(\left(L^{2}_{\rho_{\mathcal{U}}},\|\cdot\|_{\rho}\right)\) as the space of square integrable functions with respect to \(\rho_{\mathcal{U}}\). For convenience of later representation, for a continuous function \(f\) defined on the compact space \((\mathcal{P}(\Omega),W_{p})\), we also use the notation \[\|f\|_{\infty}=\sup_{\mu\in\mathcal{P}(\Omega)}|f(\mu)|. \tag{2.20}\] The hypothesis space we use for the ERM algorithm follows the 3-layer FNN structure constructed in the proof of Theorem 2, with a positive constant \(R\) depending on \(d,Q,f,g\) associated with the fixed functional \(f\circ L^{Q}_{G}\) to bound the parameters in the FNN, given by \[\mathcal{H}_{(2,3),R,N}=\left\{c\cdot h^{(3)}(\mu):\|F^{(j)}\|_{\infty}\leq RN ^{2},\|b^{(j)}\|_{\infty}\leq R,\text{ for }j=1,2,3,\|c\|_{\infty}\leq RN\right\}, \tag{2.21}\] where the \(\|\cdot\|_{\infty}\) norm for matrices is the maximum value of the \(\ell_{1}\)-norms of their rows.
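In computational terms, membership in the hypothesis space (2.21) is just a set of norm checks on the network parameters. A minimal sketch, assuming NumPy; the parameter shapes are illustrative.

```python
# Checking the constraints of the hypothesis space (2.21), assuming NumPy.
import numpy as np

def max_row_l1(F):
    # the ||.||_inf norm for matrices used in (2.21): max l1-norm over rows
    return np.abs(F).sum(axis=1).max()

def in_hypothesis_space(Fs, bs, c, R, N):
    """True iff ||F^(j)||_inf <= R N^2, ||b^(j)||_inf <= R, and ||c||_inf <= R N."""
    return (all(max_row_l1(F) <= R * N**2 for F in Fs)
            and all(np.abs(b).max() <= R for b in bs)
            and np.abs(c).max() <= R * N)
```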
For the hypothesis space \(\mathcal{H}_{(2,3),R,N}\), one of the most fundamental problems is whether it is well-defined as a compact hypothesis space for the learning theory of our distribution regression model. In the next theorem, we give a positive answer by rigorously proving the compactness of the space \(\mathcal{H}_{(2,3),R,N}\), showing that \(\mathcal{H}_{(2,3),R,N}\) can actually be used as the hypothesis space of the distribution regression model (1.4). We use \((C(\mathcal{P}(\Omega)),\|\cdot\|_{\infty})\) to denote the space of continuous functions on \((\mathcal{P}(\Omega),W_{p})\) equipped with the infinity norm defined in (2.20).

**Theorem 3**.: _The space \(\mathcal{H}_{(2,3),R,N}\) is a compact metric subspace of the space \((C(\mathcal{P}(\Omega)),\|\cdot\|_{\infty})\)._

Recall that for a subset \(\mathcal{H}\) of a normed space equipped with a norm \(\|\cdot\|\), the \(\epsilon\)-covering number \(\mathcal{N}(\mathcal{H},\epsilon,\|\cdot\|)\) is the minimum number of balls with radius \(\epsilon>0\) that cover \(\mathcal{H}\). After showing the compactness of \(\mathcal{H}_{(2,3),R,N}\), the covering number \[\mathcal{N}(\mathcal{H}_{(2,3),R,N},\epsilon,\|\cdot\|_{\infty})\] makes sense, is finite, and acts as one of the main objects in establishing our other main results. Estimates of \(\mathcal{N}(\mathcal{H}_{(2,3),R,N},\epsilon,\|\cdot\|_{\infty})\) will be given in subsection 4.4. Since, for the distribution regression model, the concrete information of the first-stage distribution samples is unavailable, we can only observe the second-stage data \(\{(\{x_{i,j}\}_{j=1}^{n_{i}},y_{i})\}_{i=1}^{m}\), where \(\{x_{i,j}\in\Omega\}_{j=1}^{n_{i}}\) are i.i.d. sampled from the probability distribution \(\{\mu_{i}\}_{i=1}^{m}\), so we have to use the empirical distribution \[\hat{\mu}_{i}^{n_{i}}=\frac{1}{n_{i}}\sum_{j=1}^{n_{i}}\delta_{x_{i,j}}\] as the input of our FNN structure. Denote the empirical error with respect to the second-stage data \(\hat{D}\) as \[\mathcal{E}_{\hat{D}}(f):=\frac{1}{m}\sum_{i=1}^{m}\left(f(\hat{\mu}_{i}^{n_{i }})-y_{i}\right)^{2};\] then the empirical target function from the ERM algorithm using the hypothesis space (2.21) is the function in \(\mathcal{H}_{(2,3),R,N}\) that minimizes the empirical error: \[f_{\hat{D},R,N}:=\arg\min_{f\in\mathcal{H}_{(2,3),R,N}}\mathcal{E}_{\hat{D}}(f). \tag{2.22}\] The compactness of the hypothesis space \(\mathcal{H}_{(2,3),R,N}\) proved in Theorem 3 ensures the existence of a minimizer \(f_{\hat{D},R,N}\) of the above variational problem, and then the learning theory of the above distribution regression scheme makes sense in such a FNN-induced space. We now define the projection operator \(\pi_{M}\) on the space of functionals \(f:\mathcal{P}(\Omega)\to\mathbb{R}\) as \[\pi_{M}(f)(\mu)=\begin{cases}M,&\text{ if }f(\mu)>M,\\ -M,&\text{ if }f(\mu)<-M,\\ f(\mu),&\text{ if }-M\leq f(\mu)\leq M.\end{cases}\] Since the regression function \(f_{\rho}\) is bounded by \(M\), we use the truncated empirical target function \[\pi_{M}f_{\hat{D},R,N}\] as the final estimator. We first derive an oracle inequality for the distribution regression scheme (2.22), which is different from those for the traditional one-stage statistical regression or kernel-based two-stage distribution regression.
The oracle inequality for the traditional regression framework in [28, Lemma 4] fails in the distribution regression framework, since the empirical target function of the ERM algorithm is learned from the second-stage data rather than the first-stage data. We utilize a novel two-stage error decomposition method for distribution regression by including the empirical error of the first-stage sample \[\mathcal{E}_{D}(f):=\frac{1}{m}\sum_{i=1}^{m}\left(f(\mu_{i})-y_{i}\right)^{2}\] as an intermediate term of the error decomposition. Such a two-stage error decomposition is new in the literature of learning theory. We present this error decomposition in the following proposition.

**Proposition 1**.: _Let \(\mathcal{H}=\mathcal{H}_{(2,3),R,N}\). Then for any \(h\in\mathcal{H}\) and \(f_{\hat{D},R,N}\) defined in (2.22),_ \[\mathcal{E}\left(\pi_{M}f_{\hat{D},R,N}\right)-\mathcal{E}\left(f_ {\rho}\right)\leq\mathcal{E}\left(\pi_{M}f_{\hat{D},R,N}\right)-\mathcal{E}_{D }\left(\pi_{M}f_{\hat{D},R,N}\right)+\mathcal{E}_{D}\left(\pi_{M}f_{\hat{D},R,N }\right)\] \[-\mathcal{E}_{\hat{D}}\left(\pi_{M}f_{\hat{D},R,N}\right)+ \mathcal{E}_{\hat{D}}\left(h\right)-\mathcal{E}_{D}\left(h\right)+\mathcal{E}_ {D}\left(h\right)-\mathcal{E}(h)+\mathcal{E}(h)-\mathcal{E}(f_{\rho}),\] _which can be further bounded by \(I_{1}(D,\mathcal{H})+I_{2}(D,\mathcal{H})+\left|I_{3}(\hat{D},\mathcal{H}) \right|+\left|I_{4}(\hat{D},\mathcal{H})\right|+R(\mathcal{H})\), in which_ \[I_{1}(D,\mathcal{H}) =\left\{\mathcal{E}\left(\pi_{M}f_{\hat{D},R,N}\right)-\mathcal{E }\left(f_{\rho}\right)\right\}-\left\{\mathcal{E}_{D}\left(\pi_{M}f_{\hat{D}, R,N}\right)-\mathcal{E}_{D}\left(f_{\rho}\right)\right\},\] \[I_{2}(D,\mathcal{H}) =\left\{\mathcal{E}_{D}\left(h\right)-\mathcal{E}_{D}\left(f_{ \rho}\right)\right\}-\left\{\mathcal{E}(h)-\mathcal{E}\left(f_{\rho}\right) \right\},\] \[I_{3}(\hat{D},\mathcal{H}) =\mathcal{E}_{D}\left(\pi_{M}f_{\hat{D},R,N}\right)-\mathcal{E}_{ \hat{D}}\left(\pi_{M}f_{\hat{D},R,N}\right),\] \[I_{4}(\hat{D},\mathcal{H}) =\mathcal{E}_{\hat{D}}\left(h\right)-\mathcal{E}_{D}\left(h\right),\qquad R(\mathcal{H})=\mathcal{E}(h)-\mathcal{E}(f_{\rho}).\]

Based on the two-stage error decomposition, we derive a new oracle inequality for the distribution regression scheme using our hypothesis space \(\mathcal{H}_{(2,3),R,N}\) in the following theorem.

**Theorem 4**.: _Consider the distribution regression framework described above with first-stage sample size \(m\) and second-stage sample size \(n_{1}=n_{2}=\cdots=n_{m}=n\)._
_Then for any \(h\in\mathcal{H}_{(2,3),R,N}\) and \(\epsilon>0\), we have_ \[\text{Prob}\left\{\left\|\pi_{M}f_{\hat{D},R,N}-f_{\rho}\right\|_ {\rho}^{2}>2\left\|h-f_{\rho}\right\|_{\rho}^{2}+8\epsilon\right\}\] \[\leq \exp\left\{T_{1}N\log\frac{16M\widehat{R}}{\epsilon}+T_{2}N\log N -\frac{3m\epsilon}{2048M^{2}}\right\}\] \[+ \exp\left\{-\frac{m\epsilon^{2}}{2\left(3M+\left\|h\right\|_{ \infty}\right)^{2}\left(\left\|h-f_{\rho}\right\|_{\rho}^{2}+\frac{2}{3} \epsilon\right)}\right\}\] \[+ \exp\left\{\log 4m+T_{1}N\log\frac{80M\widehat{R}R^{2}N^{4}}{ \epsilon}+T_{2}N\log N-\frac{n\epsilon^{2}}{115200\max\{\left\|h\right\|_{ \infty}^{2},M^{2}\}R^{8}N^{16}}\right\},\] _where \(R\), \(\widehat{R}\) are constants depending on \(d,Q,g,f\), and \(T_{1}\), \(T_{2}\) are constants depending on \(d,Q\) given explicitly in the proof._

Utilizing the above oracle inequality for distribution regression, we are able to achieve almost optimal learning rates for the proposed distribution regression scheme on the hypothesis space \(\mathcal{H}_{(2,3),R,N}\) that includes the FNN structure we construct.

**Theorem 5**.: _Let \(f_{\rho}=f\circ L_{G}^{Q}\) be in the composite form (2.14), where \(Q\) is a polynomial of degree \(q\) on \(\Omega\), \(g\in C^{0,1}[-B_{Q},B_{Q}]\), and \(f\in C^{0,\beta}[-B_{G},B_{G}]\) for some \(0<\beta\leq 1\). If the first-stage sample size \(m\) satisfies the restriction that_ \[\log\left(4m\right)\leq A_{6}m^{\frac{1}{2\beta+1}},\] _and the neural network parameter \(N\) and the second-stage sample size \(n\) are chosen by_ \[N=\left[A_{4}m^{\frac{1}{2\beta+1}}\right],\quad n\geq\left\lceil A_{5}m^{\frac{4 \beta+17}{2\beta+1}}\right\rceil,\] _then there exists a constant \(A_{7}\) such that_ \[\mathbb{E}\left\{\mathcal{E}\left(\pi_{M}f_{\hat{D},R,N}\right)-\mathcal{E} \left(f_{\rho}\right)\right\}\leq A_{7}m^{-\frac{2\beta}{2\beta+1}}\log m,\] _where \(A_{4},A_{5},A_{6}\) and \(A_{7}\) are constants depending on \(d,Q,g,f\) given explicitly in the proof._

Notice that in the proofs of Theorem 4 and Theorem 5 we assume that all \(n_{i}=n\) for simplicity, but the proof still works for different \(n_{i}\), in which case we just need to replace \(n\) by \(\min_{i}\{n_{i}\}\) in the proof; this also stresses the advantage of using distributions rather than vectors as inputs of FNNs for the distribution regression problem. We end this subsection by providing an intrinsic fact: the composite functional \(f\left(L_{G}(\mu)\right)\) is continuous w.r.t. the probability variable \(\mu\). We put this fact in the following proposition, and the proof will be given in the appendix.

**Proposition 2**.: _For any fixed \(p\in[1,\infty)\), the composite functional \(f\left(L_{G}(\mu)\right)\) with distribution variable \(\mu\) is continuous on the metric space \(\left(\mathcal{P}(\Omega),W_{p}\right)\)._

## 3 Related work and discussion

In this section, we review some related works on the approximation and generalization performance of fully connected neural networks for regression with vector inputs, and some research on distribution regression, to demonstrate the novelty and superiority of our results. Over the past thirty years, there have been many works analyzing the approximation and generalization ability of fully connected neural networks for learning various classes of functions, among which the Hölder spaces are one of the most important classes of functions to be studied.
Early on, most results on the approximation ability of fully connected neural networks were derived for \(C^{\infty}\) activation functions satisfying two assumptions: for some \(b\in\mathbb{R}\), \(\sigma^{(k)}(b)\neq 0\) for every non-negative integer \(k\); and for some integer \(q\neq 1\), \(\lim_{u\rightarrow-\infty}\sigma(u)/|u|^{q}=0\) and \(\lim_{u\rightarrow\infty}\sigma(u)/u^{q}=1\). Such approximation rates can be found in [24] through a localized Taylor expansion method. It was shown that for any \(f\in W_{\infty}^{\beta}\left([-1,1]^{d}\right)\), there exists a net \(f_{N}\) such that \(\left\|f_{N}-f\right\|_{C([-1,1]^{d})}\leq c_{f,d,\beta}N^{-\beta/d}\), with a constant \(c_{f,d,\beta}\) independent of \(N\). ReLU proved more efficient and effective in deep learning [15] and thus became increasingly popular in practice; it is therefore desirable to find approximation results for deep ReLU neural networks. Such approximation results are derived in [36]: any \(f\in W_{\infty}^{\beta}\left([-1,1]^{d}\right)\) can be approximated with error \(\epsilon\) by using at most \(c\left(\log 1/\epsilon+1\right)\) layers and at most \(c\epsilon^{-d/\beta}\left(\log 1/\epsilon+1\right)\) weights and computation units. Based on these approximation results, generalization results for ERM algorithms using fully connected neural networks are then derived in [5] and [28], achieving optimal convergence rates in the traditional regression framework. However, these results are all for the traditional regression problem; the vector-input FNN does not work for the distribution regression framework with two stages of sampling, since different second-stage sample sizes result in different input-vector dimensions if we simply vectorize all the second-stage samples. Furthermore, the generalization analysis for the traditional regression problem also fails in distribution regression, since the ERM algorithm is based on the second-stage data \(\hat{D}\) rather than the first-stage data \(D\), calling for a new method to bound the concentration error between the generalization error and the second-stage empirical error in the two-stage sampling distribution regression framework. To overcome the limitation of vector-input neural network structures for the distribution regression framework, we propose a novel FNN structure that is able to take the empirical distribution as the input for the learning of distribution regression in practice. The idea of constructing the FNN structure with distribution input is slightly related to the work of [42], in which they transform the problem of learning symmetric functions to the functional learning problem with distribution input in a Wasserstein space, and construct a shallow net \[f(\mu)=\frac{1}{m^{\prime}}\sum_{j^{\prime}=1}^{m^{\prime}}b_{j^{\prime}} \tilde{\sigma}\left(\frac{1}{m}\sum_{j=1}^{m}c_{j^{\prime},j}\int\sigma_{ \alpha}\left(\left\langle w_{j^{\prime},j},\tilde{x}\right\rangle\right)d \left(\mu(x)\right)\right),\quad\tilde{x}=[x,R]^{T}. \tag{3.1}\] The architecture of the FNN for functional approximation and distribution regression we construct (Definition 1) is more general than (3.1) and can be deeper, which enables us to learn more complex functionals.
Utilizing our novel FNN structure, we construct a three-layer FNN to derive the approximation rate \(O\left(N^{-\beta}\right)\) for learning the functional \(f\left(L_{G}^{Q}(\mu)\right)\) induced by the polynomial \(Q\) and Hölder functions \(f,g\), which is the same as the rate of learning the composite function \(f\left(g\left(Q(x)\right)\right)\) itself without taking into consideration the input of probability measures [28]. Actually, our FNN structure has the potential to learn functionals induced by longer compositions of functions, where we just need to construct a deep FNN with the number of layers determined by the number of composite functions that induce the functionals. Furthermore, we utilize a novel error decomposition method to derive an oracle inequality for the distribution regression framework with two-stage sampling, and then combine it with the approximation rate to derive an almost optimal learning rate \(O\left(m^{-\frac{2\beta}{2\beta+1}}\log m\right)\) as in [5] and [28]. Let us review some related works on distribution regression in the literature of learning theory [6, 31]. The main recent works on two-stage distribution regression include [33], [13], [11], [12], [25] and [37]. These works mainly rely on a mean embedding technique to transform probability measures to a space consisting of all mean embeddings via some Mercer kernel. In this theoretical model, the learning theory framework is in fact performed on the space of all mean embeddings, which forms a compact subspace of the continuous function space defined on the underlying space of the probability measures. To derive nice learning rates, the works of [33], [17], [13], [11], [12] and [37] rely deeply on kernel regularization and the corresponding integral operator technique. In fact, to the best of our knowledge, learning rates for distribution regression are unexplored in non-kernel or non-regularized settings where the integral operator approach cannot always be applied. Moreover, most of the classical kernel-based learning theory approaches such as [30], [3], [17], and [16] fail in the deep learning setting, and such classical techniques are often inappropriate for modern fully connected neural network structures. One of the main reasons is that the structure of FNN-based hypothesis spaces is totally different from that of the kernel-based hypothesis space. Hence, we mainly face three difficulties in this work. The first is how to provide an appropriate FNN structure with distribution inputs and a meaningful distribution regression model for the hypothesis space induced by the FNN. The second is how to establish approximation results without utilizing the traditional kernel methods and their related integral operator approaches used in existing works on distribution regression. The third is how to realize distribution regression without using regularization in a novel hypothesis space that relies on FNN structures and derive optimal learning rates, since there is currently no theoretical result on distribution regression derived in such a setting. In this paper, we overcome the aforementioned difficulties and establish a learning theory framework of distribution regression with neural networks. First, some new functional approximation results are proposed. Then we provide the proof of the crucial fact that the space \(\mathcal{H}_{(2,3),R,N}\) is compact.
In contrast to the existing works on distribution regression, the underlying hypothesis space we define possesses nice FNN learning structures. Furthermore, though the classical effective-dimension-based analysis related to kernel methods in [33], [13], [25] and [37] fails in our setting, new estimates based on the covering number of the newly defined hypothesis space \(\mathcal{H}_{(2,3),R,N}\) are obtained. In contrast to previous works utilizing the classical error decomposition for kernel-based regression, such as some earlier works [6, 35, 7] and recent developments [14, 19], another technical novelty is the crucial two-stage error decomposition, which appears here for the first time in the learning theory literature. Based on the functional approximation results, covering number estimates and the two-stage error decomposition, almost optimal learning rates of the proposed FNN distribution regression scheme are obtained. To the best of our knowledge, this work provides a starting point for the approximation theory of functionals with probability-measure variables using neural networks. It is also the first work to establish the learning theory of distribution regression and obtain almost optimal learning rates with neural networks. The methods in this work have the potential to be further applied to more general deep learning settings.

## 4 Proof of Main Results

### Proof of Theorem 1

In the construction of the FNN to approximate the functional, we need to use the following lemma to approximate Lipschitz continuous univariate functions using continuous piecewise linear functions (splines) \(\{\sigma(\cdot-t_{i})\}_{i=1}^{2N+3}\), with \(t_{i}=-1+\frac{i-2}{N}\), which can be found in [40] and [23].

**Lemma 2**.: _For \(N\in\mathbb{N}\), let \(\mathbf{t}=\left\{t_{i}:=-1+\frac{i-2}{N}\right\}_{i=1}^{2N+3}\) be the uniform mesh on \(\left[-1-\frac{1}{N},\,1+\frac{1}{N}\right]\), and let \(L_{\mathbf{t}}\) be the linear operator on \(C[-B,B]\) given by_ \[L_{\mathbf{t}}(g)=\frac{N}{B}\sum_{i=1}^{2N+3}\left(\mathcal{L}_{N}\left(\{g(Bt_{k})\}_{k =2}^{2N+2}\right)\right)_{i}\sigma\left(\cdot-Bt_{i}\right),\qquad g\in C[-B,B], \tag{4.1}\] _where \(\mathcal{L}_{N}:\mathbb{R}^{2N+1}\to\mathbb{R}^{2N+3}\) is a linear operator such that for \(\zeta=(\zeta_{i})_{i=2}^{2N+2}\in\mathbb{R}^{2N+1}\)_ \[\left(\mathcal{L}_{N}(\zeta)\right)_{i}=\begin{cases}\zeta_{2},&\text{for }i=1, \\ \zeta_{3}-2\zeta_{2},&\text{for }i=2,\\ \zeta_{i-1}-2\zeta_{i}+\zeta_{i+1},&\text{for }3\leq i\leq 2N+1,\\ \zeta_{2N+1}-2\zeta_{2N+2},&\text{for }i=2N+2,\\ \zeta_{2N+2},&\text{for }i=2N+3.\end{cases} \tag{4.2}\] _Then for \(g\in C^{0,\alpha}[-B,B]\) with \(0<\alpha\leq 1\), we have_ \[\left\|L_{\mathbf{t}}(g)-g\right\|_{C[-B,B]}\leq\frac{2B^{\alpha}|g|_{0,\alpha }}{N^{\alpha}},\] _where \(|g|_{0,\alpha}\) is the semi-norm of the Lipschitz-\(\alpha\) continuous function \(g\)._

_Proof of Theorem 1._ For the first hidden layer of our FNN, we aim to realize the ridge functionals \(\left\{\int_{\Omega}\sigma(\xi\cdot x-B_{\xi}t_{j})d\mu\right\}_{j}\) leading to approximating the inner functional \(L_{G}^{\xi}(\mu)\).
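Before assembling the connection matrices, a quick numerical check of Lemma 2 may be helpful. The sketch below, assuming NumPy, builds the operator \(L_{\mathbf{t}}\) from (4.1)-(4.2) and verifies the \(O(1/N)\) bound; the test function \(g=\sin\) and the choices \(B=2\), \(N=50\) are illustrative.

```python
# Numerical check of Lemma 2, assuming NumPy: L_t(g) is a piecewise linear
# quasi-interpolant of g written in the ReLU basis sigma(. - B t_i).
import numpy as np

def L_t(g, B, N, x):
    t = -1.0 + (np.arange(1, 2 * N + 4) - 2) / N      # mesh t_1, ..., t_{2N+3}
    zeta = g(B * t[1:2 * N + 2])                      # samples g(B t_k), k = 2..2N+2
    coef = np.zeros(2 * N + 3)                        # the second differences of (4.2)
    coef[0] = zeta[0]
    coef[1] = zeta[1] - 2 * zeta[0]
    coef[2:2 * N + 1] = zeta[:-2] - 2 * zeta[1:-1] + zeta[2:]
    coef[2 * N + 1] = zeta[-2] - 2 * zeta[-1]
    coef[2 * N + 2] = zeta[-1]
    return (N / B) * (coef * np.maximum(x[:, None] - B * t, 0.0)).sum(axis=1)

B, N = 2.0, 50
x = np.linspace(-B, B, 2001)
err = np.abs(L_t(np.sin, B, N, x) - np.sin(x)).max()
print(err)   # far below the guaranteed bound 2 B |g|_{0,1} / N = 0.08
```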
Take the connection matrix and bias vector as \[F^{(1)}=\left[\begin{array}{c}\xi^{T}\\ \xi^{T}\\ \vdots\\ \xi^{T}\end{array}\right]\in\mathbb{R}^{(2N+3)\times d},\quad b^{(1)}=B_{\xi} \left[\begin{array}{c}t_{1}\\ t_{2}\\ \vdots\\ t_{2N+3}\end{array}\right]\in\mathbb{R}^{2N+3};\] then the output of the first hidden layer \(h^{(1)}\in\mathbb{R}^{2N+3}\) is \[\left(h^{(1)}(\mu)\right)_{j}=\int_{\Omega}\sigma\left(\xi\cdot x-B_{\xi}t_{j }\right)d\mu(x),\quad j=1,2,\ldots,2N+3. \tag{4.3}\] The free parameters in the first hidden layer come only from \(\xi\) and \(B_{\xi}t_{j}\), thus the number of free parameters in this layer is \[\mathcal{N}_{1}=d+2N+3.\] The second hidden layer of our FNN aims to realize the ridge functions \(\left\{\sigma(\widetilde{\mathcal{S}}(\mu)-B_{G}t_{j})\right\}_{j}\) leading to approximating \(f\left(\widetilde{\mathcal{S}}(\mu)\right)\), where \(\widetilde{\mathcal{S}}(\mu)\) is the approximation of \(L_{G}^{\xi}(\mu)\) by a linear combination of \(h^{(1)}(\mu)\), and \(B_{G}\) is the upper bound of \(\left|\widetilde{\mathcal{S}}(\mu)\right|\). From Lemma 2, we know that there exists a linear combination of \(h^{(1)}(\mu)\) that approximates \(L_{G}^{\xi}(\mu)\); denote this approximation by \[\widetilde{\mathcal{S}}(\mu):=\frac{N}{B_{\xi}}\sum_{j=1}^{2N+3}\left(\mathcal{ L}_{N}\left(\left\{g\left(B_{\xi}t_{k}\right)\right\}_{k=2}^{2N+2}\right) \right)_{j}\int_{\Omega}\sigma\left(\xi\cdot x-B_{\xi}t_{j}\right)d\mu(x).\] Then, according to Lemma 2, we have \[\sup_{\mu\in\mathcal{P}(\Omega)}\left|\widetilde{\mathcal{S}}(\mu)-L _{G}^{\xi}(\mu)\right|\] \[\leq \sup_{\mu\in\mathcal{P}(\Omega)}\left|\frac{N}{B_{\xi}}\sum_{j=1}^{ 2N+3}\left(\mathcal{L}_{N}\left(\left\{g\left(B_{\xi}t_{k}\right)\right\}_{k=2 }^{2N+2}\right)\right)_{j}\int_{\Omega}\sigma\left(\xi\cdot x-B_{\xi}t_{j} \right)d\mu(x)-\int_{\Omega}g(\xi\cdot x)d\mu(x)\right|\] \[\leq \sup_{\mu\in\mathcal{P}(\Omega)}\int_{\Omega}\left\|\frac{N}{B_{ \xi}}\sum_{j=1}^{2N+3}\left(\mathcal{L}_{N}\left(\left\{g\left(B_{\xi}t_{k} \right)\right\}_{k=2}^{2N+2}\right)\right)_{j}\sigma\left(\xi\cdot x-B_{\xi}t _{j}\right)-g(\xi\cdot x)\right\|_{C(\Omega)}d\mu(x)\] \[\leq \frac{2B_{\xi}|g|_{0,1}}{N},\] which also implies the bound on \(\left|\widetilde{\mathcal{S}}(\mu)\right|\), \[\sup_{\mu\in\mathcal{P}(\Omega)}\left|\widetilde{\mathcal{S}}(\mu)\right|\leq \sup_{\mu\in\mathcal{P}(\Omega)}\left|L_{G}^{\xi}(\mu)\right|+\sup_{\mu\in \mathcal{P}(\Omega)}\left|\widetilde{\mathcal{S}}(\mu)-L_{G}^{\xi}(\mu) \right|\leq B_{g}+2B_{\xi}|g|_{0,1}=B_{G}.\] Then we take the connection matrix and bias vector of the second hidden layer as \[F^{(2)}=\frac{N}{B_{\xi}}\left[\begin{array}{c}\mathcal{L}_{N}^{T}\left( \left\{g\left(B_{\xi}t_{k}\right)\right\}_{k=2}^{2N+2}\right)\\ \mathcal{L}_{N}^{T}\left(\left\{g\left(B_{\xi}t_{k}\right)\right\}_{k=2}^{2N+ 2}\right)\\ \vdots\\ \mathcal{L}_{N}^{T}\left(\left\{g\left(B_{\xi}t_{k}\right)\right\}_{k=2}^{2N+ 2}\right)\end{array}\right]\in\mathbb{R}^{(2N+3)\times(2N+3)},\quad b^{(2)}=B_ {G}\left[\begin{array}{c}t_{1}\\ t_{2}\\ \vdots\\ t_{2N+3}\end{array}\right]\in\mathbb{R}^{2N+3}.\] Therefore the output of the second hidden layer \(h^{(2)}(\mu)\in\mathbb{R}^{2N+3}\) is \[\left(h^{(2)}(\mu)\right)_{j}=\sigma\left(\widetilde{\mathcal{S}}(\mu)-B_{G} t_{j}\right),\quad j=1,2,\ldots,2N+3.
\tag{4.4}\] The free parameters in the second hidden layer come only from \(\frac{N}{B_{\xi}}\mathcal{L}_{N}\left(\left\{g\left(B_{\xi}t_{k}\right)\right\} _{k=2}^{2N+2}\right)\) and \(B_{G}t_{j}\), thus the number of free parameters in this layer is \[\mathcal{N}_{2}=2N+3+2N+3=4N+6.\] Finally, choose \(c=\frac{N}{B_{G}}\mathcal{L}_{N}\left(\left\{f\left(B_{G}t_{k}\right)\right\}_{ k=2}^{2N+2}\right)\in\mathbb{R}^{2N+3}\); according to Lemma 2, we have \[\sup_{\mu\in\mathcal{P}(\Omega)}\left|c\cdot h^{(2)}(\mu)-f(\widetilde{ \mathcal{S}}(\mu))\right|\leq\frac{2B_{G}^{\beta}|f|_{C^{0,\beta}}}{N^{\beta}}. \tag{4.5}\] Then we can derive the approximation rate of \(f(L_{G}^{\xi}(\mu))\) with our constructed FNN by combining (4.4) and (4.5), that is, \[\sup_{\mu\in\mathcal{P}(\Omega)}\left|c\cdot h^{(2)}(\mu)-f(L_{G }^{\xi}(\mu))\right|\] \[\leq\sup_{\mu\in\mathcal{P}(\Omega)}\left|c\cdot h^{(2)}(\mu)-f( \widetilde{\mathcal{S}}(\mu))\right|+\left|f\right|_{C^{0,\beta}}\sup_{\mu\in \mathcal{P}(\Omega)}\left(\left|L_{G}^{\xi}(\mu)-\widetilde{\mathcal{S}}(\mu )\right|\right)^{\beta} \tag{4.6}\] \[\leq\frac{2B_{G}^{\beta}\left|f\right|_{C^{0,\beta}}}{N^{\beta}}+ \left|f\right|_{C^{0,\beta}}\left(\frac{2B_{\xi}|g|_{C^{0,1}}}{N}\right)^{ \beta}=\frac{2B_{G}^{\beta}\left|f\right|_{C^{0,\beta}}+\left|f\right|_{C^{0, \beta}}(2B_{\xi}|g|_{C^{0,1}})^{\beta}}{N^{\beta}}.\] From the expression of the coefficient vector \(c=\frac{N}{B_{G}}\mathcal{L}_{N}\left(\{f\left(B_{G}t_{k}\right)\}_{k=2}^{2N+2}\right)\), we know \(\|c\|_{\infty}\leq\frac{4\|f\|_{\infty}N}{B_{G}}\), and hence (2.5) holds. The free parameters of our constructed FNN come from the first and second hidden layers and the coefficients \(c\); therefore, the total number of free parameters is \[\mathcal{N}=\mathcal{N}_{1}+\mathcal{N}_{2}+2N+3=8N+d+12. \tag{4.7}\] This completes the proof of Theorem 1.

Proof of Corollary 1.: First we define the function \[g:[-B_{\xi},B_{\xi}]\rightarrow\mathbb{R},\ x\mapsto e^{-x}.\] Then we know that \(\left|g\right|_{C^{0,1}}=e^{B_{\xi}}\) and \(B_{g}=\|g\|_{C[-B_{\xi},B_{\xi}]}=e^{B_{\xi}}\). The corresponding \(B_{G}\) in (2.2) can then be explicitly written as \[B_{G}=B_{g}+2B_{\xi}|g|_{C^{0,1}}=e^{B_{\xi}}+2B_{\xi}e^{B_{\xi}}.\] Define a function \(f\) as the identity on the interval \[[-(e^{B_{\xi}}+2B_{\xi}e^{B_{\xi}}),e^{B_{\xi}}+2B_{\xi}e^{B_{\xi}}],\] which satisfies \(\left|f\right|_{C^{0,1}}=1\) and \(\|f\|_{\infty}=e^{B_{\xi}}+2B_{\xi}e^{B_{\xi}}\). We know that \(\|f\|_{\infty}=B_{G}\) and the functional \(\mathcal{L}_{\xi}\) defined in (2.7) can be exactly represented by \[\mathcal{L}_{\xi}(\mu)=f\left(\int_{\Omega}g(\xi\cdot x)d\mu(x)\right)\] with \(f\), \(g\) defined as above. Then the first result follows directly from Theorem 1 by substituting \(\beta=1\) and the above bounds. The second inequality follows directly from the Schwarz inequality \(|\xi\cdot x|\leq\|\xi\|_{2}\|x\|_{2}\leq 1\), which implies that \(B_{\xi}=\|\xi\cdot x\|_{C(\Omega)}\leq 1\).

### Proof of Theorem 2

Proof of Theorem 2.: We first construct the two-hidden-layer structure in the integral part, which aims to realize an approximation of \(g(Q(x))\) for \(x\in\Omega\). The first layer of this two-layer structure in the integral part aims to realize the ridge functions \(\{\xi_{i}\cdot x-t_{j}\}_{i,j}\) leading to approximating the polynomials \(\{(\xi_{i}\cdot x)^{\ell}\}_{i,\ell}\).
Take the connection matrix as \[F^{(1)}=\left[\begin{array}{c}\mathbf{1}_{2N+3}\xi_{1}^{T}\\ \mathbf{1}_{2N+3}\xi_{2}^{T}\\ \vdots\\ \mathbf{1}_{2N+3}\xi_{n_{q}}^{T}\end{array}\right]\in\mathbb{R}^{n_{q}(2N+3) \times d},\] where \(\mathbf{1}_{2N+3}\) is the constant \(1\) vector in \(\mathbb{R}^{2N+3}\), and the bias vector \(b^{(1)}\in\mathbb{R}^{n_{q}(2N+3)}\) as \[b^{(1)}_{(k-1)(2N+3)+j}=t_{j},\quad k=1,2,\ldots,n_{q},\ j=1,2,\ldots,2N+3,\] then we have that \[\left(\sigma(F^{(1)}x-b^{(1)})\right)_{(k-1)(2N+3)+j}=\sigma\left(\xi_{k}\cdot x -t_{j}\right),\quad k=1,2,\ldots,n_{q},\ j=1,2,\ldots,2N+3.\] The free parameters in this layer are only \(\xi_{k}\) and \(t_{j}\), so the number of free parameters in the first hidden layer is \[\mathcal{N}_{1}=n_{q}d+2N+3.\] The second layer of this two-layer structure in the integral part aims to realize the ridge functions \(\left\{\sigma\left(\widetilde{Q}(x)-B_{Q}t_{j}\right)\right\}_{j}\), where \(\widetilde{Q}(x)\) is an approximation of \(Q(x)\), and \(B_{Q}\) is the upper bound of \(\left|\widetilde{Q}(x)\right|\) defined explicitly in (2.12). Then after the integral with respect to the input distribution \(\mu\), we get the output of the integral-part layer, which is then used to realize an approximation of the inner functional \(L_{G}^{Q}\). Denote \(v^{[l]}=\mathcal{L}_{N}\left(\left\{t_{j}^{l}\right\}_{j=2}^{2N+2}\right)\in \mathbb{R}^{2N+3}\), \(\mathbf{V}=\left[v^{[1]}\ v^{[2]}\ \ldots\ v^{[q]}\right]^{T}\in\mathbb{R}^{q\times(2N+3)}\), \(O\) the zero matrix of size \(q\times(2N+3)\) and \[F^{[N]}=N\left[\begin{array}{cccc}\mathbf{V}&O&\ldots&O\\ O&\mathbf{V}&\ldots&O\\ \vdots&\vdots&\vdots&\vdots\\ O&O&\ldots&\mathbf{V}\end{array}\right]\in\mathbb{R}^{qn_{q}\times n_{q}(2N+3)}.\] Then we have \[F^{[N]}\sigma\left(F^{(1)}x-b^{(1)}\right)=\left[P_{1}^{[1]}(x)\ \cdots\ P_{1}^{[q]}(x)\ P_{2}^{[1]}(x)\ \cdots\ P_{2}^{[q]}(x)\cdots P_{n_{q}}^{[1]}(x)\ \cdots\ P_{n_{q}}^{[q]}(x)\right]^{T},\] where for \(k=1,2,\ldots,n_{q}\) and \(\ell=1,2,\ldots,q\), \[P_{k}^{[\ell]}(x)=N\sum_{j=1}^{2N+3}v_{j}^{[\ell]}\sigma\left(\xi_{k}\cdot x-t _{j}\right).\] If we denote \[\gamma_{Q}=\left[\gamma_{1,1}^{Q}\ \gamma_{1,2}^{Q}\ \ldots\ \gamma_{1,q}^{Q}\ \gamma_{2,1}^{Q}\ \gamma_{2,2}^{Q}\ \ldots\ \gamma_{2,q}^{Q}\ \ldots\ \gamma_{n_{q},1}^{Q}\ \gamma_{n_{q},2}^{Q}\ \ldots\ \gamma_{n_{q},q}^{Q}\right],\] then, \[\gamma_{Q}F^{[N]}\sigma\left(F^{(1)}x-b^{(1)}\right)=\sum_{k=1}^{n_{q}}\sum_{ \ell=1}^{q}\gamma_{k,\ell}^{Q}N\sum_{j=1}^{2N+3}v_{j}^{[\ell]}\sigma\left(\xi _{k}\cdot x-t_{j}\right)\] According to Lemma 2, for \(k=1,2,\ldots,n_{q}\), and \(\ell=1,2,\ldots,q\), \[\sup_{x\in\Omega}\left|N\sum_{j=1}^{2N+3}v_{j}^{[\ell]}\sigma\left(\xi_{k} \cdot x-t_{j}\right)-\left(\xi_{k}\cdot x\right)^{\ell}\right|\leq\frac{2\ell }{N}, \tag{4.8}\] denote \[\widetilde{Q}(x)=Q(0)+\sum_{k=1}^{n_{q}}\sum_{\ell=1}^{q}\gamma_{k,\ell}^{Q}N \sum_{j=1}^{2N+3}v_{j}^{[\ell]}\sigma\left(\xi_{k}\cdot x-t_{j}\right),\] then we have \[\sup_{x\in\Omega}\left|\widetilde{Q}(x)-Q(x)\right|\leq\frac{2q\|\gamma_{Q}\|_{ 1}}{N}, \tag{4.9}\] which also implies the upper bound of \(\left|\widetilde{Q}(x)\right|\) as \[\sup_{x\in\Omega}\left|\widetilde{Q}(x)\right|\leq\sup_{x\in\Omega}|Q(x)|+\sup _{x\in\Omega}\left|\widetilde{Q}(x)-Q(x)\right|\leq\widehat{B}_{Q}+2q\|\gamma _{Q}\|_{1}=B_{Q}.\] Take the connection matrix of the second hidden layer as \[F^{(2)}=\mathbf{1}_{2N+3}\gamma_{Q}F^{[N]}\in\mathbb{R}^{(2N+3)\times n_{q}(2N +3)},\] and the bias vector \(b^{(2)}\in\mathbb{R}^{2N+3}\) as 
\[b_{j}^{(2)}=-Q(0)+B_{Q}t_{j},\quad\text{for }j=1,2,\ldots,2N+3.\] Then the output of the integral-part layer \(h^{(2)}\in\mathbb{R}^{2N+3}\) is \[\left(h^{(2)}(\mu)\right)_{j}=\int_{\Omega}\sigma\left(\widetilde{Q}(x)-B_{Q}t_{j}\right)d\mu(x),\quad\text{for }j=1,2,\ldots,2N+3. \tag{4.10}\] The free parameters in this layer are only \(\gamma_{Q},Nv^{[\ell]}\) and \(-Q(0)+B_{Q}t_{j}\), so the number of free parameters in the second hidden layer is \[\mathcal{N}_{2}=qn_{q}+q(2N+3)+2N+3=2(q+1)N+qn_{q}+3q+3.\] After the realization of the polynomial \(Q\), the rest of the proof is similar to that in the proof of Theorem 1. The third hidden layer of our FNN aims to realize the ridge functions \(\left\{\sigma(\widetilde{\mathcal{S}}(\mu)-B_{G}t_{j})\right\}_{j}\) leading to approximating \(f\left(\widetilde{\mathcal{S}}(\mu)\right)\), where \(\widetilde{\mathcal{S}}(\mu)\) is the approximation of \(L_{G}^{Q}(\mu)\) by the linear combination of \(h^{(2)}(\mu)\) defined in the following equation, and \(B_{G}\) is the upper bound of \(\left|\widetilde{\mathcal{S}}(\mu)\right|\) defined explicitly in (2.13). Denote \[\widetilde{\mathcal{S}}(\mu):=\frac{N}{B_{Q}}\sum_{j=1}^{2N+3}\left(\mathcal{L}_{N}\left(\left\{g\left(B_{Q}t_{k}\right)\right\}_{k=2}^{2N+2}\right)\right)_{j}\int_{\Omega}\sigma\left(\widetilde{Q}(x)-B_{Q}t_{j}\right)d\mu(x);\] then, utilizing the error decomposition together with Lemma 2 and (4.9), \[\sup_{\mu\in\mathcal{P}(\Omega)}\left|\widetilde{\mathcal{S}}(\mu)-L_{G}^{Q}(\mu)\right|\] \[\leq\sup_{\mu\in\mathcal{P}(\Omega)}\left|\frac{N}{B_{Q}}\sum_{j=1}^{2N+3}\left(\mathcal{L}_{N}\left(\left\{g\left(B_{Q}t_{k}\right)\right\}_{k=2}^{2N+2}\right)\right)_{j}\int_{\Omega}\sigma\left(\widetilde{Q}(x)-B_{Q}t_{j}\right)d\mu-\int_{\Omega}g(\widetilde{Q}(x))d\mu\right|\] \[+\sup_{\mu\in\mathcal{P}(\Omega)}\left|\int_{\Omega}g(\widetilde{Q}(x))d\mu-\int_{\Omega}g(Q(x))d\mu\right|\] \[\leq\frac{2B_{Q}|g|_{C^{0,1}}}{N}+|g|_{C^{0,1}}\sup_{x\in\Omega}\left|\widetilde{Q}(x)-Q(x)\right|\leq\frac{2B_{Q}|g|_{C^{0,1}}}{N}+\frac{2q\|\gamma_{Q}\|_{1}|g|_{C^{0,1}}}{N}\leq\frac{3B_{Q}|g|_{C^{0,1}}}{N}, \tag{4.11}\] where the last step uses \(2q\|\gamma_{Q}\|_{1}\leq B_{Q}\). This also implies the bound \(\sup_{\mu\in\mathcal{P}(\Omega)}\left|\widetilde{\mathcal{S}}(\mu)\right|\leq B_{G}\). Then we take the connection matrix and bias vector of the third hidden layer as \[F^{(3)}=\frac{N}{B_{Q}}\left[\begin{array}{c}\mathcal{L}_{N}^{T}\left(\left\{g\left(B_{Q}t_{k}\right)\right\}_{k=2}^{2N+2}\right)\\ \vdots\\ \mathcal{L}_{N}^{T}\left(\left\{g\left(B_{Q}t_{k}\right)\right\}_{k=2}^{2N+2}\right)\end{array}\right]\in\mathbb{R}^{(2N+3)\times(2N+3)},\quad b^{(3)}=B_{G}\left[\begin{array}{c}t_{1}\\ t_{2}\\ \vdots\\ t_{2N+3}\end{array}\right]\in\mathbb{R}^{2N+3},\] so that the output of the third hidden layer \(h^{(3)}(\mu)\in\mathbb{R}^{2N+3}\) is \[\left(h^{(3)}(\mu)\right)_{j}=\sigma\left(\widetilde{\mathcal{S}}(\mu)-B_{G}t_{j}\right),\quad j=1,2,\ldots,2N+3. \tag{4.12}\] The free parameters in the third hidden layer are only \(\frac{N}{B_{Q}}\mathcal{L}_{N}\left(\left\{g\left(B_{Q}t_{k}\right)\right\}_{k=2}^{2N+2}\right)\) and \(B_{G}t_{j}\), thus the number of free parameters in this layer is \[\mathcal{N}_{3}=2N+3+2N+3=4N+6.\] Finally, choose \(c=\frac{N}{B_{G}}\mathcal{L}_{N}\left(\left\{f\left(B_{G}t_{k}\right)\right\}_{k=2}^{2N+2}\right)\in\mathbb{R}^{2N+3}\); according to Lemma 2, we have \[\sup_{\mu\in\mathcal{P}(\Omega)}\left|c\cdot h^{(3)}(\mu)-f(\widetilde{\mathcal{S}}(\mu))\right|\leq\frac{2B_{G}^{\beta}\left|f\right|_{C^{0,\beta}}}{N^{\beta}}. \tag{4.13}\] Then we can derive the approximation rate of our constructed FNN by combining (4.11) and (4.13), and using the Lipschitz-\(\beta\) condition of \(f\), that is \[\begin{split}&\sup_{\mu\in\mathcal{P}(\Omega)}\Big{|}c\cdot h^{(3)}(\mu)-f(L_{G}^{Q}(\mu))\Big{|}\\ &\leq\sup_{\mu\in\mathcal{P}(\Omega)}\Big{|}c\cdot h^{(3)}(\mu)-f(\widetilde{\mathcal{S}}(\mu))\Big{|}+|f|_{C^{0,\beta}}\sup_{\mu\in\mathcal{P}(\Omega)}\Big{(}\Big{|}L_{G}^{Q}(\mu)-\widetilde{\mathcal{S}}(\mu)\Big{|}\Big{)}^{\beta}\\ &\leq\frac{2B_{G}^{\beta}\,|f|_{C^{0,\beta}}}{N^{\beta}}+|f|_{C^{0,\beta}}\left(\frac{3B_{Q}|g|_{C^{0,1}}}{N}\right)^{\beta}=\frac{2B_{G}^{\beta}\,|f|_{C^{0,\beta}}+|f|_{C^{0,\beta}}\,(3B_{Q}|g|_{C^{0,1}})^{\beta}}{N^{\beta}}.\end{split} \tag{4.14}\] From the expression \(c=\frac{N}{B_{G}}\mathcal{L}_{N}\left(\{f\left(B_{G}t_{k}\right)\}_{k=2}^{2N+2}\right)\), we know \(\|c\|_{\infty}\leq\frac{4\|f\|_{\infty}N}{B_{G}}\). Then we have (2.16).
The free parameters of the whole network are from the first, second and third hidden layers and the coefficient vector \(c\); therefore the total number of free parameters is \[\mathcal{N}=\mathcal{N}_{1}+\mathcal{N}_{2}+\mathcal{N}_{3}+2N+3=(2q+10)N+(d+q)n_{q}+3q+15. \tag{4.15}\] This proves Theorem 2. ### Compactness of the hypothesis space \(\mathcal{H}_{(2,3),R,N}\) In this section, we rigorously prove the compactness of \(\mathcal{H}_{(2,3),R,N}\). The argument is based on a general version of the Ascoli-Arzela theorem for an abstract framework [10, Theorem 19.1c]. We first recall some notations on general equi-continuity and equi-boundedness. Let \(\{f_{n}\}\) be a countable collection of continuous functions from a separable topological space \((X,\mathcal{T})\) into a metric space \((Y,d_{Y})\). The functions \(\{f_{n}\}\) are equibounded at \(x\) if the closure in \((Y,d_{Y})\) of the set \(\{f_{n}(x)\}\) is compact. The functions \(\{f_{n}\}\) are equi-continuous at a point \(x\in X\) if for every \(\varepsilon>0\), there exists an open set \(\mathcal{O}\in\mathcal{T}\) containing \(x\) such that \(d_{Y}\left(f_{n}(x),f_{n}(y)\right)\leq\varepsilon\) for all \(y\in\mathcal{O}\) and all \(n\in\mathbb{N}\). Then the general Ascoli-Arzela theorem can be stated as follows. **Lemma 3**.: _Let \(\{f_{n}\}\) be a sequence of continuous functions from a separable topological space \((X,\mathcal{T})\) into a metric space \((Y,d_{Y})\). Assume that the functions \(\{f_{n}\}\) are equibounded and equi-continuous at each \(x\in X\). Then, there exists a subsequence \(\{f_{n^{\prime}}\}\subset\{f_{n}\}\) and a continuous function \(f:X\to Y\) such that \(\{f_{n^{\prime}}\}\to f\) pointwise in \(X.\) Moreover the convergence is uniform on compact subsets of \(X.\)_ Proof of Theorem 3.: The idea is to show that \(\mathcal{H}_{(2,3),R,N}\) is a sequentially compact subset of the metric space \((C(\mathcal{P}(\Omega)),\|\cdot\|_{\infty})\); then \(\mathcal{H}_{(2,3),R,N}\) is compact. We already know from the aforementioned argument that the Wasserstein space \((\mathcal{P}(\Omega),W_{p})\) is a compact separable metric space. Hence, it is naturally a separable topological space with the topology induced by the Wasserstein metric \(\mathcal{T}=W_{p}\). Consider any countable collection of functions \(\{f_{n}\}\subset\mathcal{H}_{(2,3),R,N}\) with \[f_{n}:(\mathcal{P}(\Omega),W_{p})\rightarrow(\mathbb{R},|\cdot|). \tag{4.16}\] We first show that \(\{f_{n}\}\) is equi-continuous at any point \(\mu\in{\cal P}(\Omega)\).
From the structure of space \({\cal H}_{(2,3),R,N}\), we know that for any function \(f_{n}\) in the collection, there exist \(c_{f_{n}}\), \(F_{f_{n}}^{(j)}\), \(b_{f_{n}}^{(j)}\) with \(\|F_{f_{n}}^{(j)}\|_{\infty}\leq RN^{2}\), \(\|b_{f_{n}}^{(j)}\|_{\infty}\leq R\), \(\|c_{f_{n}}\|_{\infty}\leq RN\), for \(j=1,2,3\) and any \(n\in\mathbb{N}\) such that \[f_{n}(\mu)=c_{f_{n}}\cdot\sigma\left(F_{f_{n}}^{(3)}\int_{\Omega}H_{f_{n}}^{(2 )}(x)d\mu(x)-b_{f_{n}}^{(3)}\right) \tag{4.17}\] with \[H_{f_{n}}^{(2)}(x)=\sigma\left(F_{f_{n}}^{(2)}\sigma(F_{f_{n}}^{(1)}x-b_{f_{n} }^{(1)})-b_{f_{n}}^{(2)}\right).\] Then for any \(\mu,\nu\in{\cal P}(\Omega)\) and \(n\in\mathbb{N}\), \[|f_{n}(\mu)-f_{n}(\nu)|\] \[=\left|c_{f_{n}}\cdot\left\{\sigma\left(F_{f_{n}}^{(3)}\int_{ \Omega}H_{f_{n}}^{(2)}(x)d\mu(x)-b_{f_{n}}^{(3)}\right)-\sigma\left(F_{f_{n}} ^{(3)}\int_{\Omega}H_{f_{n}}^{(2)}(x)d\nu(x)-b_{f_{n}}^{(3)}\right)\right\}\right|\] \[\leq\|c_{f_{n}}\|_{1}\|F_{f_{n}}^{(3)}\|_{\infty}\left\|\int_{ \Omega}H_{f_{n}}^{(2)}(x)d\mu(x)-\int_{\Omega}H_{f_{n}}^{(2)}(x)d\nu(x)\right\| _{\infty}. \tag{4.18}\] The definition of the infinity norm for the vector implies that the above bound equals \[\|c_{f_{n}}\|_{1}\|F_{f_{n}}^{(3)}\|_{\infty}\max_{1\leq i\leq 2N+3}\left| \int_{\Omega}\left(H_{f_{n}}^{(2)}(x)\right)_{i}d\mu(x)-\int_{\Omega}\left(H_ {f_{n}}^{(2)}(x)\right)_{i}d\nu(x)\right|\] Recall the well-known duality formula for the Kantorovich-Rubinstein metric \[W_{1}(\mu,\nu)=\sup_{\psi:\|\psi\|_{C^{0,1}}\leq 1}\left\{\int_{\Omega}\psi d \mu-\int_{\Omega}\psi d\nu\right\}.\] Note that \(\|c_{f_{n}}\|_{1}\leq(2N+3)\|c_{f_{n}}\|_{\infty}\), we have (4.18) is bounded by \[(2N+3)\|c_{f_{n}}\|_{\infty}\|F_{f_{n}}^{(3)}\|_{\infty}\max_{1\leq i\leq 2N +3}\left\|\left(H_{f_{n}}^{(2)}\right)_{i}\right\|_{C^{0,1}}W_{1}(\mu,\nu).\] Also, for any \(x,y\in\Omega\) and \(n\in\mathbb{N}\), note that for any \(1\leq i\leq 2N+3\) there holds \[\left|\left(H_{f_{n}}^{(2)}(x)\right)_{i}-\left(H_{f_{n}}^{(2)}(y )\right)_{i}\right|\leq\left\|H_{f_{n}}^{(2)}(x)-H_{f_{n}}^{(2)}(y)\right\|_{\infty}\] \[=\left\|\sigma\left(F_{f_{n}}^{(2)}\sigma(F_{f_{n}}^{(1)}x-b_{f_{ n}}^{(1)})-b_{f_{n}}^{(2)}\right)-\sigma\left(F_{f_{n}}^{(2)}\sigma(F_{f_{n}}^{(1)}y -b_{f_{n}}^{(1)})-b_{f_{n}}^{(2)}\right)\right\|_{\infty}\] \[\leq\|F_{f_{n}}^{(2)}\|_{\infty}\left\|F_{f_{n}}^{(1)}(x-y)\right\| _{\infty}\leq\|F_{f_{n}}^{(2)}\|_{\infty}\|F_{f_{n}}^{(1)}\|_{\infty}\|x-y\|_{ 2}.\] Then it follows that \[\max_{1\leq i\leq 2N+3}\left\|\left(H_{f_{n}}^{(2)}\right)_{i}\right\|_{C^{0,1} }\leq\|F_{f_{n}}^{(2)}\|_{\infty}\|F_{f_{n}}^{(1)}\|_{\infty}\] and \[|f_{n}(\mu)-f_{n}(\nu)|\leq(2N+3)\|c_{f_{n}}\|_{\infty}\prod_{j=1}^{3}\|F_{f_{ n}}^{(j)}\|_{\infty}W_{1}(\mu,\nu)\leq(2N+3)R^{4}N^{7}W_{1}(\mu,\nu),\ \forall\ n\in\mathbb{N}.\] We know from [34] that there holds \(W_{1}(\mu,\nu)\leq W_{p}(\mu,\nu)\) for any \(p\in[1,\infty)\) which is in fact as a result of Holder's inequality, then we arrive at \[|f_{n}(\mu)-f_{n}(\nu)|\leq(2N+3)R^{4}N^{7}W_{p}(\mu,\nu),\text{ for any }\mu,\nu\in\mathcal{P}(\Omega),n\in\mathbb{N}. 
\tag{4.19}\] Then we know that, at any point \(\mu\in\mathcal{P}(\Omega)\), for every \(\epsilon>0\), there always exists an open ball in the space \((\mathcal{P}(\Omega),W_{p})\) centered at \(\mu\) in the form of \[\mathcal{O}_{\mu,\epsilon}=\left\{\nu\in\mathcal{P}(\Omega):W_{p}(\nu,\mu)< \frac{\epsilon}{(2N+3)R^{4}N^{7}}\right\}\] such that \(|f_{n}(\mu)-f_{n}(\nu)|\leq\epsilon\) for all \(\nu\in\mathcal{O}_{\mu,\epsilon}\) and all \(n\in\mathbb{N}\). That is to say, for any countable collection \(\{f_{n}\}\subset\mathcal{H}_{(2,3),R,N}\), the equi-continuity of \(\{f_{n}\}\) holds at any point of \(\mathcal{P}(\Omega)\). For the equi-boundedness of \(\{f_{n}\}\), we only need to show that for any \(\mu\in\mathcal{P}(\Omega)\) and any \(n\in\mathbb{N}\), the function \(f_{n}\) in (4.17) as a map of (4.16) is uniformly bounded in \(\mathbb{R}\). To this end, since \[f_{n}(\mu)=c_{f_{n}}\cdot h_{f_{n}}^{(3)}(\mu)\] for some \(h_{f_{n}}^{(3)}\) defined layer by layer as in definition 1. Then for \(h_{f_{n}}^{(1)}(x):=\sigma\left(F_{f_{n}}^{(1)}x-b_{f_{n}}^{(1)}\right)\), \[\left\|h_{f_{n}}^{(1)}(x)\right\|_{\infty}\leq\|F_{f_{n}}^{(1)}\|_{\infty}\|x \|_{\infty}+\|b_{f_{n}}^{(1)}\|_{\infty}\leq RN^{2}+R\leq 2RN^{2},\quad\forall n \in\mathbb{N}.\] For \(h_{f_{n}}^{(2)}(\mu)\), we have \[\left\|h_{f_{n}}^{(2)}(\mu)\right\|_{\infty}=\left\|\int_{\Omega} \sigma\left(F_{f_{n}}^{(2)}h_{f_{n}}^{(1)}(x)-b_{f_{n}}^{(2)}\right)d\mu \right\|_{\infty}\leq\int_{\Omega}\left\|\sigma\left(F_{f_{n}}^{(2)}h_{f_{n} }^{(1)}(x)-b_{f_{n}}^{(2)}\right)\right\|_{\infty}d\mu\\ \leq\|F_{f_{n}}^{(2)}\|_{\infty}\left\|h_{f_{n}}^{(1)}(x)\right\| _{\infty}+\|b_{f_{n}}^{(2)}\|_{\infty}\leq(2R^{2}+R)N^{4},\ \forall n\in\mathbb{N}.\] Then for the third hidden layer, \[\left\|h_{f_{n}}^{(3)}(\mu)\right\|_{\infty}\leq\|F_{f_{n}}^{(3)}\|_{\infty} \left\|h_{f_{n}}^{(2)}(\mu)\right\|_{\infty}+\|b_{f_{n}}^{(3)}\|_{\infty}\leq (2R^{3}+R^{2}+R)N^{6},\forall n\in\mathbb{N}.\] Finally, \[|f_{n}(\mu)|=|c_{f_{n}}\cdot h_{f_{n}}^{(3)}(\mu)|\leq\|c_{f_{n}}\|_{1}\|h_{f _{n}}^{(3)}(\mu)\|_{\infty}\leq(2N+3)(2R^{4}+R^{3}+R^{2})N^{7},\forall n\in \mathbb{N}. \tag{4.20}\] From the above process, we have in fact shown that \(\sup_{\mu\in\mathcal{P}(\Omega)}|f_{n}(\mu)|\leq(2N+3)(2R^{4}+R^{3}+R^{2})N^{7}\) holds for all \(n\in\mathbb{N}\). Hence the equi-boundedness of \(\{f_{n}\}\) holds at any point of \(\mathcal{P}(\Omega)\). Combining the above arguments and using the fact that a subset of a metric space is compact if and only if it is a sequentially compact set, we obtain that \(\mathcal{H}_{(2,3),R,N}\) is compact. ### Covering number of space \(\mathcal{H}_{(2,3),R,N}\) The generalization (estimation) error bound can be derived by a bias and variance trade-off method. The bias corresponds to the approximation rate of our proposed hypothesis space for approximating the target function, which is shown in Theorem 2, while we also need to demonstrate the approximation ability of our proposed hypothesis space by showing that the parameters of our constructed FNN can actually be bounded as required in the hypothesis space. The variance term can be measured by the complexity of the hypothesis space, such as pseudo dimension, Rademacher complexity and covering number, and we utilize the covering number as the tool of estimating the complexity. We first bound the parameters in our constructed FNN structure in the following lemma. 
**Lemma 4**.: _Let \(Q\) be a polynomial of degree \(q\) on \(\Omega\), \(g\in C^{0,1}[-B_{Q},B_{Q}]\), \(f\in C^{0,\beta}[-B_{G},B_{G}]\), for some \(0<\beta\leq 1\). Then for the FNN constructed in the proof of Theorem 2, there exists a constant \(R=R_{d,Q,f,g}\) depending on \(d,Q,f,g\) such that for \(j=1,2,3\),_ \[\|F^{(j)}\|_{\infty}\leq RN^{2},\quad\|b^{(j)}\|_{\infty}\leq R,\quad\|c\|_{\infty}\leq RN.\] Proof.: For the first hidden layer, since \(|\xi|=1\) and \(|t_{j}|\leq 2\) for all \(j=1,2,\ldots,2N+3\), we have that \(\|F^{(1)}\|_{\infty}\leq\sqrt{d}\) and \(\|b^{(1)}\|_{\infty}\leq 2\). For the second hidden layer, since \(\|v^{[\ell]}\|_{\infty}\leq 4\) for all \(\ell=1,2,\ldots,q\), and \(\|Q\|_{C(\Omega)}=\widehat{B}_{Q}\leq B_{Q}\), we have that \(\|F^{(2)}\|_{\infty}\leq 4\|\gamma_{Q}\|_{1}N(2N+3)\) and \(\|b^{(2)}\|_{\infty}\leq 3B_{Q}\). For the third hidden layer, since \(\|g\|_{C([-B_{Q},B_{Q}])}=B_{g}\leq B_{G}\), we have that \(\|F^{(3)}\|_{\infty}\leq\frac{4B_{G}}{B_{Q}}N(2N+3)\) and \(\|b^{(3)}\|_{\infty}\leq 2B_{G}\). For the coefficient vector \(c\), we have that \(\|c\|_{\infty}\leq\frac{4\|f\|_{\infty}}{B_{G}}N\). Thus we finish the proof by choosing \[R=\max\left\{2\sqrt{d},20\|\gamma_{Q}\|_{1},3B_{Q},\frac{20B_{G}}{B_{Q}},2B_{G},\frac{4\|f\|_{\infty}}{B_{G}}\right\}.\] In the following lemma, we bound the covering number of our hypothesis space \(\mathcal{H}_{(2,3),R,N}\), which is then used to derive an upper bound of the estimation error for the ERM algorithm. **Lemma 5**.: _For \(R\geq 1,N\in\mathbb{N}\), and \(\widehat{R}=3R^{4}(10n_{q}+d+18)\), there exist two constants \(T_{1},T_{2}\) depending on \(d,q\) such that_ \[\log\mathcal{N}\left(\mathcal{H}_{(2,3),R,N},\epsilon,\|\cdot\|_{\infty}\right)\leq T_{1}N\log\frac{\widehat{R}}{\epsilon}+T_{2}N\log N.\] Proof.: Denote \(h^{(1)}(x):=\sigma\left(F^{(1)}x-b^{(1)}\right)\), then \[\left\|h^{(1)}(x)\right\|_{\infty}\leq\|F^{(1)}\|_{\infty}\|x\|_{\infty}+\|b^{(1)}\|_{\infty}\leq RN^{2}+R\leq 2RN^{2}.\] For the second layer, we have \[\left\|h^{(2)}(\mu)\right\|_{\infty}=\left\|\int_{\Omega}\sigma\left(F^{(2)}h^{(1)}(x)-b^{(2)}\right)d\mu\right\|_{\infty}\leq\sup_{\mu\in\mathcal{P}(\Omega)}\int_{\Omega}\left\|\sigma\left(F^{(2)}h^{(1)}(x)-b^{(2)}\right)\right\|_{\infty}d\mu\] \[\leq\|F^{(2)}\|_{\infty}\left\|h^{(1)}(x)\right\|_{\infty}+\|b^{(2)}\|_{\infty}\leq 2R^{2}N^{4}+R\leq 3R^{2}N^{4}.\] Then for the third hidden layer, \[\left\|h^{(3)}(\mu)\right\|_{\infty}\leq\|F^{(3)}\|_{\infty}\left\|h^{(2)}(\mu)\right\|_{\infty}+\|b^{(3)}\|_{\infty}\leq 3R^{3}N^{6}+R\leq 4R^{3}N^{6}.\] If we choose another functional \(\widehat{c}\cdot\widehat{h}^{(3)}(\mu)\) in the hypothesis space \(\mathcal{H}_{(2,3),R,N}\) induced by \(\widehat{F}^{(j)}\), \(\widehat{b}^{(j)}\) and \(\widehat{c}\), satisfying the restriction that \[\left|F_{ik}^{(j)}-\widehat{F}_{ik}^{(j)}\right|\leq\epsilon,\ \left\|b^{(j)}-\widehat{b}^{(j)}\right\|_{\infty}\leq\epsilon,\ \text{and}\left\|c-\widehat{c}\right\|_{\infty}\leq\epsilon,\] then by the Lipschitz property of ReLU, \[\left\|h^{(1)}(x)-\widehat{h}^{(1)}(x)\right\|_{\infty}\leq\left\|F^{(1)}-\widehat{F}^{(1)}\right\|_{\infty}\left\|x\right\|_{\infty}+\left\|b^{(1)}-\widehat{b}^{(1)}\right\|_{\infty}\leq(d+1)\epsilon.\] For the second layer, note that \(\left|\mathbb{E}(\xi)\right|\leq\mathbb{E}\left(\left|\xi\right|\right)\) for any random variable \(\xi\) and that the expectation is bounded by the supremum of \(\xi\); utilizing the Lipschitz property of ReLU,
\[\left\|h^{(2)}(\mu)-\widehat{h}^{(2)}(\mu)\right\|_{\infty}\leq \left\|\int_{\Omega}\sigma\left(F^{(2)}h^{(1)}(x)-b^{(2)}\right)d\mu-\int_{ \Omega}\sigma\left(\widehat{F}^{(2)}\widehat{h}^{(1)}(x)-\widehat{b}^{(2)} \right)d\mu\right\|_{\infty}\] \[\leq \sup_{\mu\in\mathcal{P}(\Omega)}\int_{\Omega}\left\|\sigma \left(F^{(2)}h^{(1)}(x)-b^{(2)}\right)-\sigma\left(\widehat{F}^{(2)}\widehat{h }^{(1)}(x)-\widehat{b}^{(2)}\right)\right\|_{\infty}d\mu\] \[\leq \left\|F^{(2)}\left(h^{(1)}(x)-\widehat{h}^{(1)}(x)\right) \right\|_{\infty}+\left\|\left(F^{(2)}-\widehat{F}^{(2)}\right)\widehat{h}^{( 1)}(x)\right\|_{\infty}+\left\|b^{(2)}-\widehat{b}^{(2)}\right\|_{\infty}\] \[\leq RN^{2}(d+1)\epsilon+2RN^{2}n_{q}(2N+3)\epsilon+\epsilon\leq(1 0n_{q}+d+2)RN^{3}\epsilon,\] where \(\|\cdot\|_{\infty}\) in above inequalities obeys the notations at the end of the introduction, specifically, \(\|\cdot\|_{\infty}\) on the right hand side of the first inequality is taken w.r.t. vector of functions with variables in \(\mathcal{P}(\Omega)\). Similarly, for the third hidden layer, \[\left\|h^{(3)}(\mu)-\widehat{h}^{(3)}(\mu)\right\|_{\infty}\leq \left\|F^{(3)}\left(h^{(2)}(x)-\widehat{h}^{(2)}(x)\right)\right\|_{\infty}+ \left\|\left(F^{(3)}-\widehat{F}^{(3)}\right)\widehat{h}^{(2)}(x)\right\|_{\infty}\] \[+\left\|b^{(3)}-\widehat{b}^{(3)}\right\|_{\infty}\leq RN^{2}(10n _{q}+d+2)RN^{3}\epsilon+3R^{2}N^{4}(2N+3)\epsilon+\epsilon\] \[\leq(10n_{q}+d+18)R^{2}N^{5}\epsilon.\] Finally, the output of these two functionals satisfy \[\left\|c\cdot h^{(3)}(\mu)-\widehat{c}\cdot\widehat{h}^{(3)}(\mu )\right\|_{\infty}\leq\left\|c\cdot\left(h^{(3)}(\mu)-\widehat{h}^{(3)}(\mu )\right)\right\|_{\infty}+\left\|(c-\widehat{c})\cdot\widehat{h}^{(3)}(\mu) \right\|_{\infty}\] \[\leq(2N+3)RN(10n_{q}+d+18)R^{2}N^{5}\epsilon+(2N+3)4R^{3}N^{6} \epsilon\leq 5(10n_{q}+d+22)R^{3}N^{7}\epsilon=:\widetilde{\epsilon}. \tag{4.21}\] Therefore, by taking a \(\epsilon\)-net of each free parameter in \(F^{(j)}\), \(b^{(j)}\) and \(c\), the \(\widetilde{\epsilon}\)-covering number of \(\mathcal{H}_{(2,3),R,N}\) can be bounded by \[\mathcal{N}\left(\mathcal{H}_{(2,3),R,N},\widetilde{\epsilon}, \left\|\cdot\right\|_{\infty}\right)\leq\left\lceil\frac{2RN^{2}}{\epsilon} \right\rceil^{n_{q}d+qn_{q}+q(2N+3)+2N+3}\left\lceil\frac{2R}{\epsilon}\right \rceil^{6N+9}\left\lceil\frac{2RN}{\epsilon}\right\rceil^{2N+3} \tag{4.22}\] \[\leq \left(\frac{3R}{\epsilon}\right)^{(2q+10)N+(d+q)n_{q}+3q+15}N^{(4 q+6)N+2(d+q)n_{q}+6q+9}\leq\left(\frac{\widehat{R}}{\widetilde{\epsilon}} \right)^{T_{1}N}N^{T_{2}N},\] where \(\widehat{R}=15R^{4}(10n_{q}+d+18)\), \(T_{1}=(d+q)n_{q}+5q+5\), and \(T_{2}=9(d+q)n_{q}+45q+190\). Thus we finish the proof by taking the logarithm. ### Proof of Theorem 4 We first prove the two-stage error decomposition which is crucial for the proof of Theorem 4. 
Proof of Proposition 1.: With simple computations, we have \[\mathcal{E}\left(\pi_{M}f_{\hat{D},R,N}\right)-\mathcal{E}\left(f_{\rho}\right)=\mathcal{E}\left(\pi_{M}f_{\hat{D},R,N}\right)-\mathcal{E}_{D}\left(\pi_{M}f_{\hat{D},R,N}\right)+\mathcal{E}_{D}\left(\pi_{M}f_{\hat{D},R,N}\right)\] \[-\mathcal{E}_{\hat{D}}\left(\pi_{M}f_{\hat{D},R,N}\right)+\mathcal{E}_{\hat{D}}\left(\pi_{M}f_{\hat{D},R,N}\right)-\mathcal{E}_{\hat{D}}\left(h\right)+\mathcal{E}_{\hat{D}}\left(h\right)-\mathcal{E}_{D}\left(h\right)+\mathcal{E}_{D}\left(h\right)-\mathcal{E}(h)\] \[+\mathcal{E}(h)-\mathcal{E}(f_{\rho}).\] Then the first decomposition follows from the fact \(\mathcal{E}_{\hat{D}}\left(\pi_{M}f_{\hat{D},R,N}\right)\leq\mathcal{E}_{\hat{D}}\left(h\right)\). The second, further decomposition follows immediately by inserting \(\pm\mathcal{E}(f_{\rho})\), \(\pm\mathcal{E}_{D}(f_{\rho})\) into the above terms. We introduce several concentration inequalities based on Bernstein's inequality and Hoeffding's inequality, which can be found in or derived from [6], [29] and [7]. **Lemma 6** (A-inequality).: _Let \(\mathcal{G}\) be a set of continuous functions on a probability space \(\mathcal{Z}\) such that, for some \(B>0,c>0\), \(|G-\mathbb{E}(G)|\leq B\) almost surely and \(\mathbb{E}\left(G^{2}\right)\leq c\mathbb{E}(G)\) for all \(G\in\mathcal{G}\), then for any \(\epsilon>0\) and \(0<\alpha\leq 1\),_ \[\text{Prob}\left\{\sup_{G\in\mathcal{G}}\frac{\mathbb{E}(G)-\frac{1}{m}\sum_{i=1}^{m}G(z_{i})}{\sqrt{\mathbb{E}(G)+\epsilon}}>4\alpha\sqrt{\epsilon}\right\}\leq\mathcal{N}\left(\mathcal{G},\alpha\epsilon,\|\cdot\|_{\infty}\right)\exp\left\{-\frac{\alpha^{2}m\epsilon}{2c+\frac{2B}{3}}\right\}.\] **Lemma 7** (B-inequality).: _Let \(\xi\) be a random variable on a probability space \(\mathcal{Z}\) with mean \(\mathbb{E}(\xi)\) and variance \(\sigma^{2}(\xi)=\sigma^{2}\). If \(|\xi(z)-\mathbb{E}(\xi)|\leq M_{\xi}\) for almost all \(z\in\mathcal{Z}\), then for any \(\epsilon>0\),_ \[\text{Prob}\left\{\frac{1}{m}\sum_{i=1}^{m}\xi(z_{i})-\mathbb{E}(\xi)>\epsilon\right\}\leq\exp\left\{-\frac{m\epsilon^{2}}{2\left(\sigma^{2}+\frac{1}{3}M_{\xi}\epsilon\right)}\right\}.\] **Lemma 8** (C-inequality).: _Let \(\mathcal{H}_{2}\) be a set of continuous functions on a probability space \(\mathcal{X}\) such that, for some \(M_{\mathcal{H}_{2}}>0\), \(|f-\mathbb{E}(f)|\leq M_{\mathcal{H}_{2}}\) almost surely for all \(f\in\mathcal{H}_{2}\), then for any \(\epsilon>0\),_ \[\text{Prob}\left\{\sup_{f\in\mathcal{H}_{2}}\left|\mathbb{E}(f)-\frac{1}{n}\sum_{i=1}^{n}f(x_{i})\right|>\epsilon\right\}\leq 2\mathcal{N}\left(\mathcal{H}_{2},\frac{\epsilon}{4},\|\cdot\|_{\infty}\right)\exp\left\{-\frac{n\epsilon^{2}}{8M_{\mathcal{H}_{2}}^{2}}\right\}.\] Now we prove the oracle inequality for the distribution regression framework based on the two-stage error decomposition method shown in Proposition 1 and these concentration inequalities. Proof of Theorem 4.: We denote \(\mathcal{H}=\mathcal{H}_{(2,3),R,N}\). The two-stage error decomposition and the basic fact \(\left\|\pi_{M}f_{\hat{D},R,N}-f_{\rho}\right\|_{\rho}^{2}=\mathcal{E}\left(\pi_{M}f_{\hat{D},R,N}\right)-\mathcal{E}\left(f_{\rho}\right)\) imply that \(\left\|\pi_{M}f_{\hat{D},R,N}-f_{\rho}\right\|_{\rho}^{2}\) can be bounded by the sum of \(I_{1}(D,\mathcal{H})\), \(I_{2}(D,\mathcal{H})\), \(\left|I_{3}(\hat{D},\mathcal{H})\right|\), \(\left|I_{4}(\hat{D},\mathcal{H})\right|\) and \(R(\mathcal{H})\).
We first derive a bound for \(I_{1}(D,\mathcal{H})\) in a probability form. Consider the functional class \[\mathcal{G}:=\left\{G=\left(\pi_{M}f(\mu)-y\right)^{2}-\left(f_{\rho}(\mu)-y \right)^{2}:f\in\mathcal{H}_{(2,3),R,N}\right\}.\] For any fixed \(G\in\mathcal{G}\), there exists a \(f\in\mathcal{H}_{(2,3),R,N}\) such that \(G(z)=\left(\pi_{M}f(\mu)-y\right)^{2}-\left(f_{\rho}(\mu)-y\right)^{2}\), and \[\mathbb{E}(G)=\mathcal{E}\left(\pi_{M}f\right)-\mathcal{E}\left(f_{\rho} \right)=\left\|\pi_{M}f-f_{\rho}\right\|_{\rho}^{2},\] \[\frac{1}{m}\sum_{i=1}^{m}G(z_{i})=\mathcal{E}_{D}\left(\pi_{M}f\right)- \mathcal{E}_{D}\left(f_{\rho}\right).\] Furthermore, since \(\left|\pi_{M}f(\mu)\right|\leq M\), \(\left|f_{\rho}(\mu)\right|\leq M\) and \(\left|y\right|\leq M\) almost surely, we have \[\left|G(z)\right|=\left|\left(\pi_{M}f(\mu)-f_{\rho}(\mu)\right)\left(\pi_{M} f(\mu)+f_{\rho}(\mu)-2y\right)\right|\leq 8M^{2}.\] Thus \(\left|G(z)-\mathbb{E}(G)\right|\leq 16M^{2}\) and \(\mathbb{E}\left(G^{2}\right)\leq 16M^{2}\left\|\pi_{M}f-f_{\rho}\right\|_{\rho}^{2}=16M^{2} \mathbb{E}(G)\). Moreover, since for any \(f_{1},f_{2}\in\mathcal{H}_{(2,3),R,N}\), \[\left|\left(\pi_{M}f_{1}(\mu)-y\right)^{2}-\left(\pi_{M}f_{2}(\mu)-y\right)^ {2}\right|\leq 4M\left|\pi_{M}f_{1}(\mu)-\pi_{M}f_{2}(\mu)\right|\leq 4M\left|f_{1 }(\mu)-f_{2}(\mu)\right|,\] an \(\frac{\epsilon}{4M}\)-covering of \(\mathcal{H}_{(2,3),R,N}\) results in an \(\epsilon\)-covering of \(\mathcal{G}\), we have \[\mathcal{N}\left(\mathcal{G},\epsilon,\|\cdot\|_{\infty}\right)\leq\mathcal{N }\left(\mathcal{H}_{(2,3),R,N},\frac{\epsilon}{4M},\|\cdot\|_{\infty}\right).\] Therefore, by applying A-inequality to \(\mathcal{G}\) with \(B=c=16M^{2}\) and \(\alpha=\frac{1}{4}\), we have that \[\text{Prob}\left\{\sup_{f\in\mathcal{H}_{(2,3),R,N}}\frac{\mathcal{ E}\left(\pi_{M}f\right)-\mathcal{E}\left(f_{\rho}\right)-\left(\mathcal{E}_{D} \left(\pi_{M}f\right)-\mathcal{E}_{D}\left(f_{\rho}\right)\right)}{\sqrt{ \mathcal{E}\left(\pi_{M}f\right)-\mathcal{E}\left(f_{\rho}\right)+\epsilon}}> \sqrt{\epsilon}\right\}\] \[\leq\mathcal{N}\left(\mathcal{G},\frac{\epsilon}{4},\|\cdot\|_{ \infty}\right)\exp\left\{-\frac{3m\epsilon}{2048M^{2}}\right\}\leq\mathcal{N} \left(\mathcal{H}_{(2,3),R,N},\frac{\epsilon}{16M},\|\cdot\|_{\infty}\right) \exp\left\{-\frac{3m\epsilon}{2048M^{2}}\right\}.\] Since \(\sqrt{\epsilon\left(\mathcal{E}\left(\pi_{M}f\right)-\mathcal{E}\left(f_{\rho} \right)+\epsilon\right)}\leq\frac{1}{2}\left(\mathcal{E}\left(\pi_{M}f\right)- \mathcal{E}\left(f_{\rho}\right)\right)+\epsilon\), by choosing \(f=f_{\hat{D},R,N}\), we conclude that \[\begin{array}{l}\mathit{Prob}\left\{I_{1}(D,\mathcal{H})>\frac{1}{2}\left( \mathcal{E}\left(\pi_{M}f_{\hat{D},R,N}\right)-\mathcal{E}\left(f_{\rho} \right)\right)+\epsilon\right\}\\ \leq\mathcal{N}\left(\mathcal{H}_{(2,3),R,N},\frac{\epsilon}{16M},\| \cdot\|_{\infty}\right)\exp\left\{-\frac{3m\epsilon}{2048M^{2}}\right\}.\end{array} \tag{4.23}\] Next we derive a bound for \(I_{2}(D,\mathcal{H})\) in a probability form. 
Consider the random variable \(\xi\) on \(\mathcal{Z}\) defined by \[\xi(z)=\left(h(\mu)-y\right)^{2}-\left(f_{\rho}(\mu)-y\right)^{2}.\] Since \(|f_{\rho}(\mu)|\leq M\) and \(|y|\leq M\) almost surely, we have \[\begin{array}{l}|\xi(z)|\leq\left(3M+\|h\|_{\infty}\right)^{2},\quad|\xi- \mathbb{E}(\xi)|\leq 2\left(3M+\|h\|_{\infty}\right)^{2},\\ \sigma^{2}\leq\mathbb{E}(\xi^{2})\leq\left(3M+\|h\|_{\infty}\right)^{2}R( \mathcal{H}).\end{array}\] Hence, by applying B-inequality with \(M_{\xi}=2\left(3M+\|h\|_{\infty}\right)^{2}\), we conclude that \[\mathit{Prob}\left\{I_{2}(D,\mathcal{H})>\epsilon\right\}\leq\exp\left\{- \frac{m\epsilon^{2}}{2\left(\sigma^{2}+\frac{1}{3}M_{\xi}\epsilon\right)} \right\}\leq\exp\left\{-\frac{m\epsilon^{2}}{2\left(3M+\|h\|_{\infty}\right) ^{2}\left(R(\mathcal{H})+\frac{2}{3}\epsilon\right)}\right\}. \tag{4.24}\] Then we derive a bound for \(\left|I_{3}(\hat{D},\mathcal{H})\right|\) in a probability form. Since \(\left\|\pi_{M}f_{\hat{D},R,N}\right\|_{\infty}\leq M\) and \(|y_{i}|\leq M\), \[\begin{array}{l}\left|I_{3}(\hat{D},\mathcal{H})\right|=&\left|\frac{1}{m} \sum_{i=1}^{m}\left(\pi_{M}f_{\hat{D},R,N}\left(\hat{\mu}_{i}^{n}\right)-y_{i} \right)^{2}-\left(\pi_{M}f_{\hat{D},R,N}\left(\mu_{i}\right)-y_{i}\right)^{2 }\right|\\ \leq&\frac{1}{m}\sum_{i=1}^{m}4M\left|f_{\hat{D},R,N}\left(\hat{\mu }_{i}^{n}\right)-f_{\hat{D},R,N}\left(\mu_{i}\right)\right|,\end{array}\] Then we only need to bound \(\left|f_{\hat{D},R,N}\left(\hat{\mu}_{i}^{n}\right)-f_{\hat{D},R,N}\left(\mu_ {i}\right)\right|\). Recall that the structure of the FNN in our hypothesis space \(\mathcal{H}_{(2,3),R,N}\) includes the integration w.r.t. the input distribution \(\mu\) after the second layer, that is, for any \(f\in\mathcal{H}_{(2,3),R,N}\), there always exists a vector of functions \[H_{f}^{(2)}(x)=\sigma\left(F_{f}^{(2)}\sigma(F_{f}^{(1)}x-b_{f}^{(1)})-b_{f}^ {(2)}\right) \tag{4.25}\] and \(c_{f}\), \(F_{f}^{(3)}\), \(b_{f}^{(3)}\) satisfying \(\|F_{f}^{(j)}\|_{\infty}\leq RN^{2}\), \(\|b_{f}^{(j)}\|_{\infty}\leq R\), \(\|c_{f}\|_{\infty}\leq RN\), \(j=1,2,3\) such that \[f(\mu)=c_{f}\cdot\sigma\left(F_{f}^{(3)}h_{f}^{(2)}(\mu)-b_{f}^{(3)}\right)=c_ {f}\cdot\sigma\left(F_{f}^{(3)}\int_{\Omega}H_{f}^{(2)}(x)d\mu(x)-b_{f}^{(3)} \right).\] If we denote a function class \(\mathcal{H}_{2}\) by \[\begin{array}{l}\mathcal{H}_{2}=\Big{\{}H_{2}(x)=\left(\sigma\left(F^{(2)} \sigma\left(F^{(1)}x-b^{(1)}\right)-b^{(2)}\right)\right)_{1}:\\ \|F^{(j)}\|_{\infty}\leq RN^{2},\|b^{(j)}\|_{\infty}\leq R,j=1,2\Big{\}},\end{array} \tag{4.26}\] which is just the first element of the output a second-layer network in the part of the hypothesis space \(\mathcal{H}_{(2,3),R,N}\). Therefore, for any \(f\in\mathcal{H}_{(2,3),R,N}\), there exists a function \(H_{f}\in\mathcal{H}_{2}\), with \[\left(h_{f}^{(2)}(\mu_{i})\right)_{1}=\mathbb{E}_{x_{i}\sim\mu_{i}}\left(H_{f} \left(x_{i}\right)\right),\quad i=1,2,...,m,\] and \[\left(h_{f}^{(2)}(\hat{\mu}_{i}^{n})\right)_{1}=\frac{1}{n}\sum_{j=1}^{n}H_{f} (x_{i,j}),\quad i=1,2,...,m.\] From the proof of Lemma 5 or Lemma 9 (in Appendix), we know that \(\sup_{H_{2}\in\mathcal{H}_{2}}|H_{2}|\leq 3R^{2}N^{4}\), thereby \(\sup_{H_{2}\in\mathcal{H}_{2}}|H_{2}-\mathbb{E}(H_{2})|\leq 6R^{2}N^{4}\). 
Hence, by applying C-inequality to \(\mathcal{H}_{2}\) with \(M_{\mathcal{H}_{2}}=6R^{2}N^{4}\) and using the above notations, we conclude that \[Prob\left\{\sup_{f\in\mathcal{H}_{(2,3),R,N}}\left|\left(h_{f}^{ (2)}(\mu_{i})\right)_{1}-\left(h_{f}^{(2)}(\hat{\mu}_{i}^{n})\right)_{1}\right| >\epsilon\right\}\] \[= Prob\left\{\sup_{f\in\mathcal{H}_{(2,3),R,N}}\left|\mathbb{E}_{x _{i}\sim\mu_{i}}\left(H_{f}\left(x_{i}\right)\right)-\frac{1}{n}\sum_{j=1}^{n }H_{f}(x_{i,j})\right|>\epsilon\right\}\] \[\leq Prob\left\{\sup_{H_{2}\in\mathcal{H}_{2}}\left|\mathbb{E}_{x_{i} \sim\mu_{i}}\left(H_{2}\left(x_{i}\right)\right)-\frac{1}{n}\sum_{j=1}^{n}H_{ 2}(x_{i,j})\right|>\epsilon\right\}\] \[\leq 2\mathcal{N}\left(\mathcal{H}_{2},\frac{\epsilon}{4},\|\cdot\|_ {C(\Omega)}\right)\exp\left\{-\frac{n\epsilon^{2}}{288R^{4}N^{8}}\right\}.\] From the structure of the hypothesis space \(\mathcal{H}_{(2,3),R,N}\), since each row of the connection matrix \(F^{(j)}\) and bias \(b^{(j)}\) satisfies the same symmetric restriction, we observe from the following explicit form \[\sup_{\begin{subarray}{c}\left\|F^{(j)}\right\|_{\infty}\leq RN^ {2}\\ \left\|\phi^{(j)}\right\|_{\infty}\leq R,j=1,2\end{subarray}} \bigg{|}\int_{\Omega}\left(\sigma\left(F^{(2)}\sigma(F^{(1)}x-b^{( 1)})-b^{(2)}\right)\right)_{s}d\mu_{i}(x)\] \[-\frac{1}{n}\sum_{j=1}^{n}\left(\sigma\left(F^{(2)}\sigma(F^{(1) }x_{ij}-b^{(1)})-b^{(2)}\right)\right)_{s}\bigg{|}\] that for any fixed \(\mu_{i}\) and \(\hat{\mu}_{i}=\frac{1}{n}\sum_{j=1}^{n}\delta_{x_{ij}}\), the sup operation w.r.t. \(F^{(j)}\) and \(b^{(j)}\) contribute equally to any \(s\)-th row of the \(2N+3\) rows. Then we know that \[\sup_{f\in\mathcal{H}_{(2,3),R,N}}\left|\left(h_{f}^{(2)}(\mu_{i})\right)_{1}- \left(h_{f}^{(2)}(\hat{\mu}_{i}^{n})\right)_{1}\right|\leq\epsilon\] implies that \[\sup_{f\in\mathcal{H}_{(2,3),R,N}}\left|\left(h_{f}^{(2)}(\mu_{i})\right)_{s}- \left(h_{f}^{(2)}(\hat{\mu}_{i}^{n})\right)_{s}\right|\leq\epsilon,\quad\forall s =1,2,\ldots,2N+3,\] which then implies that \(\sup_{f\in\mathcal{H}_{(2,3),R,N}}\left\|h_{f}^{(2)}(\mu_{i})-h_{f}^{(2)}(\hat{ \mu}_{i}^{n})\right\|_{\infty}\leq\epsilon\). 
Then it follows that \[\sup_{f\in\mathcal{H}_{(2,3),R,N}}|f(\mu_{i})-f(\hat{\mu}_{i}^{n})|\] \[\leq \sup_{f\in\mathcal{H}_{(2,3),R,N}}\|c_{f}\|_{1}\left\|\sigma\left(F _{f}^{(3)}h_{f}^{(2)}(\mu_{i})-b_{f}^{(3)}\right)-\sigma\left(F_{f}^{(3)}h_{f} ^{(2)}(\hat{\mu}_{i}^{n})-b_{f}^{(3)}\right)\right\|_{\infty}\] \[\leq (2N+3)\sup_{f\in\mathcal{H}_{(2,3),R,N}}\|c_{f}\|_{\infty}\left\| \sigma\left(F_{f}^{(3)}h_{f}^{(2)}(\mu_{i})-b_{f}^{(3)}\right)-\sigma\left(F_ {f}^{(3)}h_{f}^{(2)}(\hat{\mu}_{i}^{n})-b_{f}^{(3)}\right)\right\|_{\infty}\] \[\leq (2N+3)RN\|F_{f}^{(3)}\|_{\infty}\sup_{f\in\mathcal{H}_{(2,3),R,N }}\left\|h_{f}^{(2)}(\mu_{i})-h_{f}^{(2)}(\hat{\mu}_{i}^{n})\right\|_{\infty} \leq 5R^{2}N^{4}\epsilon.\] Thus we have that, for \(i=1,2,...,m\), \[\text{\it Prob}\left\{\sup_{f\in\mathcal{H}_{(2,3),R,N}}|f(\mu_{i})-f(\hat{ \mu}_{i}^{n})|>5R^{2}N^{4}\epsilon\right\}\leq 2\mathcal{N}\left(\mathcal{H}_{2}, \frac{\epsilon}{4},\|\cdot\|_{C(\Omega)}\right)\exp\left\{-\frac{n\epsilon^{2 }}{288R^{4}N^{8}}\right\}.\] Now combine all the \(m\) terms with \(i=1,2,\ldots,m\), we have \[\text{\it Prob}\left\{\left|I_{3}(\hat{D},\mathcal{H})\right|>20 MR^{2}N^{4}\epsilon\right\}\] \[\leq \text{\it Prob}\left\{\frac{1}{m}\sum_{i=1}^{m}4M\left|f_{\hat{D},R,N}\left(\mu_{i}\right)-f_{\hat{D},R,N}\left(\hat{\mu}_{i}^{n}\right)\right| >20MR^{2}N^{4}\epsilon\right\}\] \[\leq \sum_{i=1}^{m}\text{\it Prob}\left\{\left|f_{\hat{D},R,N}(\mu_{i })-f_{\hat{D},R,N}(\hat{\mu}_{i}^{n})\right|>5R^{2}N^{4}\epsilon\right\}\] \[\leq \sum_{i=1}^{m}\text{\it Prob}\left\{\sup_{f\in\mathcal{H}_{(2,3),R,N}}|f(\mu_{i})-f(\hat{\mu}_{i}^{n})|>5R^{2}N^{4}\epsilon\right\}\] \[\leq 2m\mathcal{N}\left(\mathcal{H}_{2},\frac{\epsilon}{4},\|\cdot\|_ {C(\Omega)}\right)\exp\left\{-\frac{n\epsilon^{2}}{288R^{4}N^{8}}\right\}.\] Finally, replacing \(\epsilon\) by \(\frac{\epsilon}{20MR^{2}N^{4}}\), we get \[\text{\it Prob}\left\{\left|I_{3}(\hat{D},\mathcal{H})\right|>\epsilon\right\} \leq 2m\mathcal{N}\left(\mathcal{H}_{2},\frac{\epsilon}{80MR^{2}N^{4}},\|\cdot\|_ {C(\Omega)}\right)\exp\left\{-\frac{n\epsilon^{2}}{115200M^{2}R^{8}N^{16}} \right\}. \tag{4.27}\] Similarly, we have \[\text{\it Prob}\left\{\left|I_{4}(\hat{D},\mathcal{H})\right|> \epsilon\right\}\leq 2m\mathcal{N}\left(\mathcal{H}_{2},\frac{\epsilon}{80MR^{2}N^{4}}, \|\cdot\|_{C(\Omega)}\right) \tag{4.28}\] \[\exp\left\{-\frac{n\epsilon^{2}}{28800\left(\|h\|_{\infty}+M\right) ^{2}R^{8}N^{16}}\right\}.\] Finally, combining (4.23), (4.24), (4.27) and (4.28), we have \[\mathit{Prob}\left\{\left\|\pi_{M}f_{\hat{D},R,N}-f_{\rho}\right\|_ {\rho}^{2}>2\left\|h-f_{\rho}\right\|_{\rho}^{2}+8\epsilon\right\}\] \[\leq \mathcal{N}\left(\mathcal{H}_{(2,3),R,N},\frac{\epsilon}{16M}, \left\|\cdot\right\|_{\infty}\right)\exp\left\{-\frac{3m\epsilon}{2048M^{2}}\right\}\] \[+\exp\left\{-\frac{m\epsilon^{2}}{2\left(3M+\left\|h\right\|_{ \infty}\right)^{2}\left(\left\|h-f_{\rho}\right\|_{\rho}^{2}+\frac{2}{3} \epsilon\right)}\right\}\] \[+4m\mathcal{N}\left(\mathcal{H}_{2},\frac{\epsilon}{80MR^{2}N^{4 }},\left\|\cdot\right\|_{C(\Omega)}\right)\exp\left\{-\frac{n\epsilon^{2}}{11 5200\max\{\left\|h\right\|_{\infty}^{2},M^{2}\}R^{8}N^{16}}\right\}.\] Thus we complete the proof by utilizing the covering number bound of the set \(\mathcal{H}_{(2,3),R,N}\) and \(\mathcal{H}_{2}\) from Lemma 5 and Lemma 9. ### Proof of Theorem 5 Based on the oracle inequality for distribution regression in Theorem 4, we are now ready to give the proof of Theorem 5. 
Proof of Theorem 5.: According to Theorem 2, there exists \(h^{*}\in\mathcal{H}_{(2,3),R,N}\), such that \[\left\|h^{*}-f_{\rho}\right\|_{\rho}\leq\sup_{\mu\in\mathcal{P}(\Omega)}\left| h^{*}(\mu)-f_{\rho}(\mu)\right|\leq C_{1}N^{-\beta},\] where \(C_{1}=2B_{G}^{\beta}\left|f\right|_{C^{0,\beta}}+\left(3B_{Q}|g|_{C^{0,1}} \right)^{\beta}\left|f\right|_{C^{0,\beta}}\), which also implies that \[\left\|h^{*}\right\|_{\infty}\leq\sup_{\mu\in\mathcal{P}(\Omega)}\left|h^{*}( \mu)-f_{\rho}(\mu)\right|+\sup_{\mu\in\mathcal{P}(\Omega)}\left|f_{\rho}(\mu) \right|\leq C_{1}+M.\] Then utilizing Theorem 4 by taking \(h=h^{*}\), it follows that \[\mathit{Prob}\left\{\left\|\pi_{M}f_{\hat{D},R,N}-f_{\rho}\right\| _{\rho}^{2}>2C_{1}^{2}N^{-2\beta}+8\epsilon\right\}\] \[\leq \exp\left\{T_{1}N\log\frac{16M\widehat{R}}{\epsilon}+T_{2}N\log N -\frac{3m\epsilon}{2048M^{2}}\right\}\] \[+\exp\left\{-\frac{m\epsilon^{2}}{2\left(4M+C_{1}\right)^{2} \left(C_{1}^{2}N^{-2\beta}+\frac{2}{3}\epsilon\right)}\right\}\] \[+\exp\left\{\log 4m+T_{1}N\log\frac{80M\widehat{R}R^{2}N^{4}}{ \epsilon}+T_{2}N\log N-\frac{n\epsilon^{2}}{115200\left(M+C_{1}\right)^{2}R^{ 8}N^{16}}\right\}.\] If we restrict \(\epsilon\geq 2C_{1}^{2}N^{-2\beta}\log N\), we have \[\begin{split}&\mathit{Prob}\left\{\|\pi_{M}f_{\hat{D},R,N}-f_{ \rho}\|_{\rho}^{2}>9\epsilon\right\}\leq\mathit{Prob}\left\{\|\pi_{M}f_{\hat{D},R,N}-f_{\rho}\|_{\rho}^{2}>2C_{1}^{2}N^{-2\beta}+8\epsilon\right\}\\ \leq&\exp\left\{T_{1}N\log\frac{8M\widehat{R}N^{2 \beta}}{C_{1}^{2}}+T_{2}N\log N-\frac{3m\epsilon}{2048M^{2}}\right\}+\exp \left\{-\frac{3m\epsilon}{8\left(4M+C_{1}\right)^{2}}\right\}\\ +&\exp\left\{\log 4m+T_{1}N\log\frac{40M\widehat{R}R^{ 2}N^{2\beta+4}}{C_{1}^{2}}+T_{2}N\log N-\frac{n\epsilon^{2}}{115200\left(M+C_ {1}\right)^{2}R^{8}N^{16}}\right\}\\ \leq&\exp\left\{A_{1}N\log N-\frac{3m\epsilon}{2048M ^{2}}\right\}+\exp\left\{-\frac{3m\epsilon}{8\left(4M+C_{1}\right)^{2}} \right\}\\ +&\exp\left\{\log 4m+A_{2}N\log N-\frac{n\epsilon^{2}}{A_{ 3}N^{16}}\right\},\end{split}\] where \[A_{1} =T_{1}\left(\log\frac{8M\widehat{R}}{C_{1}^{2}}+2\beta\right)+T_{ 2},\] \[A_{2} =T_{1}\left(\log\frac{40M\widehat{R}R^{2}}{C_{1}^{2}}+2\beta+4 \right)+T_{2},\] \[A_{3} =115200\left(M+C_{1}\right)^{2}R^{8}.\] If we choose the neural network parameter \[N=\left[A_{4}m^{\frac{1}{2\beta+1}}\right],\] with \(A_{4}=\left(\min\left\{\frac{3C_{1}^{2}}{2048M^{2}A_{1}},\frac{3C_{1}^{2}}{40 96M^{2}A_{2}}\right\}\right)^{\frac{1}{2\beta+1}}\), and \([x]\) denotes the largest integer that is smaller than or equal to \(x\). Moreover, we choose the second stage sample size \[n\geq\left\lceil A_{5}m^{\frac{4\beta+17}{2\beta+1}}\right\rceil,\] with \(A_{5}=\frac{3A_{3}A_{4}^{2\beta+16}}{4096M^{2}C_{1}^{2}}\), and \(\lceil x\rceil\) denotes the smallest integer that is greater than \(x\). 
Then, when \(m\) satisfies the restriction that \[\log 4m\leq\frac{3C_{1}^{2}A_{4}^{-2\beta}}{4096M^{2}}m^{\frac{1}{2\beta+1}}=A_{6}m^{\frac{1}{2\beta+1}}, \tag{4.29}\] we conclude that \[\begin{split}&\mathit{Prob}\left\{\|\pi_{M}f_{\hat{D},R,N}-f_{\rho}\|_{\rho}^{2}>9\epsilon\right\}\leq\exp\left\{\frac{3m\epsilon}{4096M^{2}}-\frac{3m\epsilon}{2048M^{2}}\right\}+\exp\left\{-\frac{3m\epsilon}{8\left(4M+C_{1}\right)^{2}}\right\}\\ &+\exp\left\{\frac{3m\epsilon}{8192M^{2}}+\frac{3m\epsilon}{8192M^{2}}-\frac{3m\epsilon}{2048M^{2}}\right\}\leq 3\exp\left\{-\frac{3m\epsilon}{256\left(4M+C_{1}\right)^{2}}\right\}.\end{split}\] Take \(t=9\epsilon\); then when \(t\geq 18C_{1}^{2}N^{-2\beta}\log N\), we have \[\mathit{Prob}\left\{\left\|\pi_{M}f_{\hat{D},R,N}-f_{\rho}\right\|_{\rho}^{2}>t\right\}\leq 3\exp\left\{-\frac{m^{\frac{2\beta}{2\beta+1}}t}{768\left(4M+C_{1}\right)^{2}}\right\}. \tag{4.30}\] Then, using the property that for a non-negative random variable \(\xi\), \(\mathbb{E}(\xi)=\int_{0}^{\infty}P(\xi>t)dt\), with \(\xi=\left\|\pi_{M}f_{\hat{D},R,N}-f_{\rho}\right\|_{\rho}^{2}=\mathcal{E}\left(\pi_{M}f_{\hat{D},R,N}\right)-\mathcal{E}\left(f_{\rho}\right)\), from (4.30) we obtain \[\mathbb{E}\left\{\mathcal{E}\left(\pi_{M}f_{\hat{D},R,N}\right)-\mathcal{E}\left(f_{\rho}\right)\right\}=\int_{0}^{\infty}Prob\left\{\left\|\pi_{M}f_{\hat{D},R,N}-f_{\rho}\right\|_{\rho}^{2}>t\right\}dt\] \[=\left(\int_{0}^{18C_{1}^{2}N^{-2\beta}\log N}+\int_{18C_{1}^{2}N^{-2\beta}\log N}^{\infty}\right)Prob\left\{\left\|\pi_{M}f_{\hat{D},R,N}-f_{\rho}\right\|_{\rho}^{2}>t\right\}dt\] \[\leq 18C_{1}^{2}N^{-2\beta}\log N+\int_{18C_{1}^{2}N^{-2\beta}\log N}^{\infty}3\exp\left\{-\frac{m^{\frac{2\beta}{2\beta+1}}t}{768\left(4M+C_{1}\right)^{2}}\right\}dt\] \[\leq 18C_{1}^{2}N^{-2\beta}\log N+2304\left(4M+C_{1}\right)^{2}m^{-\frac{2\beta}{2\beta+1}}\] \[\leq A_{7}m^{-\frac{2\beta}{2\beta+1}}\log m,\] where \(A_{7}=18C_{1}^{2}2^{2\beta}A_{4}^{-2\beta}\left(\log A_{4}+\frac{1}{2\beta+1}\right)+2304\left(4M+C_{1}\right)^{2}\). Thus we complete the proof of Theorem 5. ## Acknowledgments The work described in this paper is supported partially by the NSFC/RGC Joint Research Scheme [RGC Project No. N-CityU102/20 and NSFC Project No. 12061160462], Germany/Hong Kong Joint Research Scheme [Project No. G-CityU101/20], Laboratory for AI-Powered Financial Technologies, Hong Kong Institute for Data Science, and the CityU Strategic Interdisciplinary Research Grant [Project No. 7020010]. ## Appendix This appendix provides the proof of the continuity of \(f\circ L_{G}^{Q}\) and the bound for the covering number of \(\mathcal{H}_{2}\). Proof of Proposition 2.: It suffices to prove that \(L_{G}^{Q}(\cdot)\) is continuous. We use again the Kantorovich-Rubinstein distance \[W_{1}(\mu,\nu)=\sup_{\psi:\left\|\psi\right\|_{C^{0,1}}\leq 1}\left\{\int_{\Omega}\psi d\mu-\int_{\Omega}\psi d\nu\right\}.\] Now return to the definition of the functional \(L_{G}^{Q}(\cdot)\), which is defined as \[L_{G}^{Q}(\mu)=\int_{\Omega}g(Q(x))d\mu.\] For any polynomial \(Q\) on \(\Omega\), it is easy to see that for any \(x,y\in\Omega\), \(|Q(x)-Q(y)|\leq\|\nabla Q\|_{C(\Omega)}|x-y|\). Then it follows that \(\|g\circ Q\|_{C^{0,1}}\leq|g|_{C^{0,1}}\|\nabla Q\|_{C(\Omega)}\). Combining the above equations yields that \[\left|L_{G}^{Q}(\mu)-L_{G}^{Q}(\nu)\right|\leq|g|_{C^{0,1}}\|\nabla Q\|_{C(\Omega)}W_{1}(\mu,\nu),\forall\mu,\nu\in\mathcal{P}(\Omega).\] Hölder's inequality shows that, when \(p\leq q\), there holds \(W_{p}(\mu,\nu)\leq W_{q}(\mu,\nu)\) for all \(\mu,\nu\in\mathcal{P}(\Omega)\).
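For completeness, this Wasserstein-order monotonicity follows from Jensen's (equivalently, Hölder's) inequality applied to any coupling \(\pi\) of \(\mu\) and \(\nu\); a one-line sketch of the case used in the proofs above is \[W_{1}(\mu,\nu)=\inf_{\pi}\int_{\Omega\times\Omega}|x-y|\,d\pi\leq\inf_{\pi}\left(\int_{\Omega\times\Omega}|x-y|^{p}\,d\pi\right)^{1/p}=W_{p}(\mu,\nu),\quad p\geq 1,\] and the same argument with exponent \(q/p\) gives \(W_{p}\leq W_{q}\) for \(p\leq q\).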
Finally, there holds \[\left|L_{G}^{Q}(\mu)-L_{G}^{Q}(\nu)\right|\leq|g|_{C^{0,1}}\|\nabla Q\|_{C(\Omega)}W_{p}(\mu,\nu),\forall\mu,\nu\in\mathcal{P}(\Omega),\] which shows the continuity of \(L_{G}^{Q}(\cdot)\). Now we give a bound for the covering number \(\mathcal{N}\left(\mathcal{H}_{2},\epsilon,\left\|\cdot\right\|_{C(\Omega)}\right)\). **Lemma 9**.: _For \(R\geq 1,N\in\mathbb{N}\), and \(\widehat{R}=3R^{4}(10n_{q}+d+18)\), there exist two constants \(T_{1},T_{2}\) depending on \(d,q\) such that_ \[\log\mathcal{N}\left(\mathcal{H}_{2},\epsilon,\left\|\cdot\right\|_{C(\Omega)}\right)\leq T_{1}N\log\frac{\widehat{R}}{\epsilon}+T_{2}N\log N.\] Proof.: Since \(\mathcal{H}_{2}\) is just one component of the output of a classical two-layer neural network, for the first layer, \[\left\|\sigma\left(F^{(1)}x-b^{(1)}\right)\right\|_{\infty}\leq\|F^{(1)}\|_{\infty}\|x\|_{\infty}+\|b^{(1)}\|_{\infty}\leq RN^{2}+R\leq 2RN^{2},\] and for the second layer, we have \[\left\|H_{2}(x)\right\|_{C(\Omega)}\leq\|F^{(2)}\|_{\infty}\left\|\sigma\left(F^{(1)}x-b^{(1)}\right)\right\|_{\infty}+\|b^{(2)}\|_{\infty}\leq 2R^{2}N^{4}+R\leq 3R^{2}N^{4}.\] If we choose another function \(\widehat{H}_{2}\) in the hypothesis space \(\mathcal{H}_{2}\) induced by \(\widehat{F}^{(j)}\) and \(\widehat{b}^{(j)}\), satisfying the restriction that \[\left|F^{(j)}_{ik}-\widehat{F}^{(j)}_{ik}\right|\leq\epsilon,\ \left\|b^{(j)}-\widehat{b}^{(j)}\right\|_{\infty}\leq\epsilon,\] then by the Lipschitz property of ReLU, \[\left\|\sigma\left(F^{(1)}x-b^{(1)}\right)-\sigma\left(\widehat{F}^{(1)}x-\widehat{b}^{(1)}\right)\right\|_{\infty}\leq\left\|F^{(1)}-\widehat{F}^{(1)}\right\|_{\infty}\left\|x\right\|_{\infty}+\left\|b^{(1)}-\widehat{b}^{(1)}\right\|_{\infty}\leq(d+1)\epsilon.\] It follows that \[\left\|H_{2}(x)-\widehat{H}_{2}(x)\right\|_{C(\Omega)}\leq\left\|F^{(2)}\left(\sigma\left(F^{(1)}x-b^{(1)}\right)-\sigma\left(\widehat{F}^{(1)}x-\widehat{b}^{(1)}\right)\right)\right\|_{\infty}\] \[+\left\|\left(F^{(2)}-\widehat{F}^{(2)}\right)\sigma\left(\widehat{F}^{(1)}x-\widehat{b}^{(1)}\right)\right\|_{\infty}+\left\|b^{(2)}-\widehat{b}^{(2)}\right\|_{\infty}\] \[\leq RN^{2}(d+1)\epsilon+2RN^{2}n_{q}(2N+3)\epsilon+\epsilon\leq(10n_{q}+d+2)RN^{3}\epsilon=:\widetilde{\epsilon}.\] Therefore, by taking an \(\epsilon\)-net of each free parameter in \(F^{(j)}\) and \(b^{(j)}\), the \(\widetilde{\epsilon}\)-covering number of \(\mathcal{H}_{2}\) can be bounded by \[\mathcal{N}\left(\mathcal{H}_{2},\widetilde{\epsilon},\left\|\cdot\right\|_{C(\Omega)}\right)\leq\left\lceil\frac{2RN^{2}}{\epsilon}\right\rceil^{n_{q}d+qn_{q}+q(2N+3)}\left\lceil\frac{2R}{\epsilon}\right\rceil^{4N+6}\leq\left(\frac{\widehat{R}}{\widetilde{\epsilon}}\right)^{T_{1}N}N^{T_{2}N}, \tag{31}\] where \(\widehat{R},T_{1}\) and \(T_{2}\) are defined in Lemma 5. Thus we finish the proof by taking the logarithm.
2308.11160
Searching High Temperature Superconductors with the assistance of Graph Neural Networks
Predicting high temperature superconductors has long been a great challenge. A major difficulty is how to predict the transition temperature Tc of superconductors. Recently, progress in material informatics has led to a number of machine learning models predicting Tc, which greatly improves the efficiency of prediction. Unfortunately, prevailing models have not shown adequate physical rationality and generalization ability to find new high temperature superconductors, yet. In this work, in order to give a trustable prediction on the unexplored materials, we built a bond-sensitive graph neural network (BSGNN), which is optimized to process the information of chemical bond and electron interaction in the crystal lattice, to predict the Tc maximum of each type of superconducting materials. On the basis of the domain knowledge considered in the data preparation and algorithm design, our model revealed a relevance between the Tc-Tc maximum and chemical bonds. The results indicate that shorter bond length is favored by high Tc, which is in accordance with previous human experience. Moreover, it also shows that some specific chemical elements are favored by high Tc, which is beyond what human experts already knew. It gives a convenient guidance for searching high temperature superconductors in materials database, by ruling out the materials that could never have high Tc.
Liang Gu, Yang Liu, Pin Chen, Haiyou Huang, Ning Chen, Yang Li, Yutong Lu, Yanjing Su
2023-08-22T03:34:18Z
http://arxiv.org/abs/2308.11160v4
# Predicting Transition Temperature of Superconductors with Graph Neural Networks ###### Abstract Predicting high temperature superconductors has long been a great challenge. The difficulty lies in how to predict the transition temperature (\(\mathbf{T_{c}}\)) of superconductors. Although recent progress in material informatics has led to a number of machine learning models predicting \(\mathbf{T_{c}}\), prevailing models have not yet shown adequate generalization ability and physical rationality to find new high temperature superconductors. In this work, a bond sensitive graph neural network (BSGNN) was developed to predict the \(\mathbf{T_{c}}\) of various superconductors. In BSGNN, communicative message passing and graph attention methods were utilized to enhance the model's ability to process bonding and interaction information in the crystal lattice, which is crucial for superconductivity. Consequently, our results revealed the relevance between chemical bond attributes and \(\mathbf{T_{c}}\). They indicate that shorter bond lengths are favored by high \(\mathbf{T_{c}}\). Meanwhile, some specific chemical elements with relatively large van der Waals radii are favored by high \(\mathbf{T_{c}}\). This gives convenient guidance for searching for high temperature superconductors in materials databases, by ruling out the materials that could never have high \(\mathbf{T_{c}}\). **Keywords: materials informatics, machine learning, graph neural network, superconductivity, superconductors, transition temperature** ## 1 Introduction As the practical application of superconductors is severely constrained by the low working temperature, hunting for new superconductors with higher transition temperature (\(T_{c}\)) has been a long-cherished dream for generations. However, the discovery of new high temperature superconductors (HTS) is still a significant challenge. [1, 2, 3, 4, 5, 6, 7] The \(T_{c}\) of superconductors varies with a good number of interrelated factors [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], leading to the difficulty in predicting the \(T_{c}\) of unexplored materials. In this context, machine learning (ML) has recently been exploited in the research on superconductors, with the hope of giving guidance and inspiration for finding new HTS. In the past few years, plenty of ML models were developed to predict the \(T_{c}\) of superconductors or the existence of superconductivity. [24, 25, 26, 27, 28, 29, 30, 31, 32] For example, models predicting \(T_{c}\) were trained with 10,000+ data points mainly derived from the SuperCon database [33]. Several algorithms, including random forest (RF) [24], atom table convolutional neural network (ATCNN) [27, 30] and convolutional gradient boosting decision tree (ConvGBDT) [34], have achieved predictive scores as good as \(R^{2}\sim 0.9\). The ML models were then used to identify HTS among the materials in open access databases such as the Inorganic Crystallographic Structure Database (ICSD) [35] and the Crystallography Open Database (COD) [36]. Unfortunately, high-scoring models have so far given inconsistent results when proposing candidates of HTS, and most of those candidates have not yet been verified experimentally. Therefore, new advanced algorithms and models with better generalization ability and physical rationality are called for. In this study, we developed a bond sensitive graph neural network (BSGNN) algorithm with optimized graph structures and message passing methods, so as to adapt it to the problem of superconductivity.
Being good at processing structure information and interaction information, general-purpose graph neural network (GNN) models in materials science have shown impressive performance in predicting some material properties [37, 38, 39, 40]. There are also GNN models predicting conventional superconductivity [31], but there is a lack of models powerful enough to predict new HTS. Despite the abundance of GNN architectures [41], it is still unclear which architectures work well on superconductivity. Here, according to the domain knowledge about superconductivity, we focused on the information of bonding and interaction in the crystal lattice, which are known to be crucial factors influencing superconductivity. Three methods, called nearest-neighbors-only graph representation (NGR), communicative message passing (CMP) and graph attention (GAT), were integrated into our GNN algorithm. Regression models to predict the \(T_{c}\) of superconductors were trained. With the help of BSGNN, we explored the entire ICSD to find potential HTS materials. Then, the model was used to predict some compressed hydrides as well as a series of conceived binary compounds with different lattice constants. The results showed that a shorter bond length of specific atom pairs in the lattice is essential for HTS, implying that our model has learned the correlation between the \(T_{c}\) and pressure. Our work suggests that GNN models can have better interpretability and generalization ability in predicting superconductors and can provide substantial help to materials scientists. ## 2 Results and discussion ### Modeling It is well known that the crystalline structure is crucial for superconductivity. How to deal with the information of interaction between atoms or electrons in the crystal lattice, that is, the bond information, is vital for a ML model predicting superconductors. So, three modules were built into our GNN code to capture the bond information. As shown in Fig. 1, in the nearest-neighbors-only graph representation (NGR) module, which encodes the crystalline structures into crystal graphs, only the chemical bonds between closest neighboring atoms were considered. Each node in the graph represents an atom, while each edge represents a chemical bond between closest neighbors. There are 16 features (various atomic attributes) on each node and one feature (normalized bond length) on each edge (see more details in section \(Methods\)). In the communicative message passing (CMP) module [42], information exchange between nodes and their adjacent edges is considered. In the graph attention (GAT) module [43], attention coefficients are computed to weight the messages passed between neighbors (see Fig. 1). All three modules focus on the local interplay of chemical bonds. Regression models predicting \(T_{c}\) (actually \(\log_{2}T_{c}\)) were trained and tested with the split datasets. Figure 2 shows the result of the best of five models. Those five models achieved an average predictive score of \(R^{2}=0.85\pm 0.05\). Table 1 shows the predictions for three superconductors found in the last few years, which are not in the input data. It can be seen that our GNN model has a good performance, not only in predicting various superconducting materials, but also in predicting unexplored materials (unseen data to the model).
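To make the message passing concrete, the following is a minimal sketch of a GAT-style, bond-aware layer of the kind described above, written in plain PyTorch. The layer sizes, the attention form, and all identifiers are our illustrative assumptions, not the authors' BSGNN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BondAttentionLayer(nn.Module):
    """One GAT-style message passing step over a crystal graph whose nodes
    carry 16 atomic features and whose edges carry a normalized bond length
    (a sketch, not the BSGNN reference code)."""

    def __init__(self, node_dim=16, edge_dim=1, hidden=32):
        super().__init__()
        self.msg = nn.Linear(2 * node_dim + edge_dim, hidden)  # bond message
        self.att = nn.Linear(hidden, 1)                        # attention logit
        self.upd = nn.Linear(node_dim + hidden, node_dim)      # node update

    def forward(self, h, edge_index, e):
        # h: (n_nodes, node_dim), e: (n_edges, edge_dim),
        # edge_index: (2, n_edges) listing (source, target) atoms of each bond
        src, dst = edge_index
        m = F.relu(self.msg(torch.cat([h[src], h[dst], e], dim=-1)))
        logits = self.att(m).squeeze(-1)
        alpha = torch.empty_like(logits)
        for v in dst.unique():              # softmax over the bonds of each atom
            mask = dst == v
            alpha[mask] = F.softmax(logits[mask], dim=0)
        agg = torch.zeros(h.size(0), m.size(-1), dtype=m.dtype)
        agg.index_add_(0, dst, alpha.unsqueeze(-1) * m)
        return F.relu(self.upd(torch.cat([h, agg], dim=-1)))

# toy usage: a 3-atom cell with bonds 0-1 and 1-2 (both directions)
h = torch.randn(3, 16)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
e = torch.rand(4, 1)                                  # normalized bond lengths
print(BondAttentionLayer()(h, edge_index, e).shape)   # torch.Size([3, 16])
```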
\begin{table} \begin{tabular}{l l l l} \hline No. & material & \(T_{c}^{exp}\) (K) & \(T_{c}^{pred}\) (K) \\ \hline 1 & CaH\({}_{6}\) @172 GPa & 215 & 249 \\ 2 & Ti @248 GPa & 26 & 16 \\ 3 & CsV\({}_{3}\)Sb\({}_{5}\) & 2.3 & 11 \\ \hline \end{tabular} \end{table} Table 1: Three superconductors discovered recently (not in train data). Figure 1: Three critical modules in BSGNN: nearest-neighbors-only graph representation (NGR) module, communicative message passing (CMP) module, and graph attention (GAT) module. \(v_{n}\) and \(e_{uv}\) represent the nodes and edges, respectively. \(h_{n}\) and \(h_{uv}\) represent the node features and edge features, respectively. \(\alpha_{uv}\) denotes the attention coefficients. \(n\), \(u\), \(v\) are indices of nodes. ### Screening high temperature superconductors The model shown in Fig. 2 was used to predict the materials in ICSD. The ICSD contains over 200,000 entries of ordered or disordered crystal structures. Here we take all 110,000+ ordered structures as the data to predict. Table 2 shows the top 10 (highest \(T_{c}\)) candidates of HTS proposed by our GNN model. Unexpectedly, it can be found in Tab. 2, Fig. 3, and the \(Additional\) \(information\) (candidates.csv) that nearly half of the predicted \(T_{c}\) values are higher than 30 K, which is inconsistent with the domain knowledge. Of course, this does not mean our predictions are unreliable. It just means that our model cannot directly predict new high temperature superconductors, because the regression model can only predict how high the \(T_{c}\) could be, but cannot predict whether or not the material is a superconductor at all. More specifically, the model often mistakes insulators for high temperature superconductors. In fact, many materials in ICSD are not superconductors, or even metals, but insulators. The predicted \(T_{c}\) of insulators should be seen as the \(T_{c}\) they could have, if they could be changed into metals by chemical doping or high pressure. Unfortunately, most insulators would not change into metals, no matter how they are doped or compressed. They may have met the necessary conditions for high \(T_{c}\), but simply never get the chance to become superconductors. \begin{table} \begin{tabular}{l l l l} \hline No. & material & \(T_{c}^{pred}\) (K) & category \\ \hline 1 & CF\({}_{4}\) & 271 & molecular crystal \\ 2 & CaZnF\({}_{4}\) & 270 & ionic crystal \\ 3 & C\({}_{2}\)F\({}_{6}\)O\({}_{3}\) & 270 & molecular crystal \\ 4 & NH\({}_{4}\)F & 269 & ionic crystal \\ 5 & MgF\({}_{2}\) & 265 & ionic crystal \\ 6 & LiCaF\({}_{3}\) & 262 & ionic crystal \\ 7 & AlF\({}_{3}\) & 254 & ionic crystal \\ 8 & CaO\({}_{2}\) & 254 & ionic crystal \\ 9 & CaF\({}_{2}\) & 250 & ionic crystal \\ 10 & CaCO\({}_{3}\) & 248 & ionic crystal \\ \hline \end{tabular} \end{table} Table 2: Top 10 HTS candidates (highest \(T_{c}^{pred}\)), proposed by our GNN model. Figure 2: Model performance. Colored points: test data (randomly 10% data). So, we need a materials screening step to rule out the unchangeable insulators (mostly ionic crystals and molecular crystals). Here, we used the energy gap data (given by DFT calculations) in MATGEN [42] to identify insulators. Materials with \(E_{g}>2\) eV were removed from the candidate list. Then, we further screened the insulators remaining in the HTS candidate list manually: the ionic crystals and molecular crystals were removed. Table 3 shows the top 10 candidates after manual screening.
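The automated part of this screening can be sketched as a simple join-and-filter pass; the file and column names below are hypothetical placeholders for illustration, not the authors' actual data layout.

```python
import pandas as pd

# hypothetical inputs: model predictions over ICSD and DFT band gaps from MATGEN
preds = pd.read_csv("icsd_tc_predictions.csv")  # columns: icsd_id, Tc_pred
gaps = pd.read_csv("matgen_band_gaps.csv")      # columns: icsd_id, E_g

df = preds.merge(gaps, on="icsd_id", how="left")
# drop clear insulators (E_g > 2 eV); entries without a computed gap are kept
# and passed on to the subsequent manual screening
candidates = df[df["E_g"].isna() | (df["E_g"] <= 2.0)]
print(candidates.sort_values("Tc_pred", ascending=False).head(10))
```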
### Understanding the GNN model from the predictions In order to understand what the model has learned, we looked for patterns in its predictions. For this purpose, there is no need to box ourselves into the materials that have been experimentally observed. So, we conceived a series of binary compounds with the sphalerite structure (space group: F\(\bar{4}\)3m, No. 216) and with different lattice parameters. We chose the sphalerite structure because in this structure there is only one type of chemical bond (for example, the Ga-As bond is the only one in GaAs, and the Ga-Ga and As-As pairs are too far apart to form a substantial bond). The \(T_{c}\) of these crystals of binary compounds (CBC), if it exists, can thus be considered to be contributed solely by that bond. They are the purest systems in which to see the influence of the chemical bond on superconductivity. Most of these CBC are not superconducting, not metallic, or even not thermodynamically stable. But by predicting these (mostly hypothetical) materials, we can see the influence of bond length and elements on the \(T_{c}\). The meaning of the predictions for these CBC is how high their \(T_{c}\) maxima could be, if they were superconductors. Figure 4a shows a result of the predictions of CBC. Each \(T_{c}\) value in Fig. 4a represents the theoretical \(T_{c}\) maximum of the CBC consisting of the two chemical elements in the heat map. It can be found that the \(T_{c}\) varies with the combination of elements. The chemical elements of the two atoms forming the bond (not each single element, but the element combination) are vital for high \(T_{c}\). Some element combinations are able to support high \(T_{c}\), whereas some others never had a chance. Meanwhile, Fig. 4b shows that shorter bonds usually lead to higher \(T_{c}\), which is consistent with common sense and indicates that our model has indeed learned the dependence of \(T_{c}\) on bond length. On this basis, we can predict the \(T_{c}\) of superconductors at high pressure with confidence. Please note that what is shown in the heat map (Fig. 4a) is not the contribution of each element to the \(T_{c}\), but of each combination of two elements. Here, it is inappropriate to discuss any one element alone. Still, we can see the differentiation between elements by calculating the maximal or average value of each row or column in the heat map, as shown in Fig. 4c. It suggests that some elements are more favorable than others for higher \(T_{c}\). In particular, alkali, alkaline earth, chalcogen, and halogen elements are preferred.
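The construction of these conceived crystals can be sketched as follows, assuming pymatgen is available; the vdW radii shown and the `predict()` call (standing in for a trained BSGNN model) are illustrative placeholders, not the authors' code.

```python
# A sketch of generating the conceived binary compounds (CBC).
from pymatgen.core import Lattice, Structure

VDW = {"Ga": 1.87, "As": 1.85}  # van der Waals radii in angstroms (illustrative)

def sphalerite(el1, el2, frac):
    """Build an F-43m binary crystal whose nearest-neighbor bond length is
    frac * (r1_vdW + r2_vdW); in sphalerite that distance equals a*sqrt(3)/4."""
    d = frac * (VDW[el1] + VDW[el2])
    a = 4.0 * d / 3 ** 0.5
    return Structure.from_spacegroup(
        "F-43m", Lattice.cubic(a), [el1, el2], [[0, 0, 0], [0.25, 0.25, 0.25]]
    )

# for frac in (0.5, 0.7, 0.9):                     # the 50%/70%/90% points of Fig. 4b
#     tc = predict(sphalerite("Ga", "As", frac))   # hypothetical model call
```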
According to the domain knowledge of superconductivity, the factors affecting \(T_{c}\) are numerous. Bond length and element combination are only two of them. Having learned the influence of bond length, our model has surpassed previous ML models, but it still does not go beyond human expertise. Meanwhile, the other knowledge that our model has learned, the influence of the element combination, is new information that had not been noticed and interpreted before. Figure 4: Results of predicting conceived binary compounds. (a) Each \(T_{c}^{pred}\) value represents the predicted \(T_{c}\) of the conceived binary compound consisting of two chemical elements (marked as “element 1” and “element 2”); the bond length between the two nearest neighboring atoms is 50% of the sum of the van der Waals (vdW) radii (\(d=r_{1}^{vdW}+r_{2}^{vdW}\)). (b) Dependence of \(T_{c}\) on bond length. The three data points in each line are 50%, 70%, and 90% of \(d\), respectively. (c) Black and red: the average/maximal predicted \(T_{c}\) for each element, derived from subplot (a); blue: the vdW radius of each element. ### Hunting for superior superconductors in compressed hydrides In addition, since our model has learned the variation of \(T_{c}\) with bond length, it should perform well in predicting superconductors at high pressure. So we used the model shown in Fig. 2 to predict compressed hydrides. Figure 5 shows the results for several typical hydrides, in which YH\({}_{6}\), LaH\({}_{10}\), H\({}_{3}\)S and H\({}_{2}\)S are in the train data, whereas CeH\({}_{9}\) and CaH\({}_{6}\) are unseen data. On this basis, our model can be used to search for new compressed hydrides with higher \(T_{c}\) or lower pressure. ## 3 Conclusion In this study, a bond-sensitive GNN model (BSGNN) was developed to predict the \(T_{c}\) of various superconducting materials. Our work suggests that a well-designed GNN algorithm can achieve rational and reliable results on complex materials problems with unclear mechanisms, such as superconductivity. Owing to the GAT and CMP techniques, BSGNN has effectively learned the dependence of \(T_{c}\) on both bond length and chemical composition. The predictions of BSGNN show that shorter bond lengths, as well as specific elements with relatively larger vdW radii, favor high \(T_{c}\). This illustrates the necessity and importance of considering crystalline structure information when predicting superconductors. As the influence of bond length has been learned, BSGNN should have advantages in predicting superconducting materials under different pressures. With the help of BSGNN, we predicted all the materials in the ICSD and proposed some promising HTS candidates after further manual screening. ## 4 Methods ### Data preparation Our GNN models were trained with the data of 612 superconductors. The crystalline structures of those superconductors are derived from the ICSD [35] (up to 2019), and their \(T_{c}\) values are mainly derived from SuperCon [33] (up to 2014). Some compressed hydrides and some recently discovered superconductors were added to the SuperCon dataset. The intersection of the SuperCon and ICSD datasets contains 1526 materials. Of those materials, we took the entries with \(T_{c}>5\) K (612 entries) as the input data for our GNN model. The 612 input entries were randomly (but not fully randomly) divided into train/test sets in a proportion of 9:1. Before the dataset splitting, the materials family of each entry was labeled. During the dataset splitting, the ratio of each materials family was maintained for both the train set and the test set. As a result, the data distribution is better than in the case of completely random data splitting. Figure 5: Variation of predicted \(T_{c}\) with pressure for several typical compressed hydrides. Solid filled points: experimental values; lightly filled points: predicted values; open points: DFT-calculated values.
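A minimal sketch of the family-stratified 9:1 split described above, assuming parallel lists of entries and family labels; scikit-learn's stratified splitting is one way to realize it, and the names are illustrative rather than the authors' code.

```python
# A sketch of the family-stratified train/test split.
from sklearn.model_selection import train_test_split

def split_by_family(entries, families, seed=0):
    """Keep the proportion of each materials family equal in train and test."""
    train, test = train_test_split(
        entries, test_size=0.1, stratify=families, random_state=seed
    )
    return train, test
```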
There is one edge feature (bond length) and 16 node features in our crystal graph (as shown in Tab. 4). \begin{table} \begin{tabular}{l l} No. & feature name \\ \hline 1 & atomic mass \\ 2 & vdW radius \\ 3 & atomic radius \\ 4 & electronegativity (Pauling) \\ 5 & orbital energy of highest valence electrons \\ 6 & orbital energy of lowest valence electrons \\ 7 & number of unfilled valence orbitals \\ 8 & number of filled valence orbitals \\ 9 & number of unfilled valence \(s\) orbitals \\ 10 & number of filled valence \(s\) orbitals \\ 11 & number of unfilled valence \(p\) orbitals \\ 12 & number of filled valence \(p\) orbitals \\ 13 & number of unfilled valence \(d\) orbitals \\ 14 & number of filled valence \(d\) orbitals \\ 15 & number of unfilled valence \(f\) orbitals \\ 16 & number of filled valence \(f\) orbitals \\ \end{tabular} \end{table} Table 4: Node features. ### Algorithm **NGR module** The strategy of the graph representation in our BSGNN algorithm is called nearest-neighbors-only. Each node in the graph is an atom in the crystal lattice. Each edge in the graph is a chemical bond between nearest neighbors. Here, "nearest neighbors" means a set of neighboring atoms for a centering atom. There is an edge between two neighboring atoms when \(d_{neighbor}<1.2\,d_{nearest}\), where \(d_{neighbor}\) is the distance between those two atoms and \(d_{nearest}\) is the distance to the nearest neighbor for either of the two atoms. **CMP module** Inspired by the work on molecular graphs with enhanced interatomic interactions [45], we apply the CMPNN framework to perform message passing on the aforementioned crystal graphs. The message generation and message update are iterated through the following equations. \[\left\{\begin{aligned} m_{v}^{t+1}&=\sum_{k\in N(v)}h_{kv}^{t}\\ h_{v}^{t+1}&=h_{v}^{t}+m_{v}^{t+1}\\ m_{vw}^{t+1}&=h_{v}^{t+1}-h_{wv}^{t}\\ h_{vw}^{t+1}&=ReLU(h_{vw}^{0}+W_{m}m_{vw}^{t+1})\end{aligned}\right.\] Here \(h_{kv}^{t}\) represents the directed edge vector pointing to node \(v\) at time \(t\), which generates the message \(m_{v}^{t+1}\) for node \(v\) at time \(t+1\). The message is then used to update the hidden-layer representation of node \(v\) at time \(t+1\). Next, the updated node information is used to update all the directed edge vectors in reverse: the message \(m_{vw}^{t+1}\) generated by each directed edge is obtained by subtracting the vector \(h_{wv}^{t}\) of its reverse edge from the starting-node vector \(h_{v}^{t+1}\). The model prevents vanishing gradients by using residual connections between the generated edge messages and the initial edge vectors. This completes the update of all edge vectors, where \(W_{m}\) represents the learnable weight matrix.
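A minimal PyTorch sketch of one CMP iteration implementing the four update equations above; the tensor layout (one row per directed edge, with `rev[i]` indexing the reverse of edge `i`) and the toy two-atom graph are assumptions for illustration, not the authors' implementation.

```python
# A sketch of one CMP message-passing step.
import torch

def cmp_step(h_v, h_e, h_e0, src, dst, rev, W_m):
    m_v = torch.zeros_like(h_v).index_add_(0, dst, h_e)  # m_v = sum of incoming h_kv
    h_v = h_v + m_v                                      # node update
    m_e = h_v[src] - h_e[rev]                            # m_vw = h_v - h_wv
    h_e = torch.relu(h_e0 + m_e @ W_m.T)                 # residual to initial edges
    return h_v, h_e

# Two atoms joined by one bond, i.e. directed edges 0->1 and 1->0.
src, dst, rev = torch.tensor([0, 1]), torch.tensor([1, 0]), torch.tensor([1, 0])
h_v, h_e0, W_m = torch.randn(2, 8), torch.randn(2, 8), torch.randn(8, 8)
h_v, h_e = cmp_step(h_v, h_e0.clone(), h_e0, src, dst, rev, W_m)
```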
#### Attention module The attention layer consists of a local attention layer and a global attention layer. The local attention layer helps the central atom identify the neighboring atoms with greater effect, as follows [46]: \[\boldsymbol{h}^{{}^{\prime}}(u)=\boldsymbol{W}_{1}\boldsymbol{h}(u)+\sum_{v\in N(u)}\alpha_{u,v}\boldsymbol{W}_{2}\boldsymbol{h}(v) \tag{1}\] where the attention coefficients \(\alpha_{u,v}\) are computed as: \[\alpha_{u,v}=softmax(\frac{(\boldsymbol{W}_{3}\boldsymbol{h}(u))^{T}(\boldsymbol{W}_{4}\boldsymbol{h}(v))}{\sqrt{d}}) \tag{2}\] while the global attention layer is used to identify which atoms of the material contribute more to \(T_{c}\), as follows [47]: \[\boldsymbol{z}=\sum_{u\in U}Set2Set(\boldsymbol{h}^{{}^{\prime}}(u)) \tag{3}\] #### Additional information Data and code can be found at [https://github.com/GLinustb/BSGNN](https://github.com/GLinustb/BSGNN). #### Author contributions The study was planned and designed by YL, PC and HH; the machine learning programs were written by LG and PC and checked by YL; the manuscript was prepared by LG, YL, HH and PC. All authors discussed the results and commented on the manuscript. #### Competing interests The authors declare no competing interests. #### Acknowledgements The authors gratefully acknowledge the financial support of the Guangdong Province Key Area R&D Program (2019B010940001).
2310.02430
Episodic Memory Theory for the Mechanistic Interpretation of Recurrent Neural Networks
Understanding the intricate operations of Recurrent Neural Networks (RNNs) mechanistically is pivotal for advancing their capabilities and applications. In this pursuit, we propose the Episodic Memory Theory (EMT), illustrating that RNNs can be conceptualized as discrete-time analogs of the recently proposed General Sequential Episodic Memory Model. To substantiate EMT, we introduce a novel set of algorithmic tasks tailored to probe the variable binding behavior in RNNs. Utilizing the EMT, we formulate a mathematically rigorous circuit that facilitates variable binding in these tasks. Our empirical investigations reveal that trained RNNs consistently converge to the variable binding circuit, thus indicating universality in the dynamics of RNNs. Building on these findings, we devise an algorithm to define a privileged basis, which reveals hidden neurons instrumental in the temporal storage and composition of variables, a mechanism vital for the successful generalization in these tasks. We show that the privileged basis enhances the interpretability of the learned parameters and hidden states of RNNs. Our work represents a step toward demystifying the internal mechanisms of RNNs and, for computational neuroscience, serves to bridge the gap between artificial neural networks and neural memory models.
Arjun Karuvally, Peter Delmastro, Hava T. Siegelmann
2023-10-03T20:52:37Z
http://arxiv.org/abs/2310.02430v1
# Episodic Memory Theory for the Mechanistic Interpretation of Recurrent Neural Networks ###### Abstract Understanding the intricate operations of Recurrent Neural Networks (RNNs) mechanistically is pivotal for advancing their capabilities and applications. In this pursuit, we propose the Episodic Memory Theory (EMT), illustrating that RNNs can be conceptualized as discrete-time analogs of the recently proposed General Sequential Episodic Memory Model. To substantiate EMT, we introduce a novel set of algorithmic tasks tailored to probe the variable binding behavior in RNNs. Utilizing the EMT, we formulate a mathematically rigorous circuit that facilitates variable binding in these tasks. Our empirical investigations reveal that trained RNNs consistently converge to the variable binding circuit, thus indicating universality in the dynamics of RNNs. Building on these findings, we devise an algorithm to define a _privileged basis_, which reveals hidden neurons instrumental in the temporal storage and composition of variables -- a mechanism vital for the successful generalization in these tasks. We show that the privileged basis enhances the interpretability of the learned parameters and hidden states of RNNs. Our work represents a step toward demystifying the internal mechanisms of RNNs and, for computational neuroscience, serves to bridge the gap between artificial neural networks and neural memory models. Recurrent Neural Networks Mechanistic Interpretability Memory Models ## 1 Introduction Mechanistic interpretability aims to reverse engineer the intricate workings of neural networks that drive their behavior (Olah, 2022). At the core of its significance lies the pressing need for transparency and comprehension in an era where AI-driven systems have become ubiquitous in real-world applications (Christian, 2021; Sears, 2021; Bostrom, 2014; Muller and Bostrom, 2013). While these systems demonstrate remarkable proficiency, their inherently black-box nature often renders them inscrutable (Alishahi et al., 2019; Buhrmester et al., 2019; Fong and Vedaldi, 2017). Gaining a mechanistic understanding not only builds trust in such systems but also provides insights that can lead to refinement and innovation (Raukur et al., 2022). In essence, mechanistic interpretability is not just about demystifying AI; it's about harnessing its potential responsibly and efficiently. Current approaches to mechanistic interpretability focus on neural networks without any long-term temporal behavior. On the other hand, Recurrent Neural Networks (RNNs) pose a unique challenge - the task relevant information is stored in a hidden state that evolves over time. This raises the question: _How is information reliably stored and processed in an evolving hidden state and how is the hidden state dynamics connected to the computations performed?_ To answer this question, we draw inspiration from memory models in computational neuroscience. First, we show how autonomously evolving RNNs can be interpreted as performing episodic memory retrieval in Section 3. This establishes the connection between existing RNN architectures and neurocomputational memory models, and forms the foundational principle for the Episodic Memory Theory (EMT). In Section 4, we formulate a class of algorithmic tasks to probe the variable binding behavior of RNNs. 
Section 5 introduces _variable memories_, which are linear subspaces capable of symbolically binding and recursively composing information, and uses them to reverse-engineer variable binding mechanisms in RNNs. The experimental results reveal consistent convergence to the proposed mechanisms, adding evidence to the _universality hypothesis_ in mechanistic interpretability (Olah et al., 2020; Li et al., 2015). In Section 6, we build on the empirical results and propose an algorithm to construct a privileged basis grounded in the _variable memories_. For the first time, this basis unveils _hidden neurons_, which exist in superposition within the RNN hidden state and actively participate in the storage and composition of variables. ## 2 Related Works **Dynamical Systems Interpretation of RNNs**: Current approaches to interpreting RNNs consider them as non-linear dynamical systems and apply linearization around fixed or slow-changing points to reveal their behavior (Marschall and Savin, 2023; Sussillo and Barak, 2013). The preliminary step in this analysis involves finding fixed points and slow-changing points using optimization algorithms and linearizing around them. The phase space flow is assembled piecemeal from each linearized region. The exploration of the long-term behavior of these regions is undertaken through the eigen-spectrum analysis of the corresponding linearized dynamical systems (Strogatz, 1994), providing insights into the dynamics of convergence, divergence, stability, or spiraling (Rowley et al., 2009; Kim, 1996). However, this method becomes intractable when there are many dimensions exhibiting non-convergent behaviors. The proposed EMT generalizes this approach and enables interpretation even when the number of non-converging dimensions is arbitrarily large. **Mechanistic Interpretability**: Mechanistic interpretability seeks to reverse-engineer neural networks to expose the underlying mechanisms enabling them to learn and adapt to previously unencountered conditions. The prevailing strategy involves examining the networks' internal "circuits" (Conmy et al., 2023; Wang et al., 2022; Cammarata et al., 2020). Researchers have found that applying these interpretability methods to large networks, such as transformers (Vaswani et al., 2017) handling complex tasks in natural language processing and vision, faces the challenge that the features to be modeled in internal circuits are unclear. To address this, they create toy models with clearly defined features essential for task resolution. Probing models trained on toy tasks has produced supporting evidence for prevalent hypotheses. Some of the notable hypotheses are _universality_ (Chughtai et al., 2023; Li et al., 2015), that models learn similar features and circuits across different models when trained on similar tasks; _bottleneck superposition_ (Elhage et al., 2022), a mechanism for storing more information than the available dimensions; and _composable linear representations_ (Cheung et al., 2019), the use of linear spaces in feature representation. Despite these advancements, current approaches remain confined to networks without a recurrent state, like MLPs and transformers. Recurrent architectures, which maintain and recursively update a hidden state for task-related information processing, present a unique challenge: information needs to be stored and processed over time (Cruse, 1996; Hochreiter and Schmidhuber, 1997). Figure 1: **Equivalence between Episodic Memory Models and Variable Binding**: **A**. 
Episodic Memory models aim to uncover the cognitive processes involved in the retrieval of subjective past experiences, often stored as a temporal sequence of memory items. The illustration shows the retrieval of a personal experience when an apple is observed. **B**. The illustration shows the application of the Episodic Memory Theory, which poses that learning the addition operation over arbitrary numbers, a task involving variable binding, can be considered equivalent to episodic memory retrieval where the computations are performed over variables instead of predetermined memories. The abstract addition operation is stored in the synapses in the form of _how_ the variables interact with each other to produce the desired result. **Neural Memory Models**: Developments in memory modeling have revealed links between deep neural networks and memory models. The first investigation of this link explored the relationship between Dense Associative Memory and Multi-Layer Perceptrons (MLPs) with various activation functions (Krotov and Hopfield, 2016). Later studies extended this connection to explain the practical computational benefits observed in neural architectures like transformers (Ramsauer et al., 2020). Recently, the traditional memory models capable of associative recall of single memories were expanded to retrieving memory sequences (Karuvally et al., 2022; Chaudhry et al., 2023). This expansion allows memories that previously did not interact in the single-memory retrieval context to interact and produce complex temporal behavior (Kleinfeld, 1986; Kleinfeld and Sompolinsky, 1988). A fundamental assumption in memory modeling (in both single and sequence retrieval) is that the network's memories are predetermined and stored in the synapses. This assumption limits the models' applicability to mechanistic interpretability, which requires the symbolic binding of memories typically available only during inference. In EMT, we will demonstrate that by lifting the fixed-memory assumption, these memory models can be used to show how the binding of external information happens in RNNs, revealing the synergistic relationship between the three fields: memory modeling, recurrent neural networks, and mechanistic interpretability. ## 3 RNN as Episodic Memory We show that RNNs can be viewed as a discrete-time analog of a memory model called the General Sequential Episodic Memory Model (GSEMM) (Karuvally et al., 2022). To be applicable to the more general setting of RNNs, we slightly modify the GSEMM formulation with a pseudoinverse learning rule instead of the Hebbian learning rule for the synapses. This modification allows us to deal with more general memories (linearly independent vectors) than orthogonal vectors (Chaudhry et al., 2023; Personnaz et al., 1986). We discretize the continuous-time GSEMM model using a forward Euler approximation under the conditions that the timescale parameters are \(\mathcal{T}_{f}=1\), \(\mathcal{T}_{h}=0\), and \(\mathcal{T}_{d}=0\) (see Appendix A.2 for details). The final discrete system we obtain is \(V_{f}(t+1)=\Xi\,(I+\Phi^{\top})\,\Xi^{\dagger}\,\sigma_{f}(V_{f}(t))\). The columns of \(\Xi\) are the _stored memories_ of the model, and \((I+\Phi^{\top})\), which we write simply as \(\Phi^{\top}\) in what follows, is the matrix representing sequential memory interactions between these stored memories. The neural state variable is \(V_{f}\in\mathbb{R}^{d\times 1}\). 
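As a worked illustration of this discrete update, the following numpy sketch iterates \(V_{f}(t+1)=\Xi\,(I+\Phi^{\top})\,\Xi^{\dagger}\,\sigma_{f}(V_{f}(t))\); the random memories and the cyclic toy interaction matrix are assumptions for illustration, not values from the paper.

```python
# A toy numerical instance of the discrete GSEMM update.
import numpy as np

rng = np.random.default_rng(0)
d, K = 16, 4                               # neuron count, number of memories
Xi = rng.standard_normal((d, K))           # columns: linearly independent memories
Xi_dag = np.linalg.pinv(Xi)                # pseudoinverse learning rule
Phi = np.roll(np.eye(K), 1, axis=1)        # toy sequential inter-memory interactions

def gsemm_step(V_f):
    """V_f(t+1) = Xi (I + Phi^T) Xi^dag sigma_f(V_f(t)), with sigma_f = tanh."""
    return Xi @ (np.eye(K) + Phi.T) @ Xi_dag @ np.tanh(V_f)

V_f = rng.standard_normal(d)
for _ in range(5):
    V_f = gsemm_step(V_f)
```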
The discrete system we derived is topologically conjugate to the update equation of an Elman RNN under a homeomorphic transformation if the norm of the matrix is bounded by 1. That is, if \(||\Xi\,\Phi^{\top}\,\Xi^{\dagger}||\leq 1\), we can consider a new state variable \(h=\sigma_{f}(V_{f})\) such that \[h(t)=\sigma_{f}(\Xi\,\Phi^{\top}\,\Xi^{\dagger}h(t-1))\,. \tag{1}\] This conjugate system has equations that are equivalent to the hidden-state update equation of an Elman RNN without bias, \(h(t+1)=\sigma_{f}(W_{hh}h(t))\). A corollary of the equivalence between the sequence memory model and Elman RNNs is that, if we decompose the weight matrix of the RNN in terms of the memories such that \(W_{hh}=\Xi\,\Phi^{\top}\,\Xi^{\dagger}\), the RNN computations can be interpreted as the retrieval of episodic memories temporally transitioning according to the rules encoded in \(\Phi\). This result, along with the previous results (Krotov and Hopfield, 2016; Ramsauer et al., 2020) connecting memory models with feedforward neural networks, forms the foundation of the Episodic Memory Theory of learned neural networks: **The Episodic Memory Theory (EMT) poses that the inner workings of learned neural networks can be revealed by analyzing the learned inter-memory interactions and their effect in the space of stored memories (Figure 1).** Figure 2: **Circuit of variable binding in an illustrative task of four variables, each with five dimensions**: **A**. The hidden state at time \(t\) has subspaces capable of storing external information in their activities. The colors depict the vector components (or activity) of the hidden state in the variable memory basis. The linear operator \(\Phi\) acts on the hidden state such that these activities are copied between variable memories, except for \(\Psi_{4}\), which implements the linear operation \(f\). **B**. The \(N^{\text{th}}\) variable's contents are read during the output phase using the appropriate linear operator \(W_{r}=\Psi_{N}^{\star}\). ``` Require: \(s\)\(\triangleright\) number of time-steps in the input phase Require: \(0\leq\alpha\leq 1\) Require: \(W_{hh},W_{uh},W_{r}\)\(\triangleright\) learned parameters of the RNN \(\Psi_{s}\leftarrow\alpha W_{uh}+(1-\alpha)W_{r}^{*}\) for \(k\in\{s-1,s-2,\ldots,1\}\) do \(\Psi_{k}\leftarrow\alpha W_{hh}^{s-k}W_{uh}+(1-\alpha)\left(\left(W_{hh}^{\top}\right)^{k}W_{r}^{*}\right)\) \(\Psi_{k}\leftarrow\Psi_{k}-EE^{*}\Psi_{k}\quad\forall E:\lambda(E)<1\)\(\triangleright\) remove the components along transient directions endfor \(\Psi\leftarrow[\Psi_{1};\ldots;\Psi_{s}]\) \(\Psi^{\perp}\leftarrow\text{PC}(\{h(t)\}-\Psi\,\{\tilde{h}(t)\})\)\(\triangleright\) principal components of the residual hidden states from simulations ``` **Algorithm 1** Algorithm for computing variable memories of trained linear RNNs ## 4 Variable Binding Tasks Simple algorithmic tasks are very useful for mechanistic interpretability, as they provide a controllable empirical setup compared to complex and noisy real-world scenarios. We formulate a class of tasks with input and output phases. At each timestep of the input phase, external information is provided _to_ the RNN. During the output phase, the RNN needs to utilize this external information to synthesize novel outputs at each time step, which are subsequently read _from_ the network as output. This simple two-phase setup closely matches the behavior of NLP tasks like translation and conditional generative modeling, with the noisy real-world features abstracted out. 
Formally, the input phase consists of \(s\) total timesteps, where at each timestep \(t\) a vector of \(d\) dimensions, \(u(t)=\left(u^{1}(t),u^{2}(t),\ldots,u^{d}(t)\right)^{\top}\), is input to the model. We call the vector components \(u^{i}(t)\) the external information that needs to be _stored_ in the RNN hidden state. After the input phase is complete, the zero vector is continually passed as input to the model, so we say the RNN evolves autonomously (without any external input) during the output phase. The future states of the system during the output phase evolve according to the following equation. \[u(t)=f(u(t-1),u(t-2),\ldots u(t-s)),\quad t>s. \tag{2}\] For analytical tractability, we add two restrictions to the variable binding tasks: (1) the composition function \(f\) is a linear function of its inputs; (2) the codomain of \(f\) and the domain of the inputs are binary, \(\in\{-1,1\}\). ## 5 Variable Binding Circuit in RNN Linearization approaches have shown promise in the analysis of RNNs in the neighborhood of fixed points (Sussillo and Barak, 2013). We build our model of variable binding on a linearized RNN defined by the following equations. \[\begin{cases}h(t)=W_{hh}h(t-1)+W_{uh}u(t)\,,\\ y(t)=W_{r}\,h(t)\,.\end{cases} \tag{3}\] We envision that any non-linear RNN can be converted to this form by finding fixed points and applying subsequent linearization (see Appendix A.4 for details). Here, \(W_{hh},W_{uh},W_{r}\) are linear operators, \(h(t)\) is the hidden state, \(u(t)\) is the input, and \(y(t)\) is the output. We use the simplifying assumption that \(W_{hh}\) has sufficient capacity to represent all the variables required for the variable binding tasks (no requirement for bottleneck superposition (Elhage et al., 2022)). We further assume that \(h(0)\) is the zero vector. To handle the basis change suggested by the EMT view of RNNs, we use Dirac and Einstein notations from abstract algebra (Appendix A.1). This notation has two benefits: (1) we are able to formalize variable binding mechanisms independent of the basis, and (2) it enables a very clean and concise description of all the components of the circuit. Formally, we write any vector \(v\) as \(\ket{v}=\bra{\epsilon^{i}}\ket{v}\,\ket{\epsilon_{i}}=v^{i}\ket{\epsilon_{i}}\), where \(\ket{\epsilon_{i}}\) is the basis in which the vector has components \(v^{i}=\bra{\epsilon^{i}}\ket{v}\). The \(\bra{\epsilon^{i}}\) are the basis _covectors_, defined such that \(\bra{\epsilon^{i}}\ket{\epsilon_{j}}=\delta_{ij}\), where \(\delta_{ij}\) is the Kronecker delta. In this notation, the linear RNN of Equation 3 evolving autonomously is reformulated as follows. \[\ket{h(t)}=\left(\xi_{\mu}^{i}\,\Phi_{\nu}^{\mu}\,(\xi^{\dagger})_{j}^{\nu}\,\ket{e_{i}}\bra{e^{j}}\right)\ket{h(t-1)} \tag{4}\] Here \(\left|e_{i}\right\rangle\) is the standard basis vector that is typically used in simulations. To simplify this system further, we use a new basis \(\left|\psi_{\mu}\right\rangle=\xi_{\mu}^{i}\left|e_{i}\right\rangle\). \[\left|h(t+1)\right\rangle=\left(\Phi_{\nu}^{\mu}\left|\psi_{\mu}\right\rangle\left\langle\psi^{\nu}\right|\right)\left|h(t)\right\rangle \tag{5}\] This new basis allows us to treat the linearized RNN as applying a _single_ linear operator, as opposed to three in the original formulation. In the new basis, the hidden state vector is \(\left|h(t)\right\rangle=h^{\psi_{\mu}}\left|\psi_{\mu}\right\rangle\). 
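A small numpy sketch of the linearized RNN of Equation 3 rolled out over the two-phase protocol of Section 4 (an input phase of \(s\) steps with binary inputs, then autonomous evolution); the random weight matrices are placeholders for trained parameters.

```python
# A sketch of the two-phase rollout of the linearized RNN of Eq. (3).
import numpy as np

rng = np.random.default_rng(0)
D, d, s, T = 32, 4, 3, 6                       # hidden dim, input dim, input steps, total steps
W_hh = rng.standard_normal((D, D)) / np.sqrt(D)
W_uh = rng.standard_normal((D, d))
W_r = rng.standard_normal((d, D))

h = np.zeros(D)                                # h(0) is the zero vector
for t in range(T):
    u = rng.choice([-1.0, 1.0], d) if t < s else np.zeros(d)  # binary inputs, then silence
    h = W_hh @ h + W_uh @ u                    # hidden-state update
    y = W_r @ h                                # readout
```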
We pose that these components \(h^{\psi_{\mu}}\) (also called subspace _activity_) can be set to any external information for solving the task by appropriately interacting with the hidden state, thus behaving like variables in computation. The collection of vectors \(\{\Psi_{i}\}=\{\left|\psi_{\mu}\right\rangle:\mu\in\{(i-1)d,\ldots,id\}\}\) that defines a subspace where a variable is stored is called the \(i^{\text{th}}\)_variable memory_. The newly introduced concept of variable memory enables treating the activity in the space collectively, simplifying reasoning about its behavior. In order to retain information, the \(\Phi\) operator must have the necessary mechanisms to handle the storage of information over time. One possibility is the following linear operator \(\Phi\). \[\Phi=\sum_{\mu=1}^{(N-1)d}\left|\psi_{\mu}\right\rangle\left\langle\psi^{\mu+d}\right|+\underbrace{\sum_{\mu=(N-1)d}^{Nd}\Phi_{\nu}^{\mu}\left|\psi_{\mu}\right\rangle\left\langle\psi^{\nu}\right|}_{f(u(t-1),u(t-2),\ldots,u(t-N))}. \tag{6}\] The action of the operator on the hidden state is illustrated in Figure 2A. For all variable memories with index \(i\in\{2,3,4,\ldots,N\}\), the information contents are copied to the variable memory with index \(i-1\). The operator then implements the function \(f\) defined in Equation 2 and stores the result in the \(N^{\text{th}}\) subspace. Any _linear_ function \(f\) of the history can be represented in this framework. This specific setting of the linear operator allows information to be stored in the \(N^{\text{th}}\) variable over \(N\) timesteps reliably for computing the solution to the problem. **Reading Variables**: Once the RNN has performed its computation, the computed information needs to be extracted. RNNs have a linear operator \(W_{r}\), which facilitates the reading of information from \(\left|h(t)\right\rangle\) at consecutive time steps. We propose that \(W_{r}\) has the following form. \[W_{r}=\Psi_{N}^{*}=\sum_{\mu=(N-1)d+1}^{Nd}\left|e_{\mu-(N-1)d}\right\rangle\left\langle\psi^{\mu}\right| \tag{7}\] The reading operation reads the activity of the \(N^{th}\) subspace and outputs it in the standard basis (Figure 2B), so that the output can be read out of the RNN. We do not propose any form for \(W_{uh}\). This is because the operator plays two roles in the RNN behavior: one is to add the input information to the \(N^{th}\) subspace, and the other is to suppress the effect of the \(\Phi\) operator during the input phase (\(\Phi\) actively computes \(f\) even though not all the variables are filled during the input phase). These conflicting roles make proposing a _linear_ form for \(W_{uh}\) intractable. In our experiments, we find that this is not an issue for describing the long-term behavior of the RNN, as \(W_{uh}\) only influences the input phase. **Optimization**: One point to note is that our definition of variable memory so far is not optimized: there are dimensions in certain variables that are irrelevant for future computations and hence unnecessary to store. 
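Before turning to this optimization, the copy-and-compose structure of Equations 6 and 7 can be made concrete with a short numpy sketch; the repeat-copy choice of \(f\) (variable \(N\) reads variable 1) matches the \(\mathcal{T}_{1}\) description in Figure 4, while the dimensions here are arbitrary.

```python
# A sketch of the circuit operators, written in the variable-memory basis.
import numpy as np

def build_phi(N, d, F):
    """Phi of Eq. (6): variables 2..N are copied to their predecessors, and
    the last d rows apply the linear composition f, given as F (d x N*d)."""
    Phi = np.zeros((N * d, N * d))
    Phi[: (N - 1) * d, d:] = np.eye((N - 1) * d)   # copy blocks
    Phi[(N - 1) * d :, :] = F                      # composition f
    return Phi

N, d = 4, 5
F = np.hstack([np.eye(d)] + [np.zeros((d, d))] * (N - 1))   # repeat-copy f
Phi = build_phi(N, d, F)
W_r = np.hstack([np.zeros((d, (N - 1) * d)), np.eye(d)])    # read operator, Eq. (7)
assert np.allclose(np.abs(np.linalg.eigvals(Phi)), 1.0)     # spectrum on the unit circle
```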
This redundant information can be safely discarded without any effect on the network behavior. Concretely, we propose that RNNs learn an optimized representation of the variables: for a basis transform \(M=\left|m_{i}e_{i}\right\rangle\left\langle m^{j}e^{j}\right|\), where each standard basis vector is masked by a mask \(m^{i}\in\{0,1\}\), \[\arg\min_{m^{i}}\sum_{i}m^{i}\,,\text{such that}\,,\,rank(M^{T}\Phi M)=rank(\Phi) \tag{8}\] The final optimized basis is the minimum amount of information that needs to be retained to compute the function \(f\) safely over time. For the rest of the paper, we always consider this optimized basis unless explicitly specified otherwise. ## 6 Algorithm for Computing Variable Memories The mechanisms we elucidate suggest that _variable memories_ can be treated as a privileged and interpretable basis: a special basis in which RNN behavior is interpreted as forming variables that are stored and processed. To compute variable memories, we compare the learned hidden weights \(W_{hh}\) to the interaction matrix \(\Phi\) of the theoretical model in experimental settings. We view the hidden state space in the new basis consisting of \(\Psi=[\Psi_{1};...;\Psi_{s}]\). We also consider an additional orthogonal basis \(\Psi^{\perp}\) for the remaining dimensions of the space not currently explained by the variable binding circuit of Section 5. Once this basis is defined, the action of the learned weights on the variable memories can be extracted from the learned \(W_{hh}\) using the basis transformation \(\Phi_{\text{learned}}^{\top}=\Psi^{*}W_{hh}\Psi\). Any interaction between the variable memories and the non-memory space will be encoded in the matrices \(\Psi^{*}W_{hh}\Psi^{\perp}\) and \((\Psi^{\perp})^{*}W_{hh}\Psi\). 
Building on the intuition from the linear model, we use the learned weights \(W_{hh}\) and \(W_{r}\) to estimate the \(\Psi_{k}\) for the variable memories of the trained RNN. The read weights \(\Psi_{s}=W_{r}^{*}\) define one of the variable memories, and all other subspaces can be found simply by propagating these dimensions _forward_ in time: \(\Psi_{k}=\Phi^{s-k}\Psi_{s}=W_{hh}^{s-k}W_{r}^{*}\). Although the variable memories are defined based on a linear evolution assumption, we found that the method of power-iterating \(W_{hh}\) was effective in defining a variable memory space even for nonlinear RNNs. We specifically chose to define the variable memory dimensions \(\Psi_{k}\) by propagating \(W_{r}^{*}\) forward in time: \[\Psi_{s}=W_{r}^{*},\qquad\Psi_{k}=W_{hh}^{s-k}W_{r}^{*}\quad\text{for}\,\,k<s \tag{9}\] We also removed the projection of each \(\Psi_{k}\) onto the eigenvectors of \(W_{hh}\) whose eigenvalues are less than 1 in magnitude, since they do not contribute to the long-term behavior but may interfere with the basis definition. The theorized interaction matrix \(\Phi\) has eigenvalues that sit on the unit circle in the complex plane. This ensures that iterating \(\Phi\) does not cause the hidden state to contract or expand in the linear model. We expect any eigenvectors of the learned \(W_{hh}\) with eigenvalues inside the unit circle to correspond to _transient_ behavior, associated with dimensions whose activity dies out at later timesteps during inference. To obtain the orthogonal basis \(\Psi^{\perp}\) for the rest of the hidden space, we applied principal component analysis (PCA) to the hidden state dynamics during inference, after removing the projection onto the variable memories. The algorithm to compute variable memories from a linearized RNN is summarized in Algorithm 1. 
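A numpy sketch of this procedure, following Equation 9 and the transient-removal step of Algorithm 1; the pseudoinverse is used for the duals, and the real cast of the transient projector is a numerical convenience (for a real \(W_{hh}\), the transient eigenspace is closed under conjugation).

```python
# A sketch of computing variable memories from a linearized RNN.
import numpy as np

def variable_memories(W_hh, W_r, s):
    Psi_s = np.linalg.pinv(W_r)                 # W_r^*: dual of the read weights
    lam, E = np.linalg.eig(W_hh)
    E_t = E[:, np.abs(lam) < 1.0]               # transient eigenvectors
    P = np.real(E_t @ np.linalg.pinv(E_t))      # projector onto transient directions
    blocks = []
    for k in range(1, s + 1):
        Psi_k = np.linalg.matrix_power(W_hh, s - k) @ Psi_s
        blocks.append(Psi_k - P @ Psi_k)        # drop transient components
    return np.hstack(blocks)                    # Psi = [Psi_1; ...; Psi_s]
```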
## 7 Results ### RNNs consistently converge to the variable binding model In Section 5, we proposed an exact, mathematically rigorous circuit for variable binding capable of storing and processing information over time. To substantiate that this mechanism is indeed learned by RNNs, we trained various RNN configurations, differing in hidden sizes and regularization penalties. After training, the RNNs were linearized, and the eigen-spectrum of the learned \(W_{hh}\) matrix was compared with the theoretical \(\Phi\), as defined in Equation 6. If RNNs learn a representation in alignment with our model, both operators, i.e., the learned \(W_{hh}\) and the theoretical \(\Phi\), are expected to share a portion of their spectrum, as they are similar matrices (i.e., they differ by only a basis change). In this comparison, it is pertinent to evaluate solely the arguments of the spectrum, disregarding the magnitude. The rationale behind this exclusion lies in what the magnitude tells us about the dynamical behavior: it portrays whether a linear dynamical system is diverging, converging, or maintaining consistency along the eigenvector directions. RNNs typically incorporate a squashing non-linearity, such as the tanh activation function, which restricts trajectories that diverge to infinity. Essentially, provided the eigenvalue magnitude remains \(\geq 1\), the complex argument solely determines the overall dynamical behavior of the RNN. Table 1 depicts the average absolute error when various RNN models are trained across \(4\) distinct tasks. The RNNs exhibiting robust generalization tend to consistently converge towards the circuit mechanisms detailed in Section 5. \begin{table} \begin{tabular}{l|l l l|l l l} \hline \hline Task & \multicolumn{3}{c}{hidden size: 64} & \multicolumn{3}{c}{hidden size: 128} \\ & L2: 0.0 & L2: 0.001 & L2: 0.1 & L2: 0.0 & L2: 0.001 & L2: 0.1 \\ \hline \(\mathcal{T}_{1}\) & — (0.97) & — (0.88) & — (0.50) & 0.0005 (1.00) & — (0.93) & — (0.50) \\ \(\mathcal{T}_{2}\) & 0.0075 (1.00) & — (0.85) & — (0.50) & 0.0055 (0.98) & 0.0031 (0.98) & — (0.50) \\ \(\mathcal{T}_{3}\) & 0.0026 (1.00) & 0.0010 (0.97) & — (0.50) & 0.0031 (0.98) & 0.0005 (1.00) & — (0.50) \\ \(\mathcal{T}_{4}\) & 0.0112 (0.94) & 0.0011 (1.00) & — (0.50) & 0.0022 (1.00) & 0.0006 (1.00) & — (0.50) \\ \hline \hline \end{tabular} \end{table} Table 1: **RNNs consistently converge to the variable binding model**: The table shows the MAE in the complex argument between the eigenspectrum of the predicted \(\Phi\) from the variable binding circuit and the empirically learned \(W_{hh}\) on \(4\) tasks across \(20\) seeds under different RNN configurations. This average error is indeterminate if the rank of the theoretical \(\Phi\) differs from that of the empirical \(W_{hh}\). Values in brackets show the average test accuracy of the trained model. For models with high test accuracy (\(>0.94\)), the error in the theoretically predicted spectrum is very low, indicating consistent convergence to the theoretical circuit. A notable exception to this behavior is \(\mathcal{T}_{1}\) with hidden size \(=64\) and \(L2=0\), where the restricted availability of dimensions forces the network to encode variables in bottleneck superposition, resulting in a low-rank representation of the solution. The results also highlight a particular setup, \(\mathcal{T}_{1}\) with hidden size \(=64\) and \(L2=0.0\), which, while not converging to the theoretical mechanism, still achieves high generalization accuracy. \(\mathcal{T}_{1}\) stands out as it necessitates exactly \(64\) dimensions per the optimized theoretical model, matching the exact dimensionality available in the RNN. In this instance, RNNs exhibit a form of bottleneck superposition, a scenario not yet accommodated by the variable binding circuit. Nevertheless, provided there are sufficient dimensions to encapsulate the variable binding circuit, the RNNs tend to converge to it. #### Variable memory reveals hidden neurons storing variable information Variable memories define bases for the storage and processing of variable information within RNNs if the RNNs follow the variable binding circuit. Building on the empirical results on consistent convergence to the variable binding circuit, we compute variable memories using Algorithm 1 for models trained on Repeat Copy (\(\mathcal{T}_{1}\)). In the Repeat Copy task, the RNN must repeatedly output a stream of inputs provided during the input phase. The simulated hidden states of learned RNNs are visualized by projecting the hidden state onto the variable memories: \(\tilde{h}=\Psi^{*}h\). The results shown in Figure 3 reveal that the hidden state is in a superposition of hidden neurons that actively store each variable required to compute the function \(f\) at all points in time. The basis transformation helps to disentangle these superposed variables from the hidden state so that they are easily visualized. Figure 3: **EMT reveals hidden neurons storing task-relevant information over time**: **A**. In the repeat copy task (\(\mathcal{T}_{1}\)), the RNN needs to repeatedly produce an input sequence that is presented. A typical trained hidden state after providing the input does not show any meaningful patterns connected to the input. **B**. The same hidden states, when visualized in the variable memories, reveal the input information being stored as variables and processed according to the variable binding circuit. The actual hidden state is a superposition of these _hidden_ variable memory activities. #### Variable memories enable human interpretability of learned weights In addition to revealing hidden neurons that store and process information over time, variable memories can also be used as bases to view the operations of the learned matrices. The variable memories are carefully constructed such that \(W_{hh}\) converts to the underlying \(\Phi\) when viewed in the basis. Figure 4: **EMT enables human interpretability of learned RNN parameters**: The learned weights, when visualized in the variable memories, take a form that is human-interpretable. For RNNs trained on two sample tasks, \(\mathcal{T}_{1}\) (**A**, left) and \(\mathcal{T}_{2}\) (**B**, right), the weight matrix \(W_{hh}\) converts into a form that reveals the internal mechanisms of how RNNs solve the task. For both tasks, the variables with index \(<8\) copy their contents to the preceding variable. Variable \(8\) actively computes the function \(f\) applied to all the variables stored in the hidden state using the variable binding circuit. For \(\mathcal{T}_{1}\), it is a simple copy of the \(1^{\text{st}}\) variable, and for \(\mathcal{T}_{2}\), it is a linear composition of all the variables. Notably, the circuit for \(\mathcal{T}_{2}\) shows an optimized basis where all the irrelevant dimensions are absent. 
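The projections used in Figures 3 and 4 amount to two basis transformations, sketched below; `Psi` is assumed to stack the variable-memory bases column-wise, with its dual computed by pseudoinverse as in the text.

```python
# A sketch of viewing learned parameters and states in the variable-memory basis.
import numpy as np

def view_in_variable_memories(W_hh, h, Psi):
    Psi_star = np.linalg.pinv(Psi)
    Phi_learned = Psi_star @ W_hh @ Psi   # learned interaction operator
    h_tilde = Psi_star @ h                # disentangled variable activities
    return Phi_learned, h_tilde
```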
As observed in Figure 4, viewing the operations of \(W_{hh}\) in the new basis enables human interpretability in terms of the variable binding circuit and provides a way to influence or "fix" RNN behavior after training. This "fixing" operation can be imagined as changing specific weights of the extracted \(\Phi\) to improve either the variable storage properties or problems in computing \(f\). One practical consideration when computing variable memories is the sensitivity of these results to minor changes in the basis definition. It may be found in some practical cases that the basis transformation does not reveal these interpretable representations even though the spectrum converges to the theoretical circuit. This deviation from the theory is a result of the sensitivity of the basis definition to minor errors in the pseudo-inverse required to compute the dual. It is possible that new empirical procedures may be developed to improve this computation in the future. ## 8 Discussion We presented a novel perspective on Recurrent Neural Networks (RNNs), framing them as dynamical systems performing sequence memory retrieval. We introduced the concept of "variable memories," linear subspaces capable of symbolically binding and recursively composing information. Our approach addresses the limitations of current methods in understanding RNNs, particularly their inscrutability as 'black boxes' and the complexity of spectral analysis in high-dimensional task spaces. We presented a new class of algorithmic tasks designed to probe the variable binding behavior of RNNs. We presented a circuit mechanism capable of recursively storing and composing variables and showed that trained RNNs consistently converge to this circuit, pointing towards universality in the learned models. Building on the empirical evidence, we used variable memories to derive a privileged basis that, for the first time, revealed hidden neurons actively involved in information processing in RNNs. Further, using variable memories, we viewed the learned parameters of an RNN in a human-interpretable manner, enabling reasoning about RNN behavior as repeatedly copying and composing variables. The Episodic Memory Theory and variable memories are versatile enough to be broadly applicable in various scenarios, offering valuable insights for researchers designing new algorithms. One practical application of this analysis can be the development of continual learning algorithms, which can restrict gradients to pre-existing variable memory spaces to minimize catastrophic forgetting of prior tasks. Additionally, in task composition, where a new task is a combination of two existing tasks, the linear spaces of each task can be linearly combined to efficiently solve the composite problem. Another possible application is transfer learning, where task knowledge is shared between networks. The Episodic Memory Theory suggests that variable memories and their interactions are the essential components for knowledge transfer, allowing the remaining network dynamics to be disregarded or easily relearned, streamlining the transfer process. With these diverse applications, it is also important to recognize certain inherent limitations of the approach. One of the limitations is that the analysis is primarily restricted to linear dynamical systems. 
Although an accurate representation of the qualitative behavior within small neighborhoods of fixed points can be found for non-linear dynamical systems, the RNNs have to be confined to these linear regions for the approach to be applicable. It is an interesting behavior that models consistently converge to this linearized regime, at least for the tasks outlined in Section 4. The second limitation of the variable binding model is that external information is stored as a _linear_ superposition of variable memories in the hidden state. Our results indicate that the role of non-linearity in encoding external information may be minimal for the toy tasks. However, we have observed that when the number of dimensions of the linear operator \(W_{hh}\) is not substantially large compared to the task's dimensionality requirements (bottleneck superposition), or when the regularization penalty is high, the RNN can effectively resort to non-linear encoding mechanisms to store external information (Appendix B.3). Overcoming these limitations of non-linearity will be an interesting direction for future research.
2302.12432
Graph Neural Networks with Learnable and Optimal Polynomial Bases
Polynomial filters, a kind of Graph Neural Networks, typically use a predetermined polynomial basis and learn the coefficients from the training data. It has been observed that the effectiveness of the model is highly dependent on the property of the polynomial basis. Consequently, two natural and fundamental questions arise: Can we learn a suitable polynomial basis from the training data? Can we determine the optimal polynomial basis for a given graph and node features? In this paper, we propose two spectral GNN models that provide positive answers to the questions posed above. First, inspired by Favard's Theorem, we propose the FavardGNN model, which learns a polynomial basis from the space of all possible orthonormal bases. Second, we examine the supposedly unsolvable definition of optimal polynomial basis from Wang & Zhang (2022) and propose a simple model, OptBasisGNN, which computes the optimal basis for a given graph structure and graph signal. Extensive experiments are conducted to demonstrate the effectiveness of our proposed models. Our code is available at https://github.com/yuziGuo/FarOptBasis.
Yuhe Guo, Zhewei Wei
2023-02-24T03:24:04Z
http://arxiv.org/abs/2302.12432v2
# Graph Neural Networks with Learnable and Optimal Polynomial Bases ###### Abstract Polynomial filters, a kind of Graph Neural Networks, typically use a predetermined polynomial basis and learn the coefficients from the training data. It has been observed that the effectiveness of the model is highly dependent on the property of the polynomial basis. Consequently, two natural and fundamental questions arise: Can we learn a suitable polynomial basis from the training data? Can we determine the optimal polynomial basis for a given graph and node features? In this paper, we propose two spectral GNN models that provide positive answers to the questions posed above. First, inspired by Favard's Theorem, we propose the FavardGNN model, which learns a polynomial basis from the space of all possible orthonormal bases. Second, we examine the supposedly unsolvable definition of optimal polynomial basis from Wang & Zhang (2022) and propose a simple model, OptBasisGNN, which computes the optimal basis for a given graph structure and graph signal. Extensive experiments are conducted to demonstrate the effectiveness of our proposed models. Our contributions are summarized as follows. Firstly, we propose FavardGNN with a **learnable basis**, building on two classical results about orthonormal polynomials: the Three-term recurrences and its converse, Favard's Theorem. FavardGNN learns from the _whole space_ of possible orthonormal bases with \(2(K+1)\) extra parameters2. Secondly, we propose OptBasisGNN with a **solvable optimal basis**. 
We solve the optimal basis raised by Wang & Zhang (2022) by avoiding the explicit solving of the weight function. Note that, although we write out the implicitly defined/solved polynomial series in the methodology section, we never need to solve it explicitly. Last but not least, we conduct **extensive experiments** to demonstrate the effectiveness of our proposed models. Footnote 2: \(K\) is the truncated order of the polynomial series. ## 2 Background and Preliminaries ### Background of Spectral GNNs In this section, we provide the necessary background on spectral graph neural networks, and show how the choice of polynomial bases emerges as a problem. Notations used are summarized in Table 6 in Appendix A. **Graph Fourier Transform.** Consider an undirected and connected graph \(G=(V,E)\) with \(N\) nodes; its symmetric normalized adjacency matrix and Laplacian matrix are denoted as \(\hat{P}\) and \(\hat{L}\), respectively, with \(\hat{L}=I-\hat{P}\). The _Graph Fourier Transform_, as defined in the spatial/spectral domain of graph signal processing, is analogous to the time/frequency-domain Fourier Transform (Hammond et al., 2009; Shuman et al., 2013). One column of the representations of the \(N\) nodes, \(X\in\mathbb{R}^{N\times d}\), is considered a _graph signal_, denoted as \(x\). The complete set of \(N\) eigenvectors of \(\hat{L}\), denoted as \(U\), which show varying structural frequency characteristics (Shuman et al., 2013), are used as _frequency components_. The _Graph Fourier Transform_ is defined as \(\hat{x}:=U^{T}x\), where the signal \(x\) is projected onto the frequency responses of all components. It is then followed by _modulation_, which suppresses or strengthens certain frequency components, denoted as \(\hat{x}^{\star}:=\operatorname{diag}\{\theta_{0},\cdots,\theta_{N-1}\}\hat{x}\). After modulation, the _inverse Fourier Transform_ \(x^{\star}:=U\hat{x}^{\star}\) transforms \(\hat{x}^{\star}\) back to the spatial domain. The three operations form the process of _spectral filtering_: \[x^{\star}=U\operatorname{diag}\{\theta_{0},\theta_{1},\ldots,\theta_{N-1}\}U^{T}x. \tag{1}\] **Polynomial Approximated Filtering.** In order to avoid time-consuming eigen-decomposition, a line of work approximates \(\theta_{i}\) by a polynomial function of \(\lambda_{i}\), which is the \(i\)-th eigenvalue of \(\hat{L}\), i.e. \(\theta_{i}\approx h(\lambda_{i})\). Equation (1) then becomes a form that is easy for fast _localized_ calculation: \(U\operatorname{diag}\{h(\lambda_{0}),h(\lambda_{1}),\ldots,h(\lambda_{N-1})\}U^{T}x=h(\hat{L})x\). As listed in the Introduction, various _polynomial bases_ have been utilized, denoted as \(h(\lambda)=\sum_{k=0}^{K}\alpha_{k}g_{k}(\lambda)\). The filtering process on the input signal \(x\) is then expressed as \(x\to z=\sum_{k=0}^{K}\alpha_{k}g_{k}(\hat{P})x\). When considering independent filtering on each channel of the node feature matrix \(X\) simultaneously, the **multichannel filtering** can be denoted as: \[X\to Z=\mathop{\parallel}\limits_{l\in[1,d]}\sum_{k=0}^{K}\alpha_{k,l}\,g_{k,l}(\hat{P})X_{:,l}. \tag{2}\] For further simplicity, we equivalently use \(b(\hat{P})\) instead of \(h(\hat{L})\) in this paper, where \(b(\hat{P}):=h(I-\hat{P})\). Note that \(b(\cdot)\) is defined on the spectrum of \(\hat{P}\), and the \(i\)-th eigenvalue of \(\hat{P}\), denoted as \(\mu_{i}\), equals \(1-\lambda_{i}\). 
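To make the multichannel filtering of Equation (2) concrete, below is a minimal sketch using the simplest (monomial) choice \(g_{k}(\hat{P})=\hat{P}^{k}\); the function name and array shapes are illustrative assumptions rather than any paper's actual code.

```python
import numpy as np
import scipy.sparse as sp

def poly_filter(P: sp.csr_matrix, X: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """z_l = sum_k alpha[k, l] * P^k @ X[:, l], computed for all channels at once.

    P: normalized adjacency (N, N); X: signals (N, d); alpha: coefficients (K+1, d).
    """
    K = alpha.shape[0] - 1
    Z = np.zeros_like(X, dtype=float)
    V = X.astype(float)               # V holds P^k X, starting at k = 0
    for k in range(K + 1):
        Z += alpha[k] * V             # broadcast alpha[k] over the d channels
        if k < K:
            V = P @ V                 # one sparse propagation per order
    return Z
```

Each additional order costs one sparse multiplication, which is the \(O(K|E|)\) pattern exploited throughout the paper.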
### Orthogonal and Orthonormal Polynomials In this section, we give a formal definition of orthogonal and orthonormal polynomials, which play a central role in the choice of polynomial bases (Simon, 2014). **Inner Products.** The inner product of polynomials is defined as \(\left\langle f,g\right\rangle:=\int_{a}^{b}f(x)g(x)w(x)\mathrm{d}x\), where \(f\), \(g\) and \(w\) are functions of \(x\) on the interval \((a,b)\), and the _weight function_ \(w\) should be non-negative to guarantee the positive-definiteness of the inner-product space. The definition of the inner product induces the definitions of _norm_ and _orthogonality_. The norm of a polynomial \(f\) is defined as \(\left\|f\right\|=\sqrt{\left\langle f,f\right\rangle}\), and \(f\) and \(g\) are orthogonal to each other when \(\left\langle f,g\right\rangle=0\). Notice that the concepts of inner product, norm, and orthogonality are all defined with respect to some weight function. **Orthogonal Polynomials.** A sequence of polynomials \(\{p_{n}(x)\}_{n=0}^{\infty}\), where \(p_{n}(x)\) is of exact degree \(n\), is called _orthogonal_ w.r.t. the positive weight function \(w(x)\) if, for \(m,n=0,1,2,\cdots\), there exists \(\left\langle p_{n},p_{m}\right\rangle=\delta_{mn}\|p_{n}\|^{2}\) (\(\|p_{n}\|^{2}\neq 0\)), where the inner product \(\left\langle f,g\right\rangle\) is defined w.r.t. \(w(x)\). When \(\|p_{n}\|^{2}=1\) for \(n=0,1,2,\cdots\), \(\{p_{n}(x)\}_{n=0}^{\infty}\) is known as an **orthonormal** polynomial series. When a weight function is given, the orthogonal or orthonormal series with respect to the weight function can be solved by the _Gram-Schmidt process_.

Figure 1: Representation of \(h(\lambda)=\lambda^{2}+1\) by different bases.

_Remark 2.1_.: In this paper, the orthogonal/orthonormal polynomial bases we consider are truncated polynomial series, i.e. the polynomials that form a basis are of increasing order. ## 3 Learnable Basis via Favard's Theorem Empirically, spectral GNNs with different polynomial bases vary in performance on different datasets, which leads to two observations: (1) the choice of basis matters; (2) whether a basis is preferred might be related to the input, i.e. different signals on their accompanying underlying graphs. For the first observation, we notice that up to now, polynomial filters _pick_ polynomial bases from well-studied polynomials, e.g. Chebyshev polynomials, Bernstein polynomials, _etc._, which narrows down the range of choice. For the second observation, we question the reasonableness of fixing a basis during training. A related effort is made by JacobiConv (Wang & Zhang, 2022), which adapts to a Jacobi polynomial series from the family of Jacobi polynomials via _hyperparameter tuning_. However, the range it chooses from is discrete. Therefore, we aim to dynamically **learn** the polynomial basis from the input, over a **vast range**. ### Recurrence Formula for Orthonormal Bases Luckily, the Three-term recurrences and Favard's theorem for orthonormal polynomials provide a _continuous_ parameter space in which to learn a basis. Generally speaking, the three-term recurrence states that every orthonormal polynomial series satisfies a very characteristic form of recurrence relation, and Favard's theorem states the converse. **Theorem 3.1** (Three Term Recurrences for Orthonormal Polynomials).: _(Gautschi, 2004, p. 12) For orthonormal polynomials \(\{p_{k}\}_{k=0}^{\infty}\) w.r.t. 
weight function \(w\), suppose that the leading coefficients of all polynomials are positive, there exists the three-term recurrence relation:_ \[\sqrt{\beta_{k+1}}\,p_{k+1}(x)=(x-\gamma_{k})p_{k}(x)-\sqrt{\beta_{k}}\,p_{k-1}(x),\] \[p_{-1}(x):=0,\;p_{0}(x)=1/\sqrt{\beta_{0}},\] \[\gamma_{k}\in\mathbb{R},\;\sqrt{\beta_{k}}\in\mathbb{R}^{+},\;k\geq 0 \tag{3}\] _with \(\beta_{0}=\int w(x)\mathrm{d}x\)._ **Theorem 3.2** (Favard's Theorem; Orthonormal Case).: _(Favard, 1935), (Simon, 2005, p. 14) A polynomial series \(\{p_{k}\}_{k=0}^{\infty}\) that satisfies the recurrence relation in Equation (3) is orthonormal w.r.t. a weight function \(w\) such that \(\beta_{0}=\int w(x)\mathrm{d}x\)._ By Theorem 3.2, any recurrence of the form (3) defines an orthonormal basis. By Theorem 3.1, such a formula covers the whole space of orthonormal polynomials. If we set \(\{\sqrt{\beta_{k}}\}\) and \(\{\gamma_{k}\}\) to be learnable parameters with \(\sqrt{\beta_{k}}>0\) (\(k\geq 0\)), any orthonormal basis can be obtained. We put the more general _orthogonal_ forms of Theorem 3.1 and Theorem 3.2 in Appendix B.1 to B.5. In fact, the property of three-term recurrences for orthogonal polynomials has been used multiple times in the context of current spectral GNNs to reuse \(g_{k}(\hat{P})x\) and \(g_{k-1}(\hat{P})x\) for the calculation of \(g_{k+1}(\hat{P})x\). Defferrard et al. (2016) owe the fast filtering of ChebNet to employing the three-term recurrence of _Chebyshev polynomials_ (the first kind, which is orthogonal w.r.t. \(\frac{1}{\sqrt{1-x^{2}}}\)): \(T_{k+1}(x)=2xT_{k}(x)-T_{k-1}(x)\). Similarly, JacobiConv (Wang and Zhang, 2022) employs the three-term recurrence for _Jacobi polynomials_ (orthogonal w.r.t. \((1-x)^{a}(1+x)^{b}\)). In this paper, however, we focus on orthonormal bases because they minimize the mutual influence of the basis polynomials and the influence of the unequal norms of different basis polynomials. ### FavardGNN Formulation of FavardGNN. We formally write the architecture of FavardGNN (Algorithm 2), with the filtering process illustrated in FavardFiltering (Algorithm 1). Note that the iterative process of Algorithm 1 (lines 3-5) follows exactly from Equation (3) in Favard's Theorem. The key insight is to treat the coefficients \(\beta,\gamma,\alpha\) in Equation (3) as learnable parameters. Since Theorem 3.1 and Theorem 3.2 state that an orthonormal basis must satisfy the employed iteration and vice versa, it follows that the model can learn a suitable orthonormal polynomial basis from among all possible orthonormal bases. Following convention, before FavardFiltering, an MLP is used to map the raw features onto the signal channels (often far fewer than the dimension of the raw features). In regression problems, the filtered signals are directly used as predictions; for classification problems, they are combined by another MLP followed by a softmax layer. Parallel Execution. Note that for convenience of presentation, we write the FavardFiltering algorithm in a form of nested loops. In fact, the computation on different channels is conducted simultaneously. We put a more concrete implementation in PyTorch-style pseudocode in Appendix C.1. 
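As a concrete illustration of that pseudocode, the following is a minimal single-channel sketch of FavardFiltering that treats \(\sqrt{\beta_{k}}\), \(\gamma_{k}\) and \(\alpha_{k}\) in Equation (3) as learnable parameters; the initialization and the clamp used to keep \(\sqrt{\beta_{k}}\) positive are our own illustrative choices, not necessarily those of the released implementation.

```python
import torch
import torch.nn as nn

class FavardFilter(nn.Module):
    """Single-channel filter following Equation (3); x has shape (N, 1)."""

    def __init__(self, K: int):
        super().__init__()
        self.K = K
        self.sqrt_beta = nn.Parameter(torch.ones(K + 1))  # sqrt(beta_0..beta_K)
        self.gamma = nn.Parameter(torch.zeros(K))         # gamma_0..gamma_{K-1}
        self.alpha = nn.Parameter(torch.ones(K + 1))      # combination weights

    def forward(self, P: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        sqrt_beta = self.sqrt_beta.clamp(min=1e-4)        # keep sqrt(beta_k) > 0
        v_prev = torch.zeros_like(x)                      # p_{-1} = 0
        v = x / sqrt_beta[0]                              # p_0 = 1 / sqrt(beta_0)
        z = self.alpha[0] * v
        for k in range(self.K):
            # sqrt(beta_{k+1}) p_{k+1}(x) = (x - gamma_k) p_k(x) - sqrt(beta_k) p_{k-1}(x)
            v_next = (torch.sparse.mm(P, v) - self.gamma[k] * v
                      - sqrt_beta[k] * v_prev) / sqrt_beta[k + 1]
            v_prev, v = v, v_next
            z = z + self.alpha[k + 1] * v
        return z
```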
### Weaknesses of FavardGNN However, there are still two main weaknesses of FavardGNN. Firstly, the orthogonality lacks interpretability. The weight function \(w\) can only be solved analytically in a limited number of cases (Geronimo and Van Assche, 1991). Even if the weight function is solved, the form of \(w\) might be too complicated to understand. Secondly, FavardFiltering has poor convergence properties: consider the simplified optimization problem \(\min\|Z-Y\|_{F}^{2}\), which has been examined in the context of GNNs (Xu et al., 2021; Wang and Zhang, 2022); even this problem is non-convex w.r.t. the learnable parameters in \(Z\). We will re-examine this problem in the experiment section. ## 4 Achieving Optimal Basis Although FavardGNN has the potential to reach the whole space of orthonormal polynomial series, we still want to know **whether there is an optimal and accessible basis** in this vast space. Recently, Wang and Zhang (2022) raised a criterion for an optimal basis. Since different bases are the same in expressiveness, this criterion is induced from the angle of optimization. However, Wang and Zhang (2022) believe that this optimal basis is unreachable. In this section, we follow this definition of the optimal basis, and show how we can use exactly this optimal basis with \(O(K|E|)\) time complexity. ### A Criterion for Optimal Basis We first restate the relevant section of Wang and Zhang (2022) very briefly; a more complete review is put in Appendix E. **Definition of Optimal Basis.** Wang and Zhang (2022) consider the squared loss \(R=\frac{1}{2}\|Z-Y\|_{F}^{2}\), where \(Y\) is the target signal. Since each signal channel contributes independently to the loss, the authors then consider the loss function channel-wise and ignore the index \(l\), that is, \(r=\frac{1}{2}\|z-y\|_{2}^{2}\), where \(z=\sum_{k=0}^{K}\alpha_{k}g_{k}(\hat{P})x\). Since \(r\) is convex w.r.t. \(\alpha\), gradient descent's convergence rate is optimal when the **Hessian matrix** is an identity matrix. The \((k_{1},k_{2})\) element of the Hessian matrix is: \[H_{k_{1}k_{2}}=\frac{\partial^{2}r}{\partial\alpha_{k_{1}}\partial\alpha_{k_{2}}}=x^{T}g_{k_{2}}(\hat{P})g_{k_{1}}(\hat{P})x. \tag{4}\] **Definition 4.1** (Optimal basis for signal \(x\)).: For a given graph signal \(x\), the polynomial basis \(\{g_{k}\}_{k=0}^{K}\) is optimal in convergence rate when \(H\) given in (4) is an **identity matrix**. Unachievable Algorithm Towards Optimal Basis. Wang and Zhang (2022) continue to rewrite Equation (4) as a Riemann sum: \(H_{k_{1}k_{2}}=\int_{\mu=-1}^{1}g_{k_{1}}(\mu)g_{k_{2}}(\mu)f(\mu)\mathrm{d}\mu\), where \(f(\mu)\) is given below (Remark 4.2), and Definition 4.1 is reached when \(\{g_{k}(\cdot)\}_{k=0}^{K}\) is the orthonormal polynomial basis w.r.t. \(f(\cdot)\). _Remark 4.2_.: The **exact form of the weight function** of the optimal basis defined in Definition 4.1 is \(f(\mu)=\frac{\partial F(\mu)}{\partial\mu}\), where \(F(\mu):=\sum_{\mu_{i}\leq\mu}(U^{T}x)_{i}^{2}\). Having written out the weight function \(f(\mu)\), the optimal basis is determined. Wang and Zhang (2022) consider a conventional process for obtaining this optimal basis, which is unreachable since eigen-decomposition is unaffordable for large graphs. We summarize this process in Algorithm 3. ``` Input: Graph signal \(x\); Normalized graph adjacency \(\hat{P}\); Truncated polynomial order \(K\). 
Output: Optimal basis \(\{g_{k}(\cdot)\}_{k=0}^{K}\)
1: \(U,\{\mu_{i}\}_{i=1}^{N}\leftarrow\) Eigen-decomposition of \(\hat{P}\)
2: Calculate \(f(\mu)\) as described in E.1
3: Use the Gram-Schmidt process and the weight function \(f(\mu)\) to construct an orthonormal basis \(\{g_{k}\}_{k=0}^{K}\)
```
**Algorithm 3** An Unreachable Algorithm for Obtaining the Optimal Basis

As a result, Wang and Zhang (2022) come up with a compromised method that allows the model to choose from the family of orthogonal Jacobi bases, which have "_flexible enough weight functions_", i.e. \((1-\mu)^{a}(1+\mu)^{b}\), determined by two hyper-parameters \(a\) and \(b\). However, still, only a very small fraction of the possible weight functions is covered, possibly without the optimal weight function in Remark 4.2.

```
Input: Normalized graph \(\hat{P}\); Solved basis vectors \(v_{0},\cdots,v_{k}\) (\(k\geq 0\))
Output: \(v_{k+1}\)
1: Step 1: \(v_{k+1}^{*}\leftarrow\hat{P}v_{k}\)
2: Step 2: \(v_{k+1}^{\perp}\leftarrow v_{k+1}^{*}-\sum_{i=0}^{k}\langle v_{k+1}^{*},v_{i}\rangle v_{i}\)
3: Step 3: \(v_{k+1}\leftarrow v_{k+1}^{\perp}/\|v_{k+1}^{\perp}\|\)
4: return \(v_{k+1}\)
```
**Algorithm 4** ObtainNextBasisVector (Raw Version)

### From Polynomial Basis to Vector Basis In this section, we show that the optimal bases given in Definition 4.1 can be solved efficiently in \(O(K|E|)\). Instead of following the three-step convention of Algorithm 3, we use the optimal bases **without** explicitly solving \(f(\mu)\) and \(\{g_{k}(\mu)\}_{k=0}^{K}\) as pre-steps of polynomial filtering. The key insight of our method is to consider the inner product space of vectors rather than that of polynomials. We shift our attention from solving the orthonormal polynomial basis \(g_{k}(x)\) (w.r.t. a specific weight function) to the orthonormal vector basis \(g_{k}(\hat{P})x\). Thus, we avoid the need to solve for the _inaccessible weight function_ in the conventional procedure (Algorithm 3), making the optimal polynomial series attainable. **Optimal Vector Basis.** Following Wang and Zhang (2022), we consider \(x\to z=\sum_{k=0}^{K}\alpha_{k}g_{k}(\hat{P})x\) on one channel. Instead of taking \(\sum_{k=0}^{K}\alpha_{k}g_{k}(\hat{P})=b(\hat{P})\) as a whole, we now regard \(\left\{v_{k}\mid v_{k}:=g_{k}(\hat{P})x\right\}_{k=0}^{K}\) as a _vector basis_. Thus, the filtered signal \(z\) is a linear combination of \(\{v_{k}\}_{k=0}^{K}\). By Definition 4.1, the optimal basis is achieved when \[v_{k_{2}}^{T}v_{k_{1}}=x^{T}g_{k_{2}}(\hat{P})g_{k_{1}}(\hat{P})x=\delta_{k_{1}k_{2}},\] which is equivalent to finding a vector basis \(\{v_{k}\}_{k=0}^{K}\) that satisfies two conditions: **Condition 1**: Orthonormality. **Condition 2**: Polynomials with **increasing order**, that is, for \(k=0,1,\ldots,K\), there exists a polynomial \(g_{k}\) so that \(v_{k}=g_{k}(\hat{P})x\), where \(g_{k}(\cdot)\) is of order \(k\). A sketch of the raw Gram-Schmidt step is given below; Algorithm 5 then shortens it to two projections. 
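The raw step of Algorithm 4 admits a direct sketch; the function name and the dense-vector layout here are illustrative assumptions, not the paper's released code.

```python
import numpy as np
import scipy.sparse as sp

def obtain_next_basis_vector(P: sp.csr_matrix, vs: list) -> np.ndarray:
    """Given orthonormal vectors v_0, ..., v_k in `vs`, return v_{k+1}."""
    u = P @ vs[-1]                           # Step 1: raise the polynomial order
    for v in vs:                             # Step 2: Gram-Schmidt projections
        u = u - (u @ v) * v
    return u / np.linalg.norm(u)             # Step 3: normalize

# Usage: vs = [x / np.linalg.norm(x)]; vs.append(obtain_next_basis_vector(P, vs)); ...
```

Starting from \(v_{0}=x/\|x\|\), repeated calls yield vectors satisfying both conditions; the cost of Step 2, however, grows with \(k\), which the two-vector version in Algorithm 5 removes.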
```
Input: Normalized graph \(\hat{P}\); **two** solved basis vectors \(v_{k-1},v_{k}\) (\(k\geq 0\))
Output: \(v_{k+1}\)
1: Step 1: \(v_{k+1}^{*}\leftarrow\hat{P}v_{k}\)   // \(g_{k+1}^{*}(\mu):=\mu g_{k}(\mu)\)
2: Step 2: \(v_{k+1}^{\perp}\leftarrow v_{k+1}^{*}-\langle v_{k+1}^{*},v_{k}\rangle v_{k}-\langle v_{k+1}^{*},v_{k-1}\rangle v_{k-1}\)   // \(g_{k+1}^{\perp}(\mu):=g_{k+1}^{*}(\mu)-\langle v_{k+1}^{*},v_{k}\rangle g_{k}(\mu)-\langle v_{k+1}^{*},v_{k-1}\rangle g_{k-1}(\mu)\)
3: Step 3: \(v_{k+1}\leftarrow v_{k+1}^{\perp}/\|v_{k+1}^{\perp}\|\)   // \(g_{k+1}(\mu):=g_{k+1}^{\perp}(\mu)/\|v_{k+1}^{\perp}\|\)
4: return \(v_{k+1}\)
```
**Algorithm 5** ObtainNextBasisVector (In the comments, we write the \((k+1)\)-th optimal basis polynomial \(g_{k+1}(\cdot)\) based on \(g_{k}(\cdot)\) and \(g_{k-1}(\cdot)\); these polynomials are implicitly used, but never solved explicitly.)

[MISSING_PAGE_POST]

The implicitly used recurrence in the comments of Algorithm 5 can be written as: \[\|v_{k+1}^{\perp}\|\,g_{k+1}(\mu)=\mu g_{k}(\mu)-\langle v_{k+1}^{*},v_{k}\rangle g_{k}(\mu)-\langle v_{k+1}^{*},v_{k-1}\rangle g_{k-1}(\mu). \tag{5}\] Now, we confirm that Equation (5) is consistent with the recurrence formula given in Equation (3). We show \(\|v_{k}^{\perp}\|=\langle v_{k+1}^{*},v_{k-1}\rangle\) (Equation (6)) in Appendix B.6. ### Formulation of OptBasisGNN We show the procedure of OptBasisFiltering in Algorithm 6, which is the core part of the complete OptBasisGNN. The other parts of OptBasisGNN are MLP layers and softmax layers, the same as in FavardGNN (Algorithm 2). The process of calculating the next basis vector and filtering on all channels is conducted in parallel. Please check the PyTorch-style pseudo-code in Appendix C.2. Relation to FavardGNN. OptBasisGNN is a **particular case** of FavardGNN. FavardGNN can reach the whole space of orthonormal bases; among all these bases, OptBasis is the one that promises the optimal convergence property.

```
Input: Input signals \(X\) with \(d\) channels; Normalized graph \(\hat{P}\); Order \(K\)
Learnable parameters: \(\alpha\)
Output: Filtered signals \(Z\)
1: \(v_{-1}\leftarrow 0\)
2: for \(l=0\) to \(d-1\) do
3:   \(x\leftarrow X_{:,l}\);  \(v_{0}\leftarrow x/\|x\|\);  \(z\leftarrow\alpha_{0,l}v_{0}\)
4:   for \(k=0\) to \(K-1\) do
5:     \(v_{k+1}\leftarrow\) ObtainNextBasisVector(\(\hat{P}\), \(v_{k}\), \(v_{k-1}\))
6:     \(z\leftarrow z+\alpha_{k+1,l}v_{k+1}\)
7:   \(Z_{:,l}\leftarrow z\)
8: return \(Z\)
```
**Algorithm 6** OptBasisFiltering

### Scale Up OptBasisGNN Scaling up GNNs on large graphs is a challenging and important problem. One way to scale up GNN models is to decouple feature propagation and transformation (Chen et al., 2020; Wu et al., 2019; He et al., 2022). Similarly, we scale up OptBasisGNN by (1) dropping the MLP layer before OptBasisFiltering, so that the basis vectors for each channel are fixed; (2) **preprocessing** the whole set of basis vectors (denoted as \(V\in\mathbb{R}^{d\times(K+1)\times N}\)) on CPU; and (3) conducting **batch training**: for each batch of nodes \(\mathcal{B}\), we move the corresponding segment of basis vectors \(V[:,:,\mathcal{B}]\) to GPU. ## 5 Experiments In this section, we conduct a series of comprehensive experiments to demonstrate the effectiveness of the proposed methods. The experiments consist of node classification tasks on small and large graphs, the learning of multi-channel filters, and a comparison of FavardGNN and OptBasisGNN. 
### Node Classification Experimental Setup. We include medium-sized graph datasets conventionally used in preceding graph filtering works, including three heterophilic datasets (Chameleon, Squirrel, Actor) provided by Pei et al. (2020) and two citation datasets (Pubmed, Citeseer) provided by Yang et al. (2016) and Sen et al. (2008). For all these graphs, we take a \(60\%/20\%/20\%\) train/validation/test split proportion following former works, e.g. Chien et al. (2021). We report our results of twenty runs over random splits with random initialization seeds. For baselines, we choose state-of-the-art spectral GNNs. For other experimental settings, please refer to Appendix D.1. Besides, for the evaluation of OptBasisGNN, please also check the results in the scalability experiment section (Section 5.2). Results. As shown in Table 1, FavardGNN and OptBasisGNN outperform most strong baselines. Especially on Chameleon, Squirrel and Actor, we see a substantial improvement. The vast selection range and learnable nature of FavardGNN and the optimality of convergence provided by OptBasisGNN both enhance the performance of polynomial filters, and their performance remains consistently strong. ### Node Classification on Large Datasets Experimental Setup. We perform node classification tasks on two large citation networks, ogbn-arxiv and ogbn-papers100M (Hu et al., 2020), and five large non-homophilous networks from the LINKX datasets (Lim et al., 2021). Except for Penn94, Genius and Twitch-Gamers, all other mentioned datasets use the scaled version of OptBasisGNN. For the ogbn datasets, we run repeated experiments on the given split with ten random model seeds, and choose baselines following the scalability experiments in ChebNetII (He et al., 2022). For the LINKX datasets, we use the five given splits to align with other reported experimental results for Penn94, Genius, Twitch-Gamers and Pokec. For the Wiki dataset, since the splits are not provided, we use five random splits. For baselines, we choose spectral GNNs as well as top-performing spatial models reported in Lim et al. (2021), including LINK, LINKX, GCNII (Chen et al., 2020) and MixHop (Abu-El-Haija et al., 2019). For more detailed experimental settings, please refer to Appendix D.1. Results. As shown in Table 2 and Table 3, on Penn94, Genius and Twitch-Gamers, our two models achieve comparable results to those of the state-of-the-art spectral methods. On the ogbn datasets as well as Pokec and Wiki, which have tens or hundreds of millions of edges, we use the scaled version of OptBasisGNN with batch training. We do not run FavardGNN on these datasets, since the basis vectors of FavardGNN cannot be precomputed. Notably, on the Wiki dataset, the largest non-homophilous dataset, our method surpasses the second-best method by nearly one percent, which demonstrates the effectiveness of our scaled-up version of OptBasisGNN. ### Learning Multi-Channel Filters from Signals Experimental Setup. We extend the experiment of learning filters in He et al. (2021) and Balcilar et al. (2021). The differences are twofold: First, we consider the case of _multi-channel_ input signals and learn filters _channel-wise_. Second, the _only_ learnable parameters are the coefficients \(\alpha\). Note that the optimization target of this experiment is identical to how the optimal basis was derived by Wang and Zhang (2022) (see Section 4.1). 
\begin{table} \begin{tabular}{l c c c c c} \hline \hline Dataset & Chameleon & Squirrel & Actor & Citeseer & Pubmed \\ \(\|V\|\) & 2,277 & 5,201 & 7,600 & 3,327 & 19,717 \\ \(\mathcal{H}(\mathcal{G})\) & .23 & .22 & .22 & .74 & .80 \\ \hline MLP & \(46.59\pm 1.84\) & \(31.01\pm 1.18\) & \(40.18\pm 0.55\) & \(76.52\pm 0.89\) & \(86.14\pm 0.25\) \\ GCN & \(60.81\pm 2.95\) & \(45.87\pm 0.8\) & \(33.26\pm 1.15\) & \(79.85\pm 0.78\) & \(86.79\pm 0.31\) \\ ChebNet & \(59.51\pm 1.25\) & \(40.81\pm 0.42\) & \(37.42\pm 0.58\) & \(79.33\pm 0.57\) & \(87.82\pm 0.24\) \\ ARMA & \(60.21\pm 1.00\) & \(36.27\pm 0.62\) & \(37.67\pm 0.54\) & \(80.04\pm 0.55\) & \(86.93\pm 0.24\) \\ APPNP & \(52.15\pm 1.79\) & \(35.71\pm 0.78\) & \(39.76\pm 0.49\) & \(80.47\pm 0.73\) & \(88.13\pm 0.33\) \\ GPRGNN & \(67.49\pm 1.38\) & \(50.43\pm 1.89\) & \(39.91\pm 0.62\) & \(80.13\pm 0.84\) & \(88.46\pm 0.31\) \\ BernNet & \(68.53\pm 1.68\) & \(51.39\pm 0.92\) & \(41.71\pm 1.12\) & \(80.08\pm 0.75\) & \(88.51\pm 0.39\) \\ ChebNetII & \(71.37\pm 1.01\) & \(57.72\pm 0.59\) & \(41.75\pm 1.07\) & \(80.53\pm 0.79\) & \(88.93\pm 0.29\) \\ JacobiConv & \(74.26\pm 1.08\) & \(57.38\pm 1.25\) & \(41.17\pm 0.64\) & \(80.78\pm 0.79\) & \(89.62\pm 0.41\) \\ \hline FavardGNN & \(72.32\pm 1.90\) & \(63.49\pm 1.47\) & \(43.05\pm 0.53\) & \(81.89\pm 0.63\) & \(90.90\pm 0.27\) \\ OptBasisGNN & \(74.26\pm 0.74\) & \(63.62\pm 0.76\) & \(42.39\pm 0.52\) & \(80.58\pm 0.82\) & \(90.30\pm 0.19\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Experimental results.** _Accuracies \(\pm\) 95% confidence intervals_ are displayed for each model on each dataset. The best-performing two results are highlighted. The results of GPRGNN are taken from He et al. (2021). The results of BernNet, ChebNetII and JacobiConv are taken from their original papers. The results of FavardGNN and OptBasisGNN are the average of repeated experiments over 20 cross-validation splits. 
\begin{table} \begin{tabular}{l c c c c c} \hline \hline Dataset & Penn94 & Genius & Twitch-Gamers & Pokec & Wiki \\ \(\|V\|\) & 41,554 & 421,961 & 168,114 & 1,632,803 & 1,925,342 \\ \(\|E\|\) & 1,562,229 & 984,979 & 6,797,557 & 30,622,564 & 303,543,860 \\ \(\mathcal{H}(\mathcal{G})\) & .470 & .618 & .545 & .445 & .389 \\ \hline MLP & \(73.61\pm 0.40\) & \(86.68\pm 0.09\) & \(60.92\pm 0.07\) & \(62.37\pm 0.02\) & \(37.38\pm 0.21\) \\ GCN & \(82.47\pm 0.27\) & \(87.42\pm 0.31\) & \(62.18\pm 0.26\) & \(75.45\pm 0.17\) & OOM \\ GCNII & \(82.92\pm 0.59\) & \(90.24\pm 0.09\) & \(63.39\pm 0.61\) & \(78.94\pm 0.11\) & OOM \\ MixHop & \(83.47\pm 0.71\) & \(90.58\pm 0.16\) & \(65.64\pm 0.27\) & \(81.07\pm 0.16\) & \(49.15\pm 0.26\) \\ LINK & \(80.79\pm 0.49\) & \(73.56\pm 0.14\) & \(64.85\pm 0.21\) & \(80.54\pm 0.03\) & \(57.11\pm 0.26\) \\ LINKX & \(84.71\pm 0.52\) & \(90.77\pm 0.27\) & \(66.06\pm 0.19\) & \(82.04\pm 0.07\) & \(59.80\pm 0.41\) \\ GPRGNN & \(83.54\pm 0.32\) & \(90.15\pm 0.30\) & \(62.59\pm 0.38\) & \(80.74\pm 0.22\) & \(58.73\pm 0.34\) \\ BernNet & \(83.26\pm 0.29\) & \(90.47\pm 0.33\) & \(64.27\pm 0.31\) & \(81.67\pm 0.17\) & \(50.92\pm 0.29\) \\ ChebNetII & \(84.86\pm 0.33\) & \(90.85\pm 0.32\) & \(65.03\pm 0.27\) & \(82.33\pm 0.28\) & \(60.09\pm 0.39\) \\ \hline FavardGNN & \(84.92\pm 0.41\) & \(90.29\pm 0.14\) & \(64.26\pm 0.12\) & - & - \\ OptBasisGNN & \(84.85\pm 0.39\) & \(90.83\pm 0.31\) & \(65.17\pm 0.16\) & \(82.83\pm 0.04\) & \(61.85\pm 0.03\) \\ \hline \hline \end{tabular} \end{table} Table 2: **Experimental results on large-scale (non-homophilous) datasets.** _Accuracies \(\pm\) standard errors_ are displayed for each model on each dataset. The best-performing two results are highlighted. Results of BernNet and ChebNetII are taken from He et al. (2022). Other results are from Lim et al. (2021). **Note that** for the large Pokec and Wiki datasets, we use the _scaled-up_ version of OptBasisGNN, which is introduced in Section 4.4.

Table 4: Illustration of our multichannel filter learning experiment. (Image panels omitted; the columns show the original image and filtered results with per-channel filters, e.g. Y: band reject combined with Cb: low pass.)

We put the practical background of this experiment in Appendix D.2. We conduct the experiment in YCbCr color space. Each \(100\times 100\) image is considered as a grid graph with input node signals on three channels: Y, Cb and Cr. Each signal might be filtered by the complex filtering operations defined in He et al. (2021). As shown in Table 4, using different filters on each channel results in different combination effects. We create a synthetic dataset with 60 samples from 15 original images. More about the synthetic dataset is in Appendix D.2. Following He et al. (2021), we use the input signals \(X\) and the true filtered signals \(Y\) to supervise the learning process of \(\alpha\). The optimization goal is to minimize \(\frac{1}{2}\|Z-Y\|_{2}^{2}\), where \(Z\) is the output multi-channel signal defined in Equation (2). 
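To make this optimization target concrete, the following sketch learns only the coefficients \(\alpha\) over precomputed basis vectors by minimizing the squared loss. The Adam settings mirror the setup described next; the tensor layout and function name are our own illustrative assumptions.

```python
import torch

def learn_coefficients(V: torch.Tensor, Y: torch.Tensor, epochs: int = 500) -> torch.Tensor:
    """V: precomputed basis vectors, shape (d, K+1, N); Y: targets, shape (N, d)."""
    d, K1, _ = V.shape
    alpha = torch.zeros(d, K1, requires_grad=True)
    opt = torch.optim.Adam([alpha], lr=0.1, weight_decay=5e-4)
    for _ in range(epochs):
        opt.zero_grad()
        # Z[:, l] = sum_k alpha[l, k] * V[l, k, :]  (Equation (2) with fixed bases)
        Z = torch.einsum('lk,lkn->nl', alpha, V)
        loss = 0.5 * (Z - Y).pow(2).sum()
        loss.backward()
        opt.step()
    return alpha.detach()
```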
During training, we use an Adam optimizer with a learning rate of \(0.1\) and a weight decay of \(5\mathrm{e}{-4}\). We allow a maximum of \(500\) epochs, and stop the iteration when the difference between the losses of two consecutive epochs is less than \(1\mathrm{e}{-4}\). For baselines, we choose the Monomial basis, Bernstein basis, and Chebyshev basis (with Chebyshev interpolation), corresponding to GPRGNN, BernNet and ChebNetII, respectively. We also include the arbitrary orthonormal bases learned by Favard for comparison. Note that we learn _different filters on each channel for all baseline bases_ for fairness. **Results.** We exhibit the mean MSE losses with standard errors over the 60 samples achieved by different bases in Table 5. OptBasis, which promises the best convergence property, demonstrates an overwhelming advantage. A special note is needed that the Monomial basis has _not finished converging_ at the maximum allowed \(500\)th epoch. In Section 5.4, we extend the maximum allowed epochs to 10,000, and use the slowly-converging Monomial basis curve as a counterpoint to the non-converging Favard curve. In particular, in Figure 2, we visualize the convergence process on **one sample**. Clearly, OptBasis shows the **best convergence property** in terms of both the fastest speed and the smallest MSE error. Check Appendix D.2 for more samples. ### Non-Convergence of FavardGNN Notably, in Figure 2, an obvious _bump_ appears near the \(130\)th epoch. We now re-examine the non-convergence problem of FavardGNN (Section 3.3). We rerun the multi-channel filter learning task, canceling early stopping and stretching the epoch number to 10,000. As shown in Figure 3 (left), the curve of Favard bumps several times. In contrast with Favard is the Monomial basis: though showing inferior performance in Table 5, it converges slowly but stably. We observe a similar phenomenon in a node classification setup in Figure 3 (right) (see Appendix D.3 for details). Still, very large bumps appear. Such a phenomenon might seem contradictory to the outstanding performance of FavardGNN in node classification tasks. We attribute the good performance in Tables 1 and 2 to the early stopping mechanism. ## 6 Conclusion In this paper, we tackle the fundamental challenges of basis learning and computation in polynomial filters. We propose two models: FavardGNN and OptBasisGNN. FavardGNN learns an arbitrary basis from the whole space of orthonormal polynomials, which is rooted in classical theorems on orthonormal polynomials. OptBasisGNN efficiently leverages the optimal basis defined in Wang and Zhang (2022), which was thought unsolvable. Extensive experiments are conducted to demonstrate the effectiveness of our proposed models. An interesting future direction is to derive a convex and easier-to-optimize algorithm for FavardGNN. Figure 3: Drop of loss over 10,000 epochs. _Left_: MSE loss of the regression task on one sample. _Right_: Cross-entropy loss of the classification problem on the Chameleon dataset. Models based on the Monomial basis converge slowly, but stably, while FavardGNN does not converge. For the convergence curve of OptBasis, please check Figure 2: it converges much faster than the Monomial basis. 
\begin{table} \begin{tabular}{l l l l l l} \hline \hline BASIS & OptBasis & ChebII & Bernstein & Favard & Monomial \\ \hline MSE & **0.0058** & \(0.1501\) & \(0.4231\) & \(0.3175\) & \(3.9076\) \\ \(\pm\) STDV & \(\pm\)**0.0157** & \(\pm\) 0.2433 & \(\pm\) 0.4918 & \(\pm\) 0.2840 & \(\pm\) 2.9263 \\ \hline \hline \end{tabular} \end{table} Table 5: Experimental results of the multichannel filter learning task. _MSE losses \(\pm\) standard errors_ over the 60 samples achieved by different bases are exhibited. Figure 2: Convergence rate of minimizing \(\frac{1}{2}\|Z-Y\|_{2}^{2}\) on one sample. _Sample message_: The true filters for this sample are low-pass (Y) / band-reject (Cb) / band-reject (Cr). _Legends_: ChebII means using Chebyshev polynomials combined with interpolation on Chebyshev nodes, as in ChebNetII (He et al., 2022). Favard means the bases are learned as in FavardGNN. In 500 epochs, the experimental groups of the Monomial basis and Bernstein basis did not converge. OptBasis achieves the smallest MSE error in the shortest time.
2304.08928
ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees
Graph Neural Networks (GNNs) have become a popular tool for learning on graphs, but their widespread use raises privacy concerns as graph data can contain personal or sensitive information. Differentially private GNN models have been recently proposed to preserve privacy while still allowing for effective learning over graph-structured datasets. However, achieving an ideal balance between accuracy and privacy in GNNs remains challenging due to the intrinsic structural connectivity of graphs. In this paper, we propose a new differentially private GNN called ProGAP that uses a progressive training scheme to improve such accuracy-privacy trade-offs. Combined with the aggregation perturbation technique to ensure differential privacy, ProGAP splits a GNN into a sequence of overlapping submodels that are trained progressively, expanding from the first submodel to the complete model. Specifically, each submodel is trained over the privately aggregated node embeddings learned and cached by the previous submodels, leading to an increased expressive power compared to previous approaches while limiting the incurred privacy costs. We formally prove that ProGAP ensures edge-level and node-level privacy guarantees for both training and inference stages, and evaluate its performance on benchmark graph datasets. Experimental results demonstrate that ProGAP can achieve up to 5-10% higher accuracy than existing state-of-the-art differentially private GNNs. Our code is available at https://github.com/sisaman/ProGAP.
Sina Sajadmanesh, Daniel Gatica-Perez
2023-04-18T12:08:41Z
http://arxiv.org/abs/2304.08928v2
# ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees ###### Abstract Graph Neural Networks (GNNs) have become a popular tool for learning on graphs, but their widespread use raises privacy concerns as graph data can contain personal or sensitive information. Differentially private GNN models have been recently proposed to preserve privacy while still allowing for effective learning over graph-structured datasets. However, achieving an ideal balance between accuracy and privacy in GNNs remains challenging due to the intrinsic structural connectivity of graphs. In this paper, we propose a new differentially private GNN called ProGAP that uses a progressive training scheme to improve such accuracy-privacy trade-offs. Combined with the aggregation perturbation technique to ensure differential privacy, ProGAP splits a GNN into a sequence of overlapping submodels that are trained progressively, expanding from the first submodel to the complete model. Specifically, each submodel is trained over the privately aggregated node embeddings learned and cached by the previous submodels, leading to an increased expressive power compared to previous approaches while limiting the incurred privacy costs. We formally prove that ProGAP ensures edge-level and node-level privacy guarantees for both training and inference stages, and evaluate its performance on benchmark graph datasets. Experimental results demonstrate that ProGAP can achieve up to 5-10% higher accuracy than existing state-of-the-art differentially private GNNs. Keywords: Graph Neural Network · Differential Privacy · Progressive Learning · Node Classification. ## 1 Introduction Graph Neural Networks (GNNs) have emerged as a powerful tool for learning from graph-structured data, and their popularity has surged due to their ability to achieve impressive performance in a wide range of applications, including social network analysis, drug discovery, recommendation systems, and traffic prediction [22, 5, 14, 48, 2]. GNNs excel at learning from the structural connectivity of graphs by iteratively updating node embeddings through information aggregation and transformation from neighboring nodes, making them well-suited for tasks such as node classification, graph classification, and link prediction [50, 16, 15, 25, 46, 7]. However, as with many data-driven approaches, GNNs can expose individuals to privacy risks when applied to graph data containing sensitive information, such as social connections, medical records, and financial transactions [36, 42]. Recent studies have shown that various attacks, such as link stealing, membership inference, and node attribute inference, can successfully break the privacy of graph datasets [18, 19, 34, 44], posing a significant challenge for the practical use of GNNs in privacy-sensitive applications. To address the privacy concerns associated with GNNs, researchers have recently studied _differential privacy (DP)_, a well-established mathematical framework that provides strong privacy guarantees, usually by adding random noise to the data [9, 10]. However, applying DP to GNNs is very challenging due to the complex structural connectivity of graphs, rendering traditional private learning methods, such as differentially private stochastic gradient descent (DP-SGD) [1], infeasible [3, 8, 39]. Recently, the _aggregation perturbation_ (AP) approach [39] has emerged as a state-of-the-art technique for ensuring DP in GNNs. 
Rather than perturbing the model gradients as done in the standard DP-SGD algorithm and its variants, this method perturbs the aggregate information obtained from the GNN neighborhood aggregation step. Consequently, such perturbations can obfuscate the presence of a single edge, which is called _edge-level privacy_, or a single node and all its adjacent edges, referred to as _node-level privacy_[37]. The key limitation of AP is its incompatibility with standard GNN architectures due to the high privacy costs it entails [39]. This is because conventional GNN models constantly query the aggregation functions with every update to the model parameters, which necessitates the re-perturbation of all aggregate outputs at every training iteration to ensure DP, leading to a significant increase in privacy costs. To mitigate this issue, Sajadmanesh _et al_. [39] proposed a method called GAP, which decouples the aggregation steps from the model parameters. In GAP, node features are recursively aggregated first, and then a classifier is learned over the resulting perturbed aggregations, enabling DP to be maintained without incurring excessive privacy costs. Due to having non-trainable aggregations, however, such decoupling approaches reduce the expressiveness of the GNN [12], leading to suboptimal accuracy-privacy trade-offs. In the face of these challenges, we present a novel differentially private GNN, called _"Progressive **G**NN with **A**ggregation **P**erturbation"_ (ProGAP). Our new method uses the same AP technique as in GAP to ensure DP. However, instead of decoupling the aggregation steps from the learnable modules, ProGAP adopts a multi-stage, progressive training paradigm to surmount the formidable privacy costs associated with AP. Specifically, ProGAP converts a \(K\)-layer GNN model into a sequence of overlapping submodels, where the \(i\)-th submodel comprises the first \(i\) layers of the model, followed by a lightweight supervision head layer with softmax activation that utilizes node labels to guide the submodel's training. Starting with the shallowest submodel, ProGAP then proceeds progressively to train deeper submodels, each of which is referred to as a training stage. At every stage, the learned node embeddings from the preceding stage are aggregated, perturbed, and then cached to save privacy budget, allowing ProGAP to learn a new set of private node embeddings. Ultimately, the last stage's embeddings are used to generate final node-wise predictions. The proposed progressive training approach overcomes the high privacy costs of AP by allowing the perturbations to be applied only once per stage rather than at every training iteration. ProGAP also maintains a higher level of expressive power compared to GAP, as the aggregation steps now operate on the learned embeddings from the preceding stages, which are more expressive than the raw node features. Moreover, we prove that ProGAP retains all the benefits of GAP, such as edge- and node-level privacy guarantees and zero-cost privacy at inference time. We evaluate ProGAP on five node classification datasets, including Facebook, Amazon, and Reddit, and demonstrate that it can achieve up to 10.4% and 5.5% higher accuracy compared to GAP under edge- and node-level DP with an epsilon of 1 and 8, respectively. ## 2 Related Work Several recent studies have investigated differential privacy (DP) to provide formal privacy guarantees in various GNN learning settings. 
For example, Sajadmanesh and Gatica-Perez [38] propose a locally private GNN for a distributed learning environment, where node features and labels remain private, while the GNN training is federated by a central server with access to graph edges. Lin _et al_. [29] also introduce a locally private GNN, called Solitude, that preserves edge privacy in a decentralized graph, where each node keeps its own private connections. However, both of these approaches use local differential privacy [24], which operates under a different problem setting from our method. Other approaches propose edge-level DP algorithms for GNNs. Wu _et al_. [44] develop an edge-level private method that modifies the input graph directly through randomized response or the Laplace mechanism, followed by training a GNN on the resulting noisy graph. In contrast, Kolluri _et al_. [27] propose LPGNet, which adopts a tailored neural network architecture. Instead of directly using the graph edges, they encode graph adjacency information in the form of low-sensitivity cluster vectors, which are then perturbed using the Laplace mechanism to preserve edge-level privacy. Unlike our approach, however, neither of these methods provides node-level privacy guarantees. Olatunji _et al_. [33] propose the first node-level private GNN by adapting the framework of PATE [35]. In their approach, a student GNN model is trained on public graph data, with each node privately labeled using teacher GNN models that are trained exclusively for the corresponding query node. Nevertheless, their approach relies on public graph data and may not be applicable in all situations. Daigavane _et al_. [8] extend the standard DP-SGD algorithm and privacy amplification by subsampling to bounded-degree graph data to achieve node-level DP, but their method fails to provide inference privacy. Finally, Sajadmanesh _et al_. [39] propose GAP, a private GNN learning framework that provides both edge-level and node-level privacy guarantees using the aggregation perturbation approach. They decouple the aggregation steps from the neural network model to manage the privacy costs of their method. Although our method leverages the same aggregation perturbation technique, we take a different approach to limit the privacy costs using a progressive training scheme. The main concept behind progressive learning is to train the model on simpler tasks first and then gradually move towards more challenging tasks. It was originally introduced to stabilize the training of deep learning models and has been widely adopted in various computer vision applications, such as facial attribute editing [45], image super-resolution [43], image synthesis [23], and representation learning [28]. This technique has also been extended to federated learning, mainly to minimize the communication overhead between clients and the central server [4, 17, 41]. However, the potential benefit of progressive learning in DP applications has not been explored yet. In this paper, we are the first to examine the advantages of progressive learning in the context of private GNNs. 
This means that even if an attacker has access to all but one individual's data, they cannot determine whether that individual's data was used in the computation. The formal definition of DP is as follows [10]: Definition 1: Given \(\epsilon>0\) and \(\delta\in[0,1]\), a randomized algorithm \(\mathcal{A}\) satisfies \((\epsilon,\delta)\)-differential privacy, if for all adjacent datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) differing by at most one record and for all possible subsets of \(\mathcal{A}\)'s outputs \(S\subseteq Range(\mathcal{A})\): \[\Pr[\mathcal{A}(\mathcal{D})\in S]\leq e^{\epsilon}\Pr[\mathcal{A}(\mathcal{ D}^{\prime})\in S]+\delta.\] To adapt the definition of DP for graphs, two different notions of adjacency are defined: edge-level and node-level adjacency. In the former, two graphs are adjacent if they differ only in the presence of a single edge, whereas in the latter, the two graphs differ by a single node with its features, labels, and all attached edges. Accordingly, the definitions of edge-level and node-level DP are derived from these definitions [37]. Specifically, an algorithm \(\mathcal{A}\) provides edge-/node-level \((\epsilon,\delta)\)-DP if for every two edge-/node-level adjacent graph datasets \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) and any set of outputs \(S\subseteq Range(\mathcal{A})\), we have \(\Pr[\mathcal{A}(\mathcal{G})\in S]\leq e^{\epsilon}\Pr[\mathcal{A}(\mathcal{ G}^{\prime})\in S]+\delta\). ### Graph Neural Networks Consider a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with a set of nodes \(\mathcal{V}=\{v_{1},\ldots,v_{N}\}\) and edges \(\mathcal{E}\) represented by an adjacency matrix \(\mathbf{A}\in\{0,1\}^{N\times N}\). Node features are represented by a matrix \(\mathbf{X}\in\mathbb{R}^{N\times d}\), where \(\mathbf{X}_{i}\) denotes the \(d\)-dimensional feature vector of node \(v_{i}\). A common \(K\)-layer GNN is composed of \(K\) layers of graph convolution that are applied sequentially. Specifically, layer \(k\) takes as input the adjacency matrix \(\mathbf{A}\) and the node embeddings produced by layer \(k-1\), denoted by \(\mathbf{X}^{(k-1)}\), and outputs a new embedding for each node by aggregating the embeddings of its adjacent neighbors, followed by a neural network transformation. In its simplest form, the formal update rule for layer \(k\) can be written as follows: \[\mathbf{X}^{(k)}=\text{MLP}\left(\text{Agg}(\mathbf{A},\mathbf{X}^{(k-1)}); \mathbf{\Theta}^{(k)}\right), \tag{1}\] where Agg is a differentiable permutation-invariant _neighborhood aggregation function_ and MLP denotes a multilayer perceptron parameterized by \(\mathbf{\Theta}^{(k)}\) that takes the aggregated embeddings as input and produces a new embedding for each node. The aggregation function can take various forms, such as mean, sum, or max pooling. The input to the first layer is \(\mathbf{X}^{(0)}=\mathbf{X}\), i.e., the initial node features. The output of the final layer \(\mathbf{X}^{(K)}\) can then be used for downstream tasks, such as node classification or link prediction. ### Problem Definition Consistent with prior work [8, 39], we focus on the node classification task. 
Consider a GNN-based node classification model \(\mathcal{M}(\mathbf{A},\mathbf{X};\mathbf{\Theta})\) parameterized by a set of parameters \(\mathbf{\Theta}\) that takes the adjacency matrix \(\mathbf{A}\) and the node features \(\mathbf{X}\), and outputs the corresponding predicted node labels \(\mathbf{\widehat{Y}}\): \[\mathbf{\widehat{Y}}=\mathcal{M}(\mathbf{A},\mathbf{X};\mathbf{\Theta}). \tag{2}\] We seek to minimize a standard classification loss function \(\mathcal{L}\), such as cross-entropy, with respect to the set of model parameters \(\mathbf{\Theta}\): \[\arg\min_{\mathbf{\Theta}}\mathcal{L}(\mathcal{M}(\mathbf{A},\mathbf{X};\mathbf{\Theta}),\mathbf{Y}), \tag{3}\] where \(\mathbf{Y}\in\{0,1\}^{N\times C}\) denotes the ground-truth node labels, with \(C\) being the number of classes. Given a graph dataset \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X},\mathbf{Y})\), our goal is to ensure the privacy of \(\mathcal{G}\) at both the training (Eq. 3) and inference (Eq. 2) phases of the model \(\mathcal{M}\), using the differential privacy notions defined for graphs, i.e., edge-level and node-level DP. Note that preserving privacy during the inference stage is of utmost importance since the adjacency information of the graph is still used at inference time to generate the predicted labels, and thus sensitive information about the graph could potentially be leaked even with \(\mathbf{\Theta}\) being differentially private [39]. ## 4 Proposed Method In this section, we present our proposed ProGAP method, which leverages the aggregation perturbation (AP) technique [39] to ensure differential privacy but introduces a novel progressive learning scheme to restrain the privacy costs of AP incurred during training. The overview of the ProGAP architecture is illustrated in Figure 1, and its forward propagation (inference) and training algorithms are presented in Algorithm 1 and Algorithm 2, respectively. In the following, we first describe our method in detail and then analyze its privacy guarantees. ### Model Architecture and Training We start by considering a simple non-private sequential GNN model \(\mathcal{M}\) with \(K\) aggregation layers as follows: \[\mathbf{X}^{(0)} =\mathrm{MLP}^{(0)}_{base}\left(\mathbf{X};\mathbf{\Theta}^{(0)}_{base}\right), \tag{4}\] \[\mathbf{X}^{(k)} =\mathrm{MLP}^{(k)}_{base}\left(\text{Agg}(\mathbf{A},\mathbf{X}^{(k-1)});\mathbf{\Theta}^{(k)}_{base}\right),\quad\forall k\in\{1,\ldots,K\},\] (5) \[\mathbf{\widehat{Y}} =\mathrm{MLP}_{head}\left(\mathbf{X}^{(K)};\mathbf{\Theta}_{head}\right), \tag{6}\] where \(\mathbf{X}^{(k)}\) denotes the node embeddings generated at layer \(k\) by \(\mathrm{MLP}^{(k)}_{base}\) having parameters \(\mathbf{\Theta}^{(k)}_{base}\), and \(\mathrm{MLP}_{head}\) is a multi-layer perceptron parameterized by \(\mathbf{\Theta}_{head}\) with the softmax activation function that maps the final embeddings \(\mathbf{X}^{(K)}\) to the predicted class probabilities \(\mathbf{\widehat{Y}}\). To make this model differentially private, we follow the aggregation perturbation technique proposed by Sajadmanesh _et al_. [39] and add noise to the output of the aggregation function. Specifically, we replace the original aggregation function Agg in Eq. 
5 with a _Normalize-Aggregate-Perturb_ mechanism defined as: \[\mathrm{NAP}\left(\mathbf{A},\mathbf{X};\sigma\right)=\left[\sum_{j=1}^{N}\frac{\mathbf{X}_{j}}{\|\mathbf{X}_{j}\|_{2}}\mathbf{A}_{j,i}+\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I}_{d})\;\middle|\;\forall i\in\{1,\ldots,N\}\right], \tag{7}\] where \(N\) is the number of nodes, \(d\) is the dimension of the input node embeddings, and \(\sigma\) is the standard deviation of the Gaussian noise.

Figure 1: An example ProGAP architecture with three stages. MLP and JK represent multi-layer perceptron and Jumping Knowledge [47] modules, respectively. NAP denotes the normalize-aggregate-perturb module used to ensure the privacy of the adjacency matrix, with its output cached immediately after computation to save privacy budget. Training is done progressively, starting with the first stage and then expanding to the second and third stages, each using its own head MLP. The final prediction is obtained by the head MLP of the last stage.

Concretely, the NAP mechanism row-normalizes the input embeddings to limit the contribution of each node to the aggregated output, then applies the sum aggregation function followed by adding Gaussian noise to the results. It can be easily shown that the resulting model provides edge-level DP as every query to the adjacency matrix \(\mathbf{A}\) is immediately perturbed with noise. However, training such a model comes at the cost of a significant increase in the privacy budget, which is proportional to the number of queries to the adjacency matrix. Concretely, with \(T\) training iterations, the NAP mechanism is queried \(KT\) times (at each forward pass and each layer), leading to an excessive accumulated privacy cost of \(O(\sqrt{KT})\). To reduce this cost, we propose a progressive training approach as follows: we first split the model \(\mathcal{M}\) into \(K+1\) overlapping submodels, where submodel \(\mathcal{M}_{s}\), \(s\in\{0,1,\ldots,K\}\), is defined as: \[\widetilde{\mathbf{X}}^{(s)} =\mathrm{NAP}\left(\mathbf{A},\mathbf{X}^{(s-1)};\sigma\right), \tag{8}\] \[\mathbf{X}^{(s)} =\mathrm{MLP}^{(s)}_{base}\left(\widetilde{\mathbf{X}}^{(s)};\mathbf{\Theta}^{(s)}_{base}\right),\] (9) \[\widehat{\mathbf{Y}}^{(s)} =\mathrm{MLP}^{(s)}_{head}\left(\mathrm{JK}^{(s)}(\bigcup_{k=0}^{s}\{\mathbf{X}^{(k)}\};\mathbf{\Theta}^{(s)}_{jump});\mathbf{\Theta}^{(s)}_{head}\right), \tag{10}\] where \(\widetilde{\mathbf{X}}^{(s)}\) is the noisy aggregate embedding matrix of \(\mathcal{M}_{s}\), with \(\widetilde{\mathbf{X}}^{(0)}=\mathbf{X}\). \(\mathrm{JK}^{(s)}\) is a Jumping Knowledge module [47] with parameters \(\mathbf{\Theta}^{(s)}_{jump}\) that combines the embeddings generated by submodels \(\mathcal{M}_{0}\) to \(\mathcal{M}_{s}\), and \(\mathrm{MLP}^{(s)}_{head}\) is a lightweight, 1-layer head MLP with parameters \(\mathbf{\Theta}^{(s)}_{head}\) used to train \(\mathcal{M}_{s}\). Finally, \(\widehat{\mathbf{Y}}^{(s)}\) is the output predictions of \(\mathcal{M}_{s}\). Then, we progressively train the model in \(K+1\) stages, starting from the shallowest submodel \(\mathcal{M}_{0}\) and gradually expanding to the deepest submodel \(\mathcal{M}_{K}\) (which is equivalent to the full model \(\mathcal{M}\)) as explained in Algorithm 2. For the final inference after training, we simply use the labels predicted by the last submodel \(\mathcal{M}_{K}\), i.e., \(\widehat{\mathbf{Y}}=\widehat{\mathbf{Y}}^{(K)}\). 
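For illustration, a minimal sketch of the NAP mechanism in Eq. (7), together with a save-on-first-query cache anticipating the strategy detailed next, might look as follows; the helper names and cache structure are our own, not from the ProGAP codebase.

```python
import torch

def nap(A: torch.Tensor, X: torch.Tensor, sigma: float) -> torch.Tensor:
    """Normalize-Aggregate-Perturb (Eq. 7): row-normalize, sum-aggregate, add noise."""
    X_norm = X / X.norm(dim=1, keepdim=True).clamp(min=1e-12)   # unit-norm rows
    # For an undirected graph A is symmetric, so A @ X_norm sums the
    # normalized embeddings of each node's neighbors as in Eq. (7).
    agg = torch.sparse.mm(A, X_norm)
    return agg + sigma * torch.randn_like(agg)

_cache: dict = {}

def cached_nap(stage: int, A: torch.Tensor, X: torch.Tensor, sigma: float) -> torch.Tensor:
    # Perturb once per stage on the first query; reuse the noisy result afterwards.
    if stage not in _cache:
        _cache[stage] = nap(A, X, sigma)
    return _cache[stage]
```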
_The key point in this training strategy is that we immediately save the outputs of the NAP modules on their first query and reuse them throughout the training._ More specifically, at each stage \(s\), the perturbed aggregate embedding matrix \(\widetilde{\mathbf{X}}^{(s)}\) computed in the first forward pass of \(\mathcal{M}_{s}\) (via Eq. 8) is stored in the cache and reused in all further queries. This caching mechanism allows us to reduce the privacy costs of the model by a factor of \(T\), as the NAP module in this case is only queried \(K\) times (once per stage) instead of \(KT\) times. At the same time, the aggregations \(\widetilde{\mathbf{X}}^{(s)}\) are computed over the embeddings \(\mathbf{X}^{(s-1)}\) that are already learned in the preceding stage \(s-1\), which provide more expressive power than the raw node features as they also encode information from the adjacency matrix and node labels, and thus lead to better performance. Remark 1: The proposed ProGAP model can also be trained in a layerwise fashion, i.e., by training each layer \(\text{MLP}_{base}^{(k)}\) individually, while keeping the parameters of the preceding layers frozen and using the same caching mechanism. Note that this is different from the proposed progressive approach, in which all the parameters from layer 0 to layer \(s\), i.e., \(\mathbf{\Theta}_{base}^{(0)},\ldots,\mathbf{\Theta}_{base}^{(s)}\), are trained together in each stage \(s\). In Section 6, we show that such a progressive training strategy leads to better performance than layerwise training.
```
Input: Adjacency matrix \(\mathbf{A}\); node features \(\mathbf{X}\); node labels \(\mathbf{Y}\); model depth \(K\); noise standard deviation \(\sigma\)
Output: Trained model parameters \(\mathfrak{P}_{K}^{*}\)
1: initialize \(\mathbf{\Theta}_{base}^{(0)},\mathbf{\Theta}_{jump}^{(0)},\mathbf{\Theta}_{head}^{(0)}\) randomly
2: \(\mathfrak{P}_{0}\leftarrow\{\mathbf{\Theta}_{base}^{(0)},\mathbf{\Theta}_{jump}^{(0)},\mathbf{\Theta}_{head}^{(0)}\}\)
3: for \(s\in\{0,\ldots,K\}\) do
4:   \(\mathfrak{P}_{s}^{*}\leftarrow\arg\min_{\mathfrak{P}_{s}}\mathcal{L}\Big{(}\mathcal{M}_{s}\left(\mathbf{A},\mathbf{X};\sigma,\mathfrak{P}_{s}\right),\mathbf{Y}\Big{)}\)
5:   if \(s<K\) then
6:     initialize \(\mathbf{\Theta}_{base}^{(s+1)},\mathbf{\Theta}_{jump}^{(s+1)},\mathbf{\Theta}_{head}^{(s+1)}\) randomly
7:     \(\mathfrak{P}_{s+1}\leftarrow\mathfrak{P}_{s}^{*}\cup\{\mathbf{\Theta}_{base}^{(s+1)},\mathbf{\Theta}_{jump}^{(s+1)},\mathbf{\Theta}_{head}^{(s+1)}\}\setminus\{\mathbf{\Theta}_{jump}^{*(s)},\mathbf{\Theta}_{head}^{*(s)}\}\)
8:   end if
9: end for
10: return \(\mathfrak{P}_{K}^{*}\)
```
**Algorithm 2** ProGAP Training ### Privacy Analysis With the following theorem, we show that the proposed training strategy provides edge-level DP. The proof is provided in Appendix A.2. Theorem 4.1: _Given the maximum stage \(K\geq 0\) and noise variance \(\sigma^{2}\), for any \(\delta\in(0,1)\) Algorithm 2 satisfies edge-level \((\epsilon,\delta)\)-DP with \(\epsilon=\frac{K}{2\sigma^{2}}+\frac{\sqrt{2K\log(1/\delta)}}{\sigma}\)._ To ensure node-level DP, however, we must train every submodel using DP-SGD or its variants, as in this case node features and labels are also private and can be leaked with non-private training. 
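As a sanity check on Theorem 4.1, the edge-level bound is exactly what one obtains by composing \(K\) sensitivity-1 Gaussian mechanisms under zero-concentrated DP and converting to \((\epsilon,\delta)\)-DP; the small helper below (ours, for illustration only) reproduces it.

```python
import math

def edge_level_epsilon(K: int, sigma: float, delta: float) -> float:
    """Edge-level epsilon of Theorem 4.1: K Gaussian mechanisms composed via zCDP."""
    rho = K / (2 * sigma ** 2)                        # total zCDP of K compositions
    return rho + 2 * math.sqrt(rho * math.log(1 / delta))

# Example: a model with K = 3 stages, sigma = 4, and delta = 1e-7
# gives edge_level_epsilon(3, 4.0, 1e-7) ≈ 2.55.
```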
Theorem 4.2 establishes the node-level DP guarantee of ProGAP's training algorithm when combined with DP-SGD: Theorem 4.2: _Given the number of nodes \(N\), batch-size \(B<N\), number of per-stage training iterations \(T\), gradient clipping threshold \(C>0\), maximum stage \(K\geq 0\), maximum cut-off degree \(D\geq 1\), noise variance for aggregation perturbation \(\sigma^{2}_{AP}>0\), and noise variance for gradient perturbation \(\sigma^{2}_{GP}>0\), Algorithm 2 satisfies node-level \((\epsilon,\delta)\)-DP for any \(\delta\in(0,1)\) with:_ \[\epsilon\leq\min_{\alpha>1} \frac{(K+1)T}{\alpha-1}\log\Bigg\{\bigg(1-\frac{B}{N}\bigg)^{\alpha-1}\left(\alpha\frac{B}{N}-\frac{B}{N}+1\right)\] \[+\binom{\alpha}{2}\left(\frac{B}{N}\right)^{2}\left(1-\frac{B}{N}\right)^{\alpha-2}e^{\frac{C^{2}}{\sigma^{2}_{GP}}}\] \[+\sum_{l=3}^{\alpha}\binom{\alpha}{l}\left(1-\frac{B}{N}\right)^{\alpha-l}\left(\frac{B}{N}\right)^{l}e^{(l-1)(\frac{C^{2}l}{2\sigma^{2}_{GP}})}\Bigg\}\] \[+\frac{DK\alpha}{2\sigma^{2}_{AP}}+\frac{\log(1/\delta)}{\alpha-1},\] _provided that the optimization in line 4 of Algorithm 2 is done using DP-SGD._ The proof is deferred to Appendix 0.A.3. Note that to decrease the node-level sensitivity of the NAP mechanism (i.e., the impact of adding/removing a node on the output of the NAP mechanism), we assume an upper bound \(D\) on node degrees, and randomly sample edges from the graph to ensure that each node has no more than \(D\) outgoing edges. This is a standard technique to ensure bounded-degree graphs [8, 39]. In addition to training privacy, ProGAP also guarantees privacy during inference at both the edge and node levels without any further privacy costs. This is because the noisy aggregate matrices \(\widetilde{\mathbf{X}}^{(s)}\) corresponding to all the nodes (both training and test ones) are already computed and cached during training and reused for inference (i.e., lines 4 and 5 of Algorithm 1 are not executed at inference time). As a result, the inference for a node no longer depends on its private neighborhood and is done by post-processing differentially private outputs, which does not incur any additional privacy costs.

## 5 Experimental Setup

We test our proposed method on node classification tasks and evaluate its effectiveness in terms of classification accuracy and privacy guarantees.

### Datasets

We conduct experiments on three real-world datasets that have been used in previous work [33, 39, 8], namely Facebook [40], Reddit [16], and Amazon [6], and also two new datasets: Facebook-100 [40] and WeNet [30, 13]. The Facebook dataset is a collection of anonymized social network data from UIUC students, where nodes represent users, edges indicate friendships, and the task is to predict students' class year. The Reddit dataset comprises a set of Reddit posts as nodes, where an edge indicates that the same user commented on both posts, and the goal is to predict the posts' subreddit. The Amazon dataset is a product co-purchasing network, with nodes representing products and edges indicating whether two products are purchased together, and the objective is to predict the product category. Facebook-100 is an extended version of the Facebook dataset combining the social networks of 100 different American universities. WeNet is a mobile sensing dataset collected from university students in four different countries. Nodes represent eating events, which are linked based on the similarity of location and Wi-Fi sensor readings.
Node features are extracted based on cellular and application sensors, and the goal is to predict the country of the events. A summary of the datasets is provided in Table 1.

| Dataset | # Nodes | # Edges | # Features | # Classes | Med. Degree |
| --- | --- | --- | --- | --- | --- |
| Facebook | 26,406 | 2,117,924 | 501 | 6 | 62 |
| Reddit | 116,713 | 46,233,380 | 602 | 8 | 209 |
| Amazon | 1,790,731 | 80,966,832 | 100 | 10 | 22 |
| Facebook-100 | 1,120,280 | 86,304,478 | 537 | 6 | 57 |
| WeNet | 37,576 | 22,684,206 | 44 | 4 | 286 |

Table 1: Dataset Statistics.

### Baselines

We compare our ProGAP method against GAP [39], which is the closest related work to ours. We use GAP's official implementation on GitHub¹ and follow the same experimental setup as reported in the original paper. We do not include other available differentially private GNN approaches, as they either (i) are outperformed by GAP (e.g., [44, 8]) or (ii) have different problem settings (e.g., [34, 38]) that make them not directly comparable to our method.

Footnote 1: [https://github.com/sisaman/GAP](https://github.com/sisaman/GAP)

### Implementation Details

We use PyTorch Geometric [11] for implementing the models, autodp for privacy accounting, and Opacus [49] for DP training. We follow the same experimental setup as GAP [39], and randomly split the nodes in all the datasets into training, validation, and test sets with a 75/10/15% ratio, respectively. We vary \(\epsilon\) within \(\{0.25,0.5,1,2,4,\infty\}\) for the edge-level privacy setting (\(\epsilon=\infty\) corresponds to the non-private setting) and within \(\{2,4,8,16,32\}\) for the node-level privacy setting. For each \(\epsilon\) value, we tune the following hyperparameters based on the mean validation set accuracy computed over 10 runs: the number of \(\text{MLP}_{base}\) layers in \(\{1,2\}\), model depth \(K\) in \(\{1,2,3,4,5\}\), and learning rate in \(\{0.01,0.05\}\). The value of \(\delta\) is fixed per dataset to be smaller than the inverse of the number of private units (i.e., edges for edge-level privacy, nodes for node-level privacy). For all cases, we set the number of \(\text{MLP}_{head}\) layers to 1 and use concatenation for the JK modules. Additionally, we set the number of hidden units to 16 and use the SELU activation function [26]. We use batch normalization except for the node-level setting, for which we use group normalization with one group. Under the edge-level setting, we train the models with full-sized batches for 100 epochs using the Adam optimizer and perform early stopping based on the validation set accuracy. For the node-level setting, we use randomized neighbor sampling to bound the maximum degree \(D\) to 50 for Amazon, 100 for Facebook and Facebook-100, and 400 for Reddit and WeNet. We use DP-Adam [15] with a clipping threshold of 1.0. We tune the number of per-stage epochs in \(\{5,10\}\) and set the batch size to 256, 1024, 2048, 4096, and 4096 for Facebook, Reddit, Amazon, Facebook-100, and WeNet, respectively. Finally, we report the average test accuracy over 10 runs with 95% confidence intervals calculated by bootstrapping with 1000 samples. We open-source our implementation on GitHub.³

Footnote 3: It will be made public upon acceptance.
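The bounded-degree sampling used for the node-level experiments (keeping at most \(D\) outgoing edges per node) can be sketched as follows. This is our own simplified version, written against PyTorch Geometric's edge-index convention, not the paper's exact sampler:

```python
import torch

def bound_out_degree(edge_index: torch.Tensor, D: int, seed: int = 0) -> torch.Tensor:
    """Randomly keep at most D outgoing edges per source node.

    edge_index: [2, E] tensor of (source, target) pairs, as in PyTorch
    Geometric. A plain Python loop is used for clarity, not speed.
    """
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(edge_index.size(1), generator=g)
    shuffled = edge_index[:, perm]  # visit edges in random order
    kept, out_deg = [], {}
    for e in range(shuffled.size(1)):
        s = int(shuffled[0, e])
        if out_deg.get(s, 0) < D:
            out_deg[s] = out_deg.get(s, 0) + 1
            kept.append(e)
    return shuffled[:, torch.tensor(kept, dtype=torch.long)]
```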
## 6 Results and Discussion

### Accuracy-Privacy Trade-off

Table 2 presents the test accuracy of ProGAP against GAP at three different privacy levels: non-private with \(\epsilon=\infty\), edge-level privacy with \(\epsilon=1\), and node-level privacy with \(\epsilon=8\). The results are reported as mean accuracy \(\pm\) 95% confidence interval.

| Privacy Level | Method | \(\epsilon\) | Facebook | Reddit | Amazon | Facebook-100 | WeNet |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Non-Private | ProGAP | \(\infty\) | **84.5 \(\pm\) 0.24** | 99.3 \(\pm\) 0.03 | **93.3 \(\pm\) 0.04** | **74.4 \(\pm\) 0.14** | **73.9 \(\pm\) 0.25** |
| Non-Private | GAP | \(\infty\) | 80.5 \(\pm\) 0.42 | **99.5 \(\pm\) 0.01** | 92.0 \(\pm\) 0.10 | 66.4 \(\pm\) 0.35 | 69.7 \(\pm\) 0.14 |
| Edge-Level | ProGAP | 1.0 | **77.2 \(\pm\) 0.33** | **97.8 \(\pm\) 0.05** | **84.2 \(\pm\) 0.07** | **56.9 \(\pm\) 0.30** | **68.8 \(\pm\) 0.23** |
| Edge-Level | GAP | 1.0 | 69.4 \(\pm\) 0.39 | 97.5 \(\pm\) 0.06 | 78.8 \(\pm\) 0.26 | 46.5 \(\pm\) 0.58 | 62.4 \(\pm\) 0.28 |
| Node-Level | ProGAP | 8.0 | **69.3 \(\pm\) 0.33** | **94.0 \(\pm\) 0.04** | **79.1 \(\pm\) 0.10** | **48.5 \(\pm\) 0.36** | **61.0 \(\pm\) 0.34** |
| Node-Level | GAP | 8.0 | 63.9 \(\pm\) 0.49 | 93.9 \(\pm\) 0.09 | 77.6 \(\pm\) 0.07 | 43.0 \(\pm\) 0.20 | 58.2 \(\pm\) 0.39 |

Table 2: Comparison of Experimental Results (Mean Accuracy \(\pm\) 95% CI)

We observe that ProGAP outperforms GAP in almost all cases, and often by a substantial margin. Specifically, in the non-private setting, ProGAP achieves significantly higher test accuracies on all datasets except Reddit, on which GAP performs only slightly better. Under both the edge-level and node-level privacy settings, however, ProGAP consistently outperforms GAP on all datasets, with the largest performance gaps of 10.4 and 5.5 accuracy points, respectively, both observed on Facebook-100. To examine the performance of the methods at different privacy budgets, we varied \(\epsilon\) from 0.25 to 4 for the edge-level private algorithms and from 2 to 32 for the node-level private ones, recording the accuracy of each method at each privacy budget. The outcome for both edge-level and node-level privacy settings is depicted in Figure 2.⁴ Notably, we observe that ProGAP achieves higher accuracies than GAP across all \(\epsilon\) values tested and approaches the non-private accuracy more quickly under both privacy settings. This is because in ProGAP each aggregation step is computed on the node embeddings learned in the previous stage, providing greater expressive power than GAP, which recursively computes the aggregations on the initial node representations.

Footnote 4: The results on the Facebook dataset are omitted due to space limitation.

Figure 2: Accuracy-privacy trade-off of edge-level (top) and node-level (bottom) private methods. The dotted line represents the accuracy of the non-private ProGAP.

### Convergence Analysis

We examine the convergence of ProGAP to further understand its behavior under the two privacy settings. We report the training and validation accuracy of ProGAP per training step under edge-level privacy with \(\epsilon=1\) and node-level privacy with \(\epsilon=8\). For all datasets, ProGAP is trained for 100 and 10 epochs per stage under edge- and node-level privacy, respectively. We fix \(K=5\) in all settings. The results are shown in Figure 3.
We observe that both training and validation accuracies increase as ProGAP moves from stage 0 to 5, with diminishing returns for more stages. Since the receptive field of the nodes grows with the number of stages, this indicates the higher importance of the nearby neighbors of each node. Moreover, we observe negligible discrepancies between training and validation accuracy when the model converges, which suggests higher resilience to privacy attacks, such as membership inference, that typically rely on large generalization gaps. This result is in line with previous work showing the effectiveness of DP against privacy attacks [20, 21, 32, 39].

Figure 3: Convergence of ProGAP with \(K=5\) under edge-level (top) and node-level (bottom) privacy, with \(\epsilon=1\) and \(\epsilon=8\), respectively.

### Effect of the Model Depth

We explore how the performance of ProGAP is influenced by the model depth \(K\), or equivalently, the number of stages \(K+1\). We experiment with values of \(K\) ranging from 1 to 5 and evaluate ProGAP's accuracy under varying privacy budgets of \(\epsilon\in\{0.25,1,4\}\) for edge-level DP and \(\epsilon\in\{2,8,32\}\) for node-level privacy. The results are shown in Figure 4. We observe that ProGAP can generally gain from increasing the depth, but there is a trade-off depending on the privacy budget: deeper models lead to better accuracy under higher privacy budgets, while lower privacy budgets require shallower models for optimal performance. This is because ProGAP can leverage data from more remote nodes with a higher value of \(K\), which can boost the final accuracy, but a higher \(K\) also increases the amount of noise in the aggregations, which is detrimental to the model's accuracy. When the privacy budget is lower and the amount of noise is greater, ProGAP performs best at smaller values of \(K\); as the privacy budget grows, the magnitude of the noise is lowered, enabling the models to take advantage of greater \(K\) values.

Figure 4: Effect of the model depth on ProGAP's accuracy under edge-level (top) and node-level (bottom) privacy.

### Progressive vs. Layerwise Training

We compare the performance of ProGAP under two different training strategies: progressive training and layerwise training. Similar to Table 2, we report the test accuracy of both strategies at three different privacy levels: non-private with \(\epsilon=\infty\), edge-level privacy with \(\epsilon=1\), and node-level privacy with \(\epsilon=8\). The results are presented in Table 3. Overall, we observe that progressive training yields higher accuracies than the layerwise strategy in most cases, which, as mentioned in Section 4, is due to the higher capacity of the progressive approach.

| Privacy Level | \(\epsilon\) | Training Strategy | Facebook | Reddit | Amazon | Facebook-100 | WeNet |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Non-Private | \(\infty\) | PR | 84.5 \(\pm\) 0.24 | **99.3 \(\pm\) 0.03** | **93.3 \(\pm\) 0.04** | **74.4 \(\pm\) 0.14** | **73.9 \(\pm\) 0.25** |
| Non-Private | \(\infty\) | LW | **85.6 \(\pm\) 0.29** | 99.3 \(\pm\) 0.03 | 92.9 \(\pm\) 0.04 | 74.0 \(\pm\) 0.16 | 71.9 \(\pm\) 0.19 |
| Edge-Level | 1.0 | PR | **77.2 \(\pm\) 0.33** | 97.8 \(\pm\) 0.05 | **84.2 \(\pm\) 0.07** | **56.9 \(\pm\) 0.30** | **68.8 \(\pm\) 0.23** |
| Edge-Level | 1.0 | LW | 76.8 \(\pm\) 0.22 | **98.0 \(\pm\) 0.06** | 83.4 \(\pm\) 0.08 | 55.7 \(\pm\) 0.25 | 67.7 \(\pm\) 0.25 |
| Node-Level | 8.0 | PR | **69.3 \(\pm\) 0.33** | **94.0 \(\pm\) 0.04** | **79.1 \(\pm\) 0.10** | 48.5 \(\pm\) 0.36 | **61.0 \(\pm\) 0.34** |
| Node-Level | 8.0 | LW | 68.7 \(\pm\) 0.48 | 94.0 \(\pm\) 0.07 | 78.8 \(\pm\) 0.05 | **49.2 \(\pm\) 0.57** | 59.3 \(\pm\) 0.42 |

Table 3: Accuracy Comparison of Progressive (PR) and Layerwise (LW) Training

## 7 Conclusion

In this paper, we introduced ProGAP, a novel differentially private GNN that improves the challenging accuracy-privacy trade-off in learning from graph data. Our approach uses a progressive training scheme that splits the GNN into a sequence of overlapping submodels, each of which is trained over privately aggregated node embeddings learned and cached by the previous submodels. By combining this technique with the aggregation perturbation method, we formally proved that ProGAP ensures edge-level and node-level privacy guarantees for both the training and inference stages. Empirical evaluations on benchmark graph datasets demonstrated that ProGAP can achieve state-of-the-art accuracy, outperforming existing methods. Future work could include exploring new architectures or training strategies to further improve the accuracy-privacy trade-off of differentially private GNNs, especially in the more challenging node-level privacy setting.

## Acknowledgments

This work was supported by the European Commission's H2020 Program ICT-48-2020, AI4Media Project, under grant number 951911. It was also supported by the European Commission's H2020 WeNet Project, under grant number 823783.
2310.10020
Optimized nanodevice fabrication using clean transfer of graphene by polymer mixture: Experiments and Neural Network based simulations
In this study, we investigate both experimentally and computationally the molecular interactions of two distinct polymers with graphene. Our experimental findings indicate that the use of a polymer mixture reduces the transfer induced doping and strain in fabricated graphene devices as compared to conventional single polymer wet transfer. We found that such reduction is related to the decreased affinity of mixture of polymethyl methacrylate and angelica lactone polymer for graphene. We investigated changes in binding energy (BE) of polymer mixture and graphene by considering energy decomposition analysis using a pre-trained potential neural network. It was found that numerical simulations accurately predicted two-fold reduction of BE and order of magnitude reduction of electrostatic interaction between polymers.
Jared K. Averitt, Sajedeh Pourianejad, Olubunmi Ayodele, Kirby Schmidt, Anthony Trofe, Joseph Starobin, Tetyana Ignatova
2023-10-16T02:43:11Z
http://arxiv.org/abs/2310.10020v1
###### Abstract

In this study, we investigate both experimentally and computationally the molecular interactions of two distinct polymers with graphene. Our experimental findings indicate that the use of a polymer mixture reduces the transfer induced doping and strain in fabricated graphene devices as compared to conventional single polymer wet transfer. We found that such reduction is related to the decreased affinity of mixture of polymethyl methacrylate and angelica lactone polymer for graphene. We investigated changes in binding energy (BE) of polymer mixture and graphene by considering energy decomposition analysis using a pre-trained potential neural network. It was found that numerical simulations accurately predicted two-fold reduction of BE and order of magnitude reduction of electrostatic interaction between polymers.

**Optimized nanodevice fabrication using clean transfer of graphene by polymer mixture: Experiments and Neural Network based simulations**

Jared K. Averitt, Sajedeh Pourianejad, Olubunmi Ayodele, Kirby Schmidt, Anthony Trofe, Joseph Starobin*, and Tetyana Ignatova\({}^{*}\)

_Department of Nanoscience, University of North Carolina at Greensboro, Greensboro, North Carolina 27401, United States_

JKA, SP contributed equally

## Introduction

Among 2D materials, graphene has been the most extensively studied due to its remarkable properties [1-4] and promising applications. The chemical vapor deposition (CVD) method remains the most reliable way to produce high-quality, large-area graphene on Cu [5,6], Ni [7], or Pt [8,9] surfaces. Even though CVD-grown graphene usually consists of a single or sometimes multiple layers, [10-12] it cannot be used on the metallic growth surface, and thus clean transfer onto a target substrate such as Si/SiO\({}_{2}\) [13], glass [14], polyethylene terephthalate [15], or paper [16] is crucially important for various applications ranging from biomedical [17] to nanoelectronics and quantum computing [18]. Several transfer methods are in use [19-25], and the polymer-support method is among the most promising. Support polymers include polycarbonate [28] and polydimethylsiloxane [7], but the most popular one is polymethylmethacrylate (PMMA) [29,30]. However, inconsistency in the quality of graphene transferred with PMMA has limited its application in device fabrication.

Figure 1: The gas-phase relaxed molecules: PMMA (a) and ALP polymer (c). Structural formulas of PMMA (b) and ALP (d). A 5x7 unit cell of graphene was used as the graphene model, top view (e), with periodic boundary conditions shown by the dashed line.

This inconsistency is attributed to the presence of carbonyl functional groups (C=O) [31] and long chain structures [32], which contribute to the high binding energy of PMMA to graphene and cause incomplete removal from the 2D surface after transfer to the device substrate (here we are not discussing graphene imperfections such as defects, grain boundaries, edges, etc. [27]). Additional aggressive solvent treatment (either hot or fuming acetone) [24, 33] or thermal annealing [34] did not significantly improve PMMA removal. Other cleaning methods based on either UV/ozone treatment [35] or argon beam bombardment [36] have been employed, but they degrade graphene quality.
There are reports of less aggressive alternative methods, but they require complicated equipment setups and involve the use of two layers of PMMA [34, 37], which further causes additional wrinkles and cracks in graphene during transfer [35]. The efficiency of graphene transfer can be improved by blending PMMA with a polymer having a low binding energy to graphene [36, 37]. In this work we demonstrate large-area, clean graphene transfer using PMMA and an additive: the polyfuranone chain products produced from biomass-derived angelica lactone via a C-C coupling reaction, which we will call ALP for simplicity (Fig. 1 c, d) [38]. Understanding the physical mechanisms behind the binding of polymer molecules on graphene is a challenging computational problem. Indeed, the binding cannot be described as a single global minimum of a potential energy, since the polymer molecules are not covalently attached to the graphene surface. To address this challenge, we used a potential neural network-based approach to calculate minimal-energy configurations of graphene and the polymer mixture by considering multiple initial conditions for the positions of the polymer atoms (the high-throughput cycle shown in Fig. 2). This method was chosen to circumvent the computational cost of the electronic-structure calculations typical of DFT-based simulations.

Figure 2: Schematic representation of the molecular rotation algorithm used to generate conformations. The orange cube represents one polymer molecule while the blue represents the second polymer molecule.

**Experimental Approach and Results of Experiments**

Figure S1 shows the procedure of the proposed process for transferring CVD graphene. In the conventional transfer method, PMMA is typically spin-coated on the graphene-on-growth substrate. In this work we mixed the solutions of PMMA and ALP at different weight concentration ratios of ALP:PMMA ([1:1], [1:2], [1:4], [1:6], [1:0]) and then spin-coated them on CVD graphene grown on Cu foil. It has to be noted that ALP remains a jelly-like substance after complete solvent removal, even after cooling the polymer to 4 °C; therefore we could not use ALP alone as a sacrificial layer in the transfer procedure. The spin speed was adjusted to obtain a polymer film thickness of approximately 1 \(\upmu\)m. After spin-coating, samples were dried at room temperature for 24 h and then soft-baked at 95\({}^{\circ}\)C for 5 min to evaporate the solvent. The Cu foil was delaminated by applying the "bubbling procedure," which is basically a water electrolysis process, described in detail previously [39, 40]. We observed that the polymer-graphene stack detached from the Cu foil quickly (3-5 seconds) and effectively for the ALP:PMMA ratio of [1:4], leaving behind a clean growth substrate. After cleaning with de-ionized water, the floating polymer-graphene "sandwich" was deposited on a Si/SiO\({}_{2}\) substrate and dried gradually at 90-135\({}^{\circ}\)C for 30 min. Finally, the sacrificial layer made of the polymer mixture was removed with acetone in a Soxhlet extractor to prevent any contamination from the solvent side [20].

We applied multiple characterization techniques to compare the quality of the transferred material. Scanning electron microscopy (SEM) images of graphene transferred using ALP:PMMA [1:4] showed fewer defects and polymer residues (Fig. 3 a, b) in comparison to graphene transferred with PMMA only. Results for other concentrations of ALP:PMMA can be found in the SI. The polymer mixture concentration ALP:PMMA [1:4] will be used for all further considerations and will be compared to the PMMA-only transferred graphene.

Figure 3: SEM images of graphene transferred with (a) ALP:PMMA and (b) PMMA; the scan area is 135 \(\upmu\)m\({}^{2}\); the dark spots observed in both images are atmospheric water molecules adsorbed on the graphene surface. AFM maps of graphene transferred with (c) ALP:PMMA (RMS = 0.96 \(\pm\) 0.43 nm) and (d) PMMA (RMS = 1.98 \(\pm\) 0.47 nm); the insets show the line profiles and surface roughness. KPFM maps of graphene transferred with (e) ALP:PMMA and (f) PMMA, with line profiles of the surface contact potential. Scale bars are 1 \(\upmu\)m.

To analyze polymer residues on the graphene surface, we used atomic force microscopy (AFM). In close agreement with previous reports [41-43], the PMMA-transferred sample showed a few intermittent cracks in the graphene and moderate polymer residues (RMS = 1.98 \(\pm\) 0.47 nm). The higher quality of the ALP:PMMA-transferred sample (Fig. 3 c, see line profile in the inset), with RMS = 0.96 \(\pm\) 0.43 nm, could be attributed to (1) softening of the polymer blend by the jelly-like ALP, which distributes the polymer-induced strain more evenly and thus minimizes the appearance of graphene cracks, and (2) the lower adhesion of the polymer blend compared to PMMA, which makes the sacrificial layer easier to remove from the graphene surface during the final transfer step. Of course, we cannot prevent the attachment of polymer residues at defect sites of graphene, which results in the formation of strong covalent bonds between graphene and polymer molecules, as shown by Leong et al. [31]. The proposed ALP:PMMA transfer method significantly reduces graphene damage; thus, only non-covalent interactions between graphene and the polymer sacrificial film are expected, and the numerical calculations below provide more details on these. Adsorbed polymer residues can significantly reduce the charge carrier mobility of graphene [43-45]; therefore, the transport properties of graphene can be improved by minimizing the polymer residues [41-43,47-49].

The Kelvin probe force microscopy (KPFM) characterization reveals a homogeneous surface potential distribution over a large area of the graphene transferred with ALP:PMMA (Fig. 3 e and inset), whereas roughly twice larger variations in the surface potential distribution were observed in the PMMA-transferred graphene sample (Fig. 3 f and inset), reflecting parasitic doping (a Fermi-level shift) introduced by the transfer. Kim et al. reported that the origin of this inhomogeneity is directly related to PMMA residues [50].

Figure 4: Analysis of the quality of graphene using Raman spectroscopy. Doping (a) and strain (b) maps of graphene transferred by the polymer blend method; the blue dotted region is few-layer graphene. Doping (c) and strain (d) maps of graphene transferred by the PMMA method. (e) Correlation plot of 2D band center vs. width. (f) 2D band center vs. G band center. The scale for (a)-(d) is 0.5 \(\upmu\)m per pixel.

Hyperspectral Raman characterization is known to be a powerful tool to examine the quality and layer number, as well as to quantify local doping and strain variations over a large area of graphene [51, 52, 53]. We performed Raman mapping for the samples transferred with the polymer blend and with PMMA. The spectra were fit using least-squares minimization with Lorentzian peaks, and the position, broadening, and shift of the characteristic Raman peaks of graphene (D, G, and 2D) were analyzed.
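For concreteness, the Lorentzian fitting step can be sketched with SciPy's `curve_fit`. This is our own illustration (the paper does not publish its fitting code), and the initial guesses below are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, amp, offset):
    """Single Lorentzian peak: amp * (gamma/2)^2 / ((x - x0)^2 + (gamma/2)^2) + offset."""
    return amp * (gamma / 2) ** 2 / ((x - x0) ** 2 + (gamma / 2) ** 2) + offset

def fit_peak(shift, counts, center_guess, width_guess=20.0):
    """Least-squares fit of one Raman band; returns (center, FWHM) with 1-sigma errors."""
    p0 = [center_guess, width_guess, counts.max() - counts.min(), counts.min()]
    popt, pcov = curve_fit(lorentzian, shift, counts, p0=p0)
    perr = np.sqrt(np.diag(pcov))
    return (popt[0], perr[0]), (popt[1], perr[1])

# e.g., fit the G (~1580 cm^-1) and 2D (~2680 cm^-1) bands per map pixel:
# (g_pos, g_err), (g_fwhm, _) = fit_peak(shift, spectrum, 1580)
```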
From the correlation plots in Figure 4 e, f, we clearly see that the PMMA-transferred sample has a wider 2D peak and a significant shift in both the G and 2D peaks when compared with the ALP:PMMA-transferred sample, indicating increased charge carrier doping. In accordance with [51], strain and doping induce shifts in the characteristic peaks (2D and G) of graphene: the G peak is particularly responsive to doping, while the 2D peak is influenced by strain. Plotting the 2D peak position against the G peak position provides a visual representation of the levels of strain and doping; points at the intersection indicate zero strain and doping. As strain is applied, peak positions shift along the red curve (Fig. 4 f), with both G and 2D peaks moving to lower wavenumbers for tensile strain and higher wavenumbers for compressive strain. Increased p-doping shifts the bands along the magenta curve. It is important to note that this procedure is only applicable to monolayer graphene; hence, a few-layer graphene area in the ALP:PMMA-transferred sample was excluded as an outlier from the scatter plots. The corresponding doping maps in Figure 4 a, c strongly align with the KPFM results, showing a substantial reduction in parasitic p-doping in the ALP:PMMA-transferred graphene compared to the PMMA-transferred graphene. In Figure 4 a, c, green hexagons denote multilayer graphene areas. Additionally, in the strain maps of Figure 4 b, d, we observe both high- and low-strain areas in both samples, with the ALP:PMMA-transferred sample exhibiting a more uniformly distributed strain.

## Results of Numerical Simulations

### Introduction to numerical methodology

As mentioned above, we developed a new approach to perform high-throughput-cycle atomic simulations that quantify nonlocal interactions at the 2D interface between graphene and non-covalently bonded molecules. To create a matrix of atomic parameters (positions, masses, energies, and forces) we used the Atomic Simulation Environment (ASE) [54] (Fig. 5 a). At each step of the matrix update, the atomic-level forces were determined using the potential neural network (PNN) ANI-1ccx [55]. The positions were then updated to determine the dynamic parameters of the nuclei using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) predictor-corrector optimization process (Fig. 5 b) [56]. The atomic charges \(q_{\mathrm{i}}\) of each molecule were calculated separately with the respective atomic parameters and relaxed geometries [54] using the full Geometry-Dependent Atomic Charges (GDAC) [57] method (Fig. 5 c). We utilized the Multiwfn [58] software package to perform energy decomposition analysis using classical force fields (EDA-FF), similar to that described in [59]. The binding energy (BE) of each system (Fig. 1) was normalized per unit area of van der Waals overlap. The van der Waals area of overlap is the region where the binding energy can be approximated by a Lennard-Jones potential, i.e., where the polymer atoms are in close proximity to the 2D interface. The van der Waals radii we used for C, H, and O atoms are 1.77, 1.2, and 1.52 Å, respectively [58].

### High throughput cycle description

The matrix of atomic parameters (which will be called the matrix from here on) is initialized before going through the PNN (Fig. 5). We initialized the positions using a series of rotations relative to a single input configuration (Fig. 2 a). The rotations are performed as Euler rotations, where phi, theta, and psi are the Euler angles and the molecules' centers of mass are the centers of rotation.
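A sketch of this rotation step, using SciPy's `Rotation` and an unweighted centroid in place of the mass-weighted center of mass, is given below; the specific face/rotation mapping is our own hypothetical enumeration, one consistent reading of the 6 x 4 construction described immediately after this sketch:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotate_about_center(positions: np.ndarray, phi: float, theta: float,
                        psi: float) -> np.ndarray:
    """Rotate Cartesian coordinates by Euler angles (phi, theta, psi),
    in degrees, about the molecule's centroid (sketch only; the paper
    rotates about the center of mass)."""
    center = positions.mean(axis=0)
    R = Rotation.from_euler("zyz", [phi, theta, psi], degrees=True)
    return R.apply(positions - center) + center

# Hypothetical enumeration: 6 cube faces x 4 in-plane 90-degree rotations
# = 24 orientations per molecule; applying these independently to both
# molecules gives 24 x 24 = 576 joint configurations.
faces = [(0, 0), (90, 0), (180, 0), (270, 0), (0, 90), (0, -90)]  # (phi, theta)
orientations = [(phi, theta, psi) for phi, theta in faces
                for psi in (0, 90, 180, 270)]
assert len(orientations) == 24
```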
In order to generate 576 configurations of equal-proportion displacements (Fig. 2), we generated 6 sets of conformations (representing each face of a 6-sided cube), with each set consisting of 4 conformations (representing \(90^{\circ}\) rotations along each face of the cube).

### Atomic charge calculations

To perform EDA-FF calculations one needs to determine the charge of each atom. The PNN does not provide these charges, so we implemented the geometry-dependent atomic charge (GDAC) method [57].

### Energy Decomposition Analysis using Force Fields (EDA-FF)

The converged matrix positions and the charges from the cycle (Fig. 5 c) are used to calculate the binding energy (BE) through energy decomposition analysis using force fields (EDA-FF). By calculating the energy of the interacting polymer-graphene system \(E(C_{1},C_{2},\ldots,C_{N})\) and the sum of the fragment energies after dissociation, \(\sum E(C_{i})\), a numerical comparison of the adhesion of the polymer(s) and graphene is obtained. The binding energy between \(N\) molecular fragments is

\[\Delta E=E(C_{1},C_{2},\ldots,C_{N})-\sum E(C_{i}). \tag{1}\]

Stronger adhesion corresponds to larger negative values of the BE. EDA-FF is an attractive method because it requires only optimized structures and atomic charges as inputs. This feature makes it computationally efficient, requiring negligible resources (\(<5\) seconds per calculation on a single CPU). EDA-FF decomposes the BE into three separate terms: electrostatic (\(\Delta E_{es}\)), short-range (exchange) repulsion (\(\Delta E_{ex}\)), and long-range dispersion (\(\Delta E_{disp}\)):

\[\Delta E=\Delta E_{es}+\Delta E_{ex}+\Delta E_{disp}, \tag{2}\]

where the electrostatic energy (Coulomb potential) between atoms A and B is

\[E_{es}=\frac{q_{A}q_{B}}{r_{AB}},\]

and the van der Waals interaction energy (Lennard-Jones potential) between atoms A and B is the sum of the repulsive interaction due to Pauli repulsion,

\[E_{ex}=\varepsilon_{AB}\left(\frac{R_{AB}^{0}}{r_{AB}}\right)^{12},\]

and the attractive dispersion interaction,

\[E_{disp}=-2\varepsilon_{AB}\left(\frac{R_{AB}^{0}}{r_{AB}}\right)^{6}.\]

Figure 5: Schematic representation of the algorithm used. PNN/BFGS iterates until an energy convergence of 0.05 meV. Outputs of atomic positions (x, y, z), charges (q), and the van der Waals area are computed from the optimized structures.

Here \(\varepsilon_{AB}\) is the well depth of the interatomic van der Waals interaction potential, \(R_{AB}^{0}\) is the van der Waals radius, and \(r_{AB}\) is the distance between atom A and atom B. The interatomic parameters \(\varepsilon_{AB}\) and \(R_{AB}^{0}\) are provided by the trained force fields, and the values are commonly defined per atom type:

\[\varepsilon_{AB}=\sqrt{\varepsilon_{A}\varepsilon_{B}},\qquad R_{AB}^{0}=R_{A}^{*}+R_{B}^{*},\]

where \(\varepsilon_{A}\), \(\varepsilon_{B}\), \(R_{A}^{*}\), and \(R_{B}^{*}\) are parameters defined by the AMBER atom types [46].

**Van der Waals sphere half area of overlap**

Using information about the van der Waals radii and the relative positions of each atom in a system, we are able to calculate the overlapping van der Waals area between the polymer(s) and the graphene surface. First, we listed the graphene atoms and grouped them with the PyVista [60] spheres that represent them. Next, we checked for any overlapping spheres between the graphene and polymer atoms by comparing their distances and van der Waals radii.
If we found any overlapping spheres, we recorded the indices of those atoms. We then merged all the overlapping graphene atoms into one mesh object and all the overlapping polymer atoms into another, took the boolean intersection of these two meshes, and calculated the area of that intersection. This method allowed us to avoid overcounting the area in cases where two or more van der Waals spheres overlapped with a single atom. Finally, we take a factor of 1/2 to avoid double counting the area.

**Constraints over configurational space calculations**

To obtain meaningful results when exploring a large range of configurations, constraints are necessary. We excluded configurations that did not meet our selection criteria: a van der Waals sphere half area of overlap greater than \(2\,\AA^{2}\) and a negative binding energy. Only the systems that satisfied these criteria were considered for analysis, and the minimum energies are reported in Table 1. We chose these selection criteria because a positive binding energy has no physical meaning here, while a van der Waals sphere half area of overlap of less than \(2\,\AA^{2}\) suggests that the molecules are too far apart to interact non-locally, resulting in an unphysical system.

**Comparison of polymer-polymer and polymer(s)-graphene interactions**

The interactions at the polymer-graphene interface are quantified by the binding energy per unit area of van der Waals overlap. We employ a Gaussian fit over all possible energies of the relaxed geometries within the constraints (Fig. 6). The labeling convention for the computational results denotes ALP, PMMA, and graphene as \(A\), \(P\), and \(G\), respectively; the subscript pair of each model (e.g., \(GP\)) identifies the two fragments of eq. 1 used in the calculation. Table 1 shows the results for the different models and their corresponding contributions to the binding energy per unit area (\(\Delta E\)). The models include \(GAP_{(GP)}\) (i.e., the G-P interaction within the \(GAP\) model), \(GAP_{(AP)}\), \(GP\) (the G-P interaction within the \(GP\) model), \(AP\), \(PP\), and \(AA\). All results have negative total energies, indicating that the systems are stable. The largest negative total energy per unit area is \(-2.5\) meV/particle/\(\AA^{2}\) for the \(GP\) model, while the smallest is \(-1.2\) meV/particle/\(\AA^{2}\) for the \(GAP_{(GP)}\) model/dimer. This significant reduction in binding energy per unit area indicates that the polymer mixture has a strongly decreased binding energy with graphene compared to PMMA alone.

_Table 1. Minima values from EDA-FF, corrected by the van der Waals sphere half area of overlap; physical selection criteria: \(E_{int}<0\) and vdW area \(>2\,\AA^{2}\). Energies per unit area are in meV/particle/\(\AA^{2}\)._

| Model (dimer) | \(\Delta E_{int}\) (total) | \(\Delta E_{es}\) (electrostatic) | \(\Delta E_{disp}\) (dispersion) | \(\Delta E_{ex}\) (repulsion) |
| --- | --- | --- | --- | --- |
| \(GAP_{(GP)}\) | -1.2 | -0.1 | -1.1 | -0.1 |
| \(GP\) | -2.5 | -0.1 | -3.0 | 0.5 |
| \(GAP_{(AP)}\) | -2.2 | -0.1 | -5.2 | 2.9 |
| \(AA\) | -1.7 | -0.2 | -2.3 | 0.4 |
| \(AP\) | -2.3 | -0.1 | -5.0 | 2.6 |
| \(PP\) | -1.5 | 0.01 | -2.4 | 0.9 |

In terms of the energy components, the electrostatic and dispersion energies are always negative, while the repulsion energies are positive, as expected for stable systems.
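The per-pair energy expressions above are simple enough to vectorize directly. The following NumPy sketch (ours, with charges and AMBER-style parameters supplied by the caller in consistent force-field units) computes the three EDA-FF terms between two fragments:

```python
import numpy as np

def eda_ff_pair_energies(q1, xyz1, q2, xyz2, eps1, eps2, rstar1, rstar2):
    """Interfragment EDA-FF terms (electrostatic, exchange repulsion,
    dispersion) summed over all atom pairs between two fragments.

    q*: charges [n], xyz*: coordinates [n, 3], eps*/rstar*: per-atom
    well depths and vdW radii from the force field.
    """
    r = np.linalg.norm(xyz1[:, None, :] - xyz2[None, :, :], axis=-1)
    eps = np.sqrt(eps1[:, None] * eps2[None, :])   # eps_AB combination rule
    r0 = rstar1[:, None] + rstar2[None, :]         # R0_AB combination rule
    e_es = np.sum(q1[:, None] * q2[None, :] / r)   # Coulomb term
    e_ex = np.sum(eps * (r0 / r) ** 12)            # Pauli repulsion term
    e_disp = np.sum(-2.0 * eps * (r0 / r) ** 6)    # dispersion term
    return e_es, e_ex, e_disp, e_es + e_ex + e_disp
```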
Comparing the models, the \(GAP_{(GP)}\) model has the lowest repulsion energy, while the \(GAP_{(AP)}\) model has the highest. The \(GAP_{(AP)}\) model has the largest dispersion energy in magnitude, while the \(GAP_{(GP)}\) model has the smallest. Overall, the table provides a useful summary of the energy values for the different models and can help guide further analysis and understanding of the systems being studied. PMMA shows a significant reduction in BE (1.3 meV/particle/\(\AA^{2}\)) when both polymers are on graphene (i.e., \(GAP\)). This is consistent with the experimentally observed lower residue concentration for the polymer blend used for graphene transfer, and it provides a more in-depth picture of the successful transfer, the smaller amount of residues, and the more uniform properties we observed. In the simulations of the graphene surface, a periodic array consisting of 5 \(\times\) 7 graphene unit cells was utilized, with all carbon atoms of the graphene constrained in all directions. We simulated PMMA as a fragment with \(n=2\) and ALP as a dimer of 2 lactone rings.

**Conclusions**

We demonstrated clean, large-area graphene transfer using a polymer blend with an optimized ratio of the two polymers. In addition, we designed an algorithm for numerical calculations that follows the experimental trend and provides a novel and effective approach for quantifying non-local interactions in multi-molecular systems. This suggests that considering the van der Waals sphere half area of overlap gives a better representation of the underlying physical interactions of molecular systems on the surface of graphene. The approach allows energies to be reported per unit area, which is consistent with the trend in the experimental results.

**Instrumentation**

The samples' morphologies were obtained with a scanning electron microscope (Zeiss Auriga FIB/FESEM, Jena, Germany) at an accelerating voltage of 5 kV. The surface roughness was obtained using an Oxford Research AFM (MFP-3D Infinity, Santa Barbara, CA, USA) in tapping mode at ambient conditions. Si tips coated with Al (TAP300AL-G probe, Budget Sensors) were used for the topographic probing. The amplitude modulation mode of Kelvin probe force microscopy (AM-KPFM) was employed for the measurement of the contact potential difference (CPD) of the transferred graphene. A conductive Pt/Ir-coated tip (EFM, NanoWorld) was used, with silver paint serving as the ground. Raman spectra were measured using a Horiba XploRA confocal Raman system (Kyoto, Japan) with an excitation wavelength of 532 nm and a 1200 L mm\({}^{-1}\) diffraction grating. Maps of the total coverage of graphene (4 \(\upmu\)m x 4 \(\upmu\)m), resulting in 2000 data points, were collected using a 100x objective in the x-y-z directions.

**Acknowledgements**

J.K.A. acknowledges that this material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. [1945980]. T.I., J.K.A., A.T. acknowledge the US Department of Defense [Contract #W911QY2220006]. This work was performed at the Joint School of Nanoscience and Nanoengineering, a member of the Southeastern Nanotechnology Infrastructure Corridor (SENIC) and National Nanotechnology Coordinated Infrastructure (NNCI), which is supported by the National Science Foundation [ECCS-1542174]. T.I., J.K.A., A.T. acknowledge the 2DCC grant as NSF cooperative agreement DMR-1539916.

Figure 6: Gaussian fit of binding energies [eV/\(\AA^{2}\)] for the models GP and GAP\({}_{\rm GP}\) (labeled as GAP here).

## References

* Novoselov et al.
(2004) Novoselov, K. S., Geim, A. K., Morozov, S. V., Jiang, D., Zhang, Y., Dubonos, S. V., Grigorieva, I. V., Firsov, A. A. Electric field effect in atomically thin carbon films. Science 2004, 306(5696), 666. * Orlita et al. (2008) Orlita, M.; Faugeras, C.; Plochocka, P.; Neugebauer, P.; Martinez, G.; Maude, D. K.; Barra, A.-L.; Sprinkle, M.; Berger, C.; de Heer, W. A.; Potemski, M. Approaching the Dirac Point in High-Mobility Multilayer Epitaxial Graphene. Physical Review Letters 2008, 101, 267601. * Nair et al. (2008) Nair, R. R.; Blake, P.; Grigorenko, A. N.; Novoselov, K. S.; Booth, T. J.; Stauber, T.; Peres, N. M. R.; Geim, A. K. Fine Structure Constant Defines Visual Transparency of Graphene. Science 2008, 320, 1308-1308. * Nalamati et al. (2020) Nalamati, S.; Devkota, S.; Li, J.; Lavelle, R.; Huet, B.; Snyder, D.; Penn, A.; Garcia, R.; Reynolds, L.; Iyer, S. Hybrid GaAsSb/GaAs Heterostructure Core-Shell Nanowire/Graphene and Photodetector Applications. ACS Applied Electronic Materials 2020, 2, 3109-3120. * Whitener and Sheehan (2014) Whitener, K. E.; Sheehan, P. E. Graphene synthesis. Diamond and Related Materials 2014, 46, 25-34. * Singh et al. (2011) Singh, V.; Joung, D.; Zhai, L.; Das, S.; Khandaker, S. I.; Seal, S. Graphene based materials: Past, present and future. Progress in Materials Science 2011, 56, 1178-1271. * Kim et al. (2009) Kim, K. S.; Zhao, Y.; Jang, H.; Lee, S. Y.; Kim, J. M.; Kim, K. S.; Ahn, J.-H.; Kim, P.; Choi, J.-Y.; Hong, B. H. Large-scale pattern growth of graphene films for stretchable transparent electrodes. Nature 2009, 457, 706-710. * Nam et al. (2017) Nam, J.; Kim, D.; Yun, H.; Shin, D.H.; Nam, S.; Lee, W.K.; Hwang, J.Y.;Lee, S.W. ; Weman, H.; Kim, K.S. Chemical vapor deposition of graphene on platinum: Growth and substrate interaction, Carbon 2017, 111, 733-740. * Ji et al. (2020) Ji, J.; Pang, Y.; Li, D.; Huang, Z.; Zhang, Z.; Xue, N.; Xu, Y.; Mu, X. An aptamer-based shear horizontal surface acoustic wave biosensor with a CVD-grown single-layered graphene film for high-sensitivity detection of a label-free endotoxin. Microsystems & Nanoengineering 2020, 6. * No et al. (2018) No, Y.-S.; Choi, H. K.; Kim, J.-S.; Kim, H.; Yu, Y.-J.; Choi, C.-G.; Choi, J. S. Layer number identification of CVD-grown multilayer graphene using Si peak analysis. Scientific Reports 2018, 8. * Stanford et al. (2020) Stanford, M. G.; Zhang, C.; Fowlkes, J. D.; Hoffman, A.; Ivanov, I. N.; Rack, P. D.; Tour, J. M. High-Resolution Laser-Induced Graphene. Flexible Electronics beyond the Visible Limit. ACS Applied Materials &amp Interfaces 2020, 12, 10902-10907. [13] Li, X.; Zhu, Y.; Cai, W.; Borysiak, M.; Han, B.; Chen, D.; Piner, R. D.; Colombo, L.; Ruoff, R. S. Transfer of Large-Area Graphene Films for High-Performance Transparent Conductive Electrodes. Nano Letters 2009, 9, 4359-4363. [14] Suk, J. W.; Kitt, A.; Magnuson, C. W.; Hao, Y.; Ahmed, S.; An, J.; Swan, A. K.; Goldberg, B. B.; Ruoff, R. S. Transfer of CVD-Grown Monolayer Graphene onto Arbitrary Substrates. ACS Nano 2011, 5, 6916-6924. [15] Juang, Z.-Y.; Wu, C.-Y.; Lu, A.-Y.; Su, C.-Y.; Leou, K.-C.; Chen, F.-R.; Tsai, C.-H. Graphene synthesis by chemical vapor deposition and transfer by a roll-to-roll process. Carbon 2010, 48, 3169-3174. [16] Citak, E.; Istanbullu, B.; Sakalak, H.; G\(\acute{\text{e}}\)Ersoy, M.; Karaman, M. All-Dry Hydrophobic Functionalization of Paper Surfaces for Efficient Transfer of CVD Graphene. Macromolecular Chemistry and Physics 2019, 220, 1900277. [17] Ayodele, O. O.; Adesina, A. 
O.; Pourianejad, S.; Averitt, J.; Ignatova, T. Recent Advances in Nanomaterial-Based Aptasensors in Medical Diagnosis and Therapy. Nanomaterials 2021, 11, 932. [18] Calafell, I. A.; Cox, J. D.; Radonjic, M.; Saavedra, J. R. M.; de Abajo, F. J. G.; Rozema, L. A.; Walther, P. Quantum computing with graphene plasmons. npj Quantum Information 2019, 5. [19] Naguib, M.; Mochalin, V. N.; Barsoum, M. W.; Gogotsi, Y. 25th Anniversary Article: MXenes: A New Family of Two-Dimensional Materials. Advanced Materials 2013, 26, 992-1005. [20] Ayodele, O. O.; Pourianejad, S.; Trofe, A.; Prokofjevs, A.; Ignatova, T. Application of Soxhlet Extractor for Ultra-clean Graphene Transfer. ACS Omega 2022, 7, 7297-7303. [21] Bae, S. et al. Roll-to-roll production of 30-inch graphene films for transparent electrodes. Nature Nanotechnology 2010, 5, 574-578. [22] Kang, M. H.; Lopez, L. O. P.; Chen, B.; Teo, K.; Williams, J. A.; Milne, W. I.; Cole, M. T. Mechanical Robustness of Graphene on Flexible Transparent Substrates. ACS Applied Materials &amp Interfaces 2016, 8, 22506-22515. [23] Shivayogimath, A.; Whelan, P. R.; Mackenzie, D. M.; Luo, B.; Huang, D.; Luo, D.; Wang, M.; Gammelgaard, L.; Shi, H.; Ruoff, R. S.; Boggild, P.; Booth, T. J. Do-It-Yourself Transfer of Large-Area Graphene Using an Office Laminator and Water. Chemistry of Materials 2019, 31, 2328-2336. [24] Kuten, D.; Nowacka, B.; Pelka, M.; Gnatek, D.; Klimek, M.; Nazim, T.; Sadowska, K.; Wietecka, A.; Galazka, M. Towards clean HSMG graphene transfer. Materials Chemistry and Physics 2020, 251, 123161. [25] Zhang, X.; Xu, C.; Zou, Z.; Wu, Z.; Yin, S.; Zhang, Z.; Liu, J.; Xia, Y.; Lin, C.-T.; Zhao, P.; Wang, H. A scalable polymer-free method for transferring graphene onto arbitrary surfaces. Carbon 2020, 161, 479-485. * [26] Ullah, S.; Yang, X.; Ta, H. Q.; Hasan, M.; Bachmatiuk, A.; Tokarska, K.; Trzebicka, B.; Fu, L.; Rummeli, M. H. Graphene transfer methods: A review. Nano Research 2021, 14, 3756-3772. * [27] Shen, X.; Wang, D.; Ning, J.; Wang, B.; Guo, H.; Zhang, C.; Jia, Y.; Dong, J.; Feng, X.; Wang, X.; Zhang, J.; Hao, Y. MMA-enabled ultraclean graphene transfer for fast response graphene/GaN ultraviolet photodetectors. Carbon 2020, 169, 92-98. * [28] Lin, Y.-C.; Jin, C.; Lee, J.-C.; Jen, S.-F.; Suenaga, K.; Chiu, P.-W. Clean Transfer of Graphene for Isolation and Suspension. ACS Nano 2011, 5, 2362-2368. * [29] Liang, X. et al. Toward Clean and Crackless Transfer of Graphene. ACS Nano 2011, 5, 9144-9153. * [30] Her, M.; Beams, R.; Novotny, L. Graphene transfer with reduced residue. Physics Letters A 2013, 377, 1455-1458. * [31] Leong, W. S.; Wang, H.; Yeo, J.; Martin-Martinez, F. J.; Zubair, A.; Shen, P.-C.; Mao, Y.; Palacios, T.; Buehler, M. J.; Hong, J.-Y.; Kong, J. Paraffin-enabled graphene transfer. Nature Communications 2019, 10. * [32] Zhang, Z.; Du, J.; Zhang, D.; Sun, H.; Yin, L.; Ma, L.; Chen, J.; Ma, D.; Cheng, H.-M.; Ren, W. Rosin-enabled ultraclean and damage-free transfer of graphene for large-area flexible organic light-emitting diodes. Nature Communications 2017, 8. * [33] Dai, B.; Fu, L.; Zou, Z.; Wang, M.; Xu, H.; Wang, S.; Liu, Z. Rational design of a binary metal alloy for chemical vapour deposition growth of uniform single-layer graphene. Nature Communications 2011, 2 * [34] Barin, G. B.; Song, Y.; de Fatima Gimenez, I.; Filho, A. G. S.; Barreto, L. S.; Kong, J. Optimized graphene transfer: Influence of polymethylmethacrylate [PMMA] layer concentration and baking time on graphene final performance. Carbon 2015, 84, 82-90. 
* [35] Sun, H.; Chen, D.; Wu, Y.; Yuan, Q.; Guo, L.; Dai, D.; Xu, Y.; Zhao, P.; Jiang, N.; Lin, C.-T. High quality graphene films with a clean surface prepared by an UV/ozone assisted transfer process. Journal of Materials Chemistry C 2017, 5, 1880-1884. * [36] Tyler, B. J.; Brennan, B.; Stec, H.; Patel, T.; Hao, L.; Gilmore, I. S.; Pollard, A. J. Removal of Organic Contamination from Graphene with a Controllable Mass-Selected Argon Gas Cluster Ion Beam. The Journal of Physical Chemistry C 2015, 119, 17836-17841. * [37] Lui, C. H.; Liu, L.; Mak, K. F.; Flynn, G. W.; Heinz, T. F. Ultrafast graphene. Nature 2009, 462, 339-341. * [38] Ngoc, H. V.; Qian, Y.; Han, S. K.; Kang, D. J. PMMA-Etching-Free Transfer of Waferscale Chemical Vapor Deposition Two-dimensional Atomic Crystal by a Water-Soluble Polyvinyl Alcohol Polymer Method. Scientific Reports 2016, 6. [39] Wood, J. D.; Doidge, G. P.; Carrion, E. A.; Koepke, J. C.; Kaitz, J. A.; Datye, I.; Behnam, A.; Hewaparakrama, J.; Aruin, B.; Chen, Y.; Dong, H.; Haasch, R. T.; Lyding, J. W.; Pop, E. Annealing free, clean graphene transfer using alternative polymer scaffolds. Nanotechnology 2015, 26, 055302. [40] Ayodele, O. O.; Dawodu, F. A.; Yan, D.; Xin, J.; Zhang, S. Catalytic synthesis of renewable hydrocarbons via hydrodeoxygenation of angelica lactone di/trimers. Fuel 2018, 221, 311-319. [41] Son, B. H.; Kim, H. S.; Jeong, H.; Park, J.-Y.; Lee, S.; Ahn, Y. H. Electron beam induced removal of PMMA layer used for graphene transfer. Scientific Reports 2017, 7. [42] Lin, Y.-C.; Lu, C.-C.; Yeh, C.-H.; Jin, C.; Suenaga, K.; Chiu, P.-W. Graphene Annealing: How Clean Can It Be? Nano Letters 2011, 12, 414-419. [43] Gong, C.; Floresca, H. C.; Hinojos, D.; McDonnell, S.; Qin, X.; Hao, Y.; Jandhyala, S.; Mordi, G.; Kim, J.; Colombo, L.; Ruoff, R. S.; Kim, M. J.; Cho, K.; Wallace, R. M.; Chabal, Y. J. Rapid Selective Etching of PMMA Residues from Transferred Graphene by Carbon Dioxide. The Journal of Physical Chemistry C 2013, 117, 23000-23008 [44] BROYDEN, C. G. The Convergence of a Class of Double-rank Minimization Algorithms General Considerations. IMA Journal of Applied Mathematics 1970, 6, 76-90. [45] Racek, T.; Schindler, O.; Tousek, D.; Horsky, V.; Berka, K.; Koca, J.; Svobodova, R. Atomic Charge Calculator II: web-based tool for the calculation of partial atomic charges. Nucleic Acids Research 2020, 48, W591 W596. [46] Cho, K.-H.; Kang, Y. K.; No, K. T.; Scheraga, H. A. A Fast Method for Calculating Geometry-Dependent Net Atomic Charges for Polypeptides. The Journal of Physical Chemistry B 2001, 105, 3624-3634. [47] Bhuyan, M. S. A.; Uddin, M. N.; Islam, M. M.; Bipasha, F. A.; Hossain, S. S. Synthesis of graphene. International Nano Letters 2016, 6, 65-83. [48] Panchal, V.; Pearce, R.; Yakimova, R.; Tzalenchuk, A.; Kazakova, O. Standardization of surface potential measurements of graphene domains. Scientific Reports 2013, 3. [49] Prudkovskiy, V.; Katin, K.; Maslov, M.; Puech, P.; Yakimova, R.; Deligeorgis, G. Efficient cleaning of graphene from residual lithographic polymers by ozone treatment. Carbon 2016, 109, 221-226. [50] Kim, H. H.; Kang, B.; Suk, J. W.; Li, N.; Kim, K. S.; Ruoff, R. S.; Lee, W. H.; Cho, K. Clean Transfer of Wafer-Scale Graphene ivia/i Liquid Phase Removal of Polycyclic Aromatic Hydrocarbons. ACS Nano 2015, 9, 4726-4733. [51] Mueller, N. S.; Heeg, S.; Alvarez, M. 
P.; Kusch, P.; Wasserroth, S.; Clark, N.; Schedin, F.; Parthenios, J.; Papagelis, K.; Galiotis, C.; Kalbac, M.; Vijayaraghavan, A.; Huebner, U.; Gorbachev, R.; Frank, O.; Reich, S. Evaluating arbitrary strain configurations and doping in graphene with Raman spectroscopy. 2D Materials 2017, 5, 015016. * [52] Ignatova, T.; Pourianejad, S.; Li, X.; Schmidt, K.; Aryeetey, F.; Aravamudhan, S.; Rotkin, S. V. Multidimensional imaging reveals mechanisms controlling multimodal label-free biosensing in vertical 2DM-heterostructures. ACS Nano 2022, 16(2), 259. * [53] Ferrari, A. C. Raman spectroscopy of graphene and graphite: Disorder, electron-phonon coupling, doping and nonadiabatic effects. Solid State Communications 2007, 143, 47-57. * [54] Larsen, A. H. et al. The atomic simulation environment--a Python library for working with atoms. Journal of Physics: Condensed Matter 2017, 29, 273002. * [55] Smith, J. S.; Nebgen, B. T.; Zubatyuk, R.; Lubbers, N.; Devereux, C.; Barros, K.; Tretiak, S.; Isayev, O.; Roitberg, A. Outsmarting Quantum Chemistry Through Transfer Learning. 2018. * [56] Broyden, C. G. The Convergence of a Class of Double-rank Minimization Algorithms: General Considerations. IMA Journal of Applied Mathematics 1970, 6, 76-90. * [57] Cho, K.-H.; Kang, Y. K.; No, K. T.; Scheraga, H. A. A Fast Method for Calculating Geometry-Dependent Net Atomic Charges for Polypeptides. The Journal of Physical Chemistry B 2001, 105, 3624-3634. * [58] Lu, T.; Chen, F. Multiwfn: A multifunctional wavefunction analyzer. Journal of Computational Chemistry 2011, 33, 580-592. * [59] Member carbon rings [cyclo[n]carbons] - A density functional study. Materials Science and Engineering: B 2021, 273, 115425. * [60] Sullivan, C.; Kaszynski, A. PyVista: 3D plotting and mesh analysis through a streamlined interface for the Visualization Toolkit [VTK]. Journal of Open Source Software 2019, 4, 1450. * [61] Alvarez, S. A cartography of the van der Waals territories. Dalton Transactions 2013, 42, 8617.

**Supplementary Information**

**Optimized nanodevice fabrication using clean transfer of graphene by polymer mixture: Experiments and Neural Network based simulations.**

Jared K. Averitt, Sajedeh Pourianejad, Olubunmi Ayodele, Kirby Schmidt, Anthony Trofe, Joseph Starobin*, and Tetyana Ignatova\({}^{\star}\)

Nanoscience Department, University of North Carolina, Greensboro, United States of America. JA and SP contributed equally.

## 2 Raman spectrum

Blending at room temperature (RT) does not result in the formation of a new type of polymer, and further proof was obtained by the non-appearance of new functional groups (Fig. S7). However, the shift in the absorption bands at line a (C=O stretching mode) and the disappearance of line b in ALP (-OH stretching due to H\({}_{2}\)O physisorption) further confirmed a change in polymer geometry. Based on the calorimetric measurement, the \(T_{g}\) of the polymer blend is within the \(T_{g}\) of PMMA irrespective of the mixing ratios, indicating that the two polymers do not bind strongly together.

Table S1. Comparative analysis of the surface roughness of graphene obtained using the blended polymers.
Figure S4: DSC curves of ALP (black), PMMA (red), ALP:PMMA 1:2 (purple), and ALP:PMMA 1:4 (blue).

## 4 Surface energy and surface tension calculation

To estimate the binding energy experimentally, we performed contact angle measurements, from which surface energy values were calculated for each polymer, the polymer mixture, and graphene on Si/SiO\({}_{2}\). (The graphene partial-transparency theory implies that the graphene surface energy depends on the supporting substrate, Si/SiO\({}_{2}\) in our case.) To measure the free surface energy of graphene, we used equation (1), which results from the Girifalco-Good-Fowkes-Young equation [3, 4]:

\[\gamma_{G}=\frac{(\gamma_{H_{2}O}\,(1+\cos\theta))^{2}}{4\,\gamma_{H_{2}O}^{d}} \tag{1}\]

where \(\gamma_{H_{2}O}\) is the surface tension of the water drop, \(\gamma_{G}\) is the free surface energy of graphene (the solid surface), \(\gamma_{H_{2}O}^{d}\) is the dispersive component of the water surface tension, and \(\theta\) is the contact angle between the liquid-vapor interface and the solid surface. The relation between the interfacial tension of the solid surface and of the solid-liquid interface determines whether the contact angle (\(\theta\)) is less than or greater than 90\({}^{\rm o}\), which is an interpretation of the wettability of the surface. If 0 \(<\theta<\) 90\({}^{\rm o}\), the liquid partially wets the solid and the surface is said to be hydrophilic. The hydrophobicity rises as the contact angle of the droplets with the surface increases; hence, hydrophobic surfaces have contact angles larger than 90\({}^{\rm o}\).

Figure S6: Contact angle schematic.

The surface tension of a polymer can be estimated via the molar parachor, which was introduced by Sugden (1924), who defined a list of atom-group contributions [5]:

\[\gamma_{P}=\left(\frac{P_{S}}{V}\right)^{4}=\left(\frac{P_{S}\times\rho}{M}\right)^{4} \tag{2}\]

where \(\gamma_{P}\) is the surface tension of the polymer, \(P_{S}\) is the molecular parachor, \(V\) is the molar volume, \(M\) is the molecular weight, and \(\rho\) is the density. This yields:

\(\gamma\) (ALP) = 13.22 mJ/m\({}^{2}\), \(\gamma\) (PMMA) = 42.5 mJ/m\({}^{2}\).

For ALP:PMMA 1:4: \(\gamma\) (polymer blend) = (1/5) \(\times\) 13.22 + (4/5) \(\times\) 42.5 = 36.64 mJ/m\({}^{2}\).

For ALP:PMMA 1:2: \(\gamma\) (polymer blend) = 32.74 mJ/m\({}^{2}\).

Knowing the surface energy of graphene and the surface tension of the support polymers, one can calculate the interfacial energy between graphene and the support polymers, and subsequently the adhesion energy, using the relation proposed by Girifalco, Good, and Fowkes [6, 7]:

\[\gamma_{GP}=\gamma_{G}+\gamma_{P}-2\sqrt{\gamma_{G}\cdot\gamma_{P}}=-E_{A} \tag{3}\]

where \(\gamma_{G}\), \(\gamma_{P}\), and \(\gamma_{GP}\) are the surface free energy of graphene (phase 1), the surface free energy of the polymer (phase 2), and the interfacial tension between graphene and polymer, respectively, and \(E_{A}\) is the adhesion energy (Table S2).
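The arithmetic in this section is easy to check programmatically; here is a small sketch (ours) reproducing the blend surface tension and the Girifalco-Good-Fowkes adhesion energy, with function names of our own choosing:

```python
import math

def blend_surface_tension(gamma_alp, gamma_pmma, ratio_alp, ratio_pmma):
    """Mass-fraction-weighted estimate of the blend's surface tension (mJ/m^2)."""
    total = ratio_alp + ratio_pmma
    return (ratio_alp * gamma_alp + ratio_pmma * gamma_pmma) / total

def adhesion_energy(gamma_graphene, gamma_polymer):
    """Girifalco-Good-Fowkes interfacial tension; E_A = -gamma_GP (Eq. 3)."""
    gamma_gp = (gamma_graphene + gamma_polymer
                - 2 * math.sqrt(gamma_graphene * gamma_polymer))
    return -gamma_gp

# Reproduces the 1:4 ALP:PMMA value above: (1*13.22 + 4*42.5)/5 = 36.64 mJ/m^2
print(blend_surface_tension(13.22, 42.5, 1, 4))  # -> 36.644
```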
2301.12638
Graph rules for recurrent neural network dynamics: extended version
This is an extended version of our survey article, "Graph rules for recurrent neural network dynamics," to appear in the April 2023 edition of the Notices of the AMS. It includes additional results, derivations, figures, references, and a set of open questions.
Carina Curto, Katherine Morrison
2023-01-30T03:43:54Z
http://arxiv.org/abs/2301.12638v1
# Graph rules for recurrent neural network dynamics: extended version ###### Abstract We present a general framework for recurrent neural networks and introduce the notion of _gluing rules_, which allow us to determine all fixed points of a network by gluing together those of its component subnetworks. These gluing rules are reminiscent of sheaf-theoretic constructions, with fixed points playing the role of sections over subnetworks. First, we review some basics of recurrent neural networks and a bit of historical context. **Basic network setup.** A _recurrent neural network_ is a directed graph \(G\) together with a prescription for the dynamics on the vertices, which represent neurons (see Figure 1A). To each vertex \(i\) we associate a function \(x_{i}(t)\) that tracks the activity level of neuron \(i\) as it evolves in time. To each ordered pair of vertices \((i,j)\) we assign a weight, \(W_{ij}\), governing the strength of the influence of neuron \(j\) on neuron \(i\). In principle, there can be a nonzero weight between any two nodes, with the graph \(G\) providing constraints on the allowed values \(W_{ij}\), depending on the specifics of the model. The dynamics often take the form of a system of ODEs, called a _firing rate model_ [1, 10, 11]: \[\tau_{i}\frac{dx_{i}}{dt} = -x_{i}+\varphi\left(\sum_{j=1}^{n}W_{ij}x_{j}+b_{i}\right),\] \[= -x_{i}+\varphi(y_{i}),\] for \(i=1,\ldots,n.\) The various terms in the equation are illustrated in Figure 1, and can be thought of as follows: * \(x_{i}=x_{i}(t)\) is the firing rate of a single neuron \(i\) (or the average activity of a subpopulation of neurons); * \(\tau_{i}\) is the "leak" timescale, governing how quickly a neuron's activity exponentially decays to zero in the absence of external or recurrent input; * \(W\) is a real-valued matrix of synaptic interaction strengths, with \(W_{ij}\) representing the strength of the connection from neuron \(j\) to neuron \(i\); * \(b_{i}=b_{i}(t)\) is a real-valued external input to neuron \(i\) that may or may not vary with time; * \(y_{i}=y_{i}(t)=\sum_{j=1}^{n}W_{ij}x_{j}(t)+b_{i}(t)\) is the total input to neuron \(i\) as a function of time; and * \(\varphi:\mathbb{R}\rightarrow\mathbb{R}\) is a nonlinear, but typically monotone increasing function. Of particular importance for this article is the family of _threshold-linear networks_ (TLNs). In this case, the nonlinearity is chosen to be the popular threshold-linear (or ReLU) function, \[\varphi(y)=[y]_{+}=\max\{0,y\}.\] TLNs are common firing rate models that have been used in computational neuroscience for decades [11, 10, 12, 13, 14]. The use of threshold-linear units in neural modeling dates back at least to 1958 [15]. In the last 20 years, TLNs have also been shown to be surprisingly tractable mathematically [16, 17, 18, 19, 20, 21, 22], though much of the theory remains underdeveloped. We are especially interested in _competitive_ or _inhibition-dominated_ TLNs, where the \(W\) matrix is non-positive so the effective interaction between any pair of neurons is inhibitory. In this case, the activity remains bounded despite the lack of saturation in the nonlinearity [19]. These networks produce complex nonlinear dynamics and can possess a remarkable variety of attractors [19, 20, 21, 22]. Firing rate models of the form (1) are examples of _recurrent_ networks because the \(W\) matrix allows for all pairwise interactions, and there is no constraint that the architecture (i.e., the underlying graph \(G\)) be feedforward.
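To make the setup concrete, here is a minimal numerical sketch of the firing rate dynamics (1). The forward-Euler scheme, the two-neuron mutual-inhibition example, and all parameter values are our own illustrative choices, not something specified in the text.

```python
import numpy as np

def simulate_tln(W, b, x0, dt=0.01, T=100.0, tau=1.0):
    """Forward-Euler integration of tau * dx/dt = -x + [W x + b]_+,
    i.e. equation (1) with the threshold-linear (ReLU) nonlinearity."""
    n_steps = int(T / dt)
    xs = np.zeros((n_steps, len(x0)))
    x = np.array(x0, dtype=float)
    for t in range(n_steps):
        y = W @ x + b                                   # total input y_i
        x = x + (dt / tau) * (-x + np.maximum(y, 0.0))  # leak + rectified drive
        xs[t] = x
    return xs

# Example: two mutually inhibiting neurons (an inhibition-dominated TLN).
W = np.array([[0.0, -1.5],
              [-1.5, 0.0]])
b = np.array([1.0, 1.0])
traj = simulate_tln(W, b, x0=[0.6, 0.4])
print(traj[-1])   # approximately [1. 0.]: winner-take-all behavior
```

The mutual inhibition makes the dynamics competitive: whichever neuron starts slightly ahead suppresses the other and the trajectory settles into a stable fixed point.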
Unlike deep neural networks, which can be thought of as classifiers implementing a clustering function, recurrent networks are primarily thought of as dynamical systems. Figure 1: (A) Recurrent network setup. (B) A Ramón y Cajal drawing of real cortical neurons. And the main purpose of these networks is to model the dynamics of neural activity in the brain. The central question is thus: **Question 1.** Given a firing rate model defined by (1) with network parameters \((W,b)\) and underlying graph \(G\), what are the emergent network dynamics? What can we say about the dynamics from knowledge of \(G\) alone? We are particularly interested in understanding the _attractors_ of such a network, including both stable fixed points and dynamic attractors such as limit cycles. The attractors are important because they comprise the set of possible asymptotic behaviors of the network in response to different inputs or initial conditions (see Figure 2). Note that Question 1 is posed for a fixed connectivity matrix \(W\), but of course \(W\) can change over time (e.g., as a result of learning or training of the network). Here we restrict ourselves to considering constant \(W\) matrices; this allows us to focus on understanding network dynamics on a fast timescale, assuming slowly varying synaptic weights. Understanding the dynamics associated to changing \(W\) is an important topic, currently beyond the scope of this work. **Historical interlude: memories as attractors.** Attractor neural networks became popular in the 1980s as models of associative memory encoding and retrieval. The best-known example from that era is the Hopfield model [14, 15], originally conceived as a variant on the Ising model from statistical mechanics. In the Hopfield model, the neurons can be in one of two states, \(s_{i}\in\{\pm 1\}\), and the activity evolves according to the discrete time update rule: \[s_{i}(t+1)=\operatorname{sgn}\left(\sum_{j=1}^{n}W_{ij}s_{j}(t)-\theta_{i}\right).\] Hopfield's famous 1982 result is that the dynamics are guaranteed to converge to a stable fixed point, provided the interaction matrix \(W\) is _symmetric_: that is, \(W_{ij}=W_{ji}\) for every \(i,j\in\{1,\dots,n\}\). Specifically, he showed that the "energy" function, \[E=-\frac{1}{2}\sum_{i,j}W_{ij}s_{i}s_{j}+\sum_{i}\theta_{i}s_{i},\] decreases along trajectories of the dynamics, and thus acts as a Lyapunov function [14]. The stable fixed points are local minima of the energy landscape (Figure 2A). A stronger, more general convergence result for competitive neural networks was shown in [1]. These fixed points are the only attractors of the network, and they represent the set of memories encoded in the network. Hopfield networks perform a kind of _pattern completion_: given an initial condition \(s(0)\), the activity evolves until it converges to one of multiple stored patterns in the network. If, for example, the individual neurons store black and white pixel values, the network can take a corrupted image as input and recover the original image, provided the original had previously been stored as a stable fixed point via an appropriate choice of the weight matrix \(W\). The novelty at the time was the nonlinear phenomenon of multistability: namely, that the network could encode many such stable equilibria and thus maintain an entire catalogue of stored memory patterns. The key to Hopfield's convergence result was the requirement that \(W\) be a symmetric interaction matrix.
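As a quick illustration of the update rule and energy function just described, the sketch below stores two random patterns with a Hebbian outer-product rule (a standard choice; the text does not prescribe a learning rule) and shows energy descent and pattern completion. The pattern size, corruption level, and random seed are arbitrary, and note that the weight matrix is symmetric, as Hopfield's result requires.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store two random +/-1 patterns via the Hebbian outer-product rule.
n = 50
patterns = rng.choice([-1, 1], size=(2, n))
W = (patterns.T @ patterns) / n     # symmetric weights, W_ij = W_ji
np.fill_diagonal(W, 0.0)
theta = np.zeros(n)

def energy(s):
    return -0.5 * s @ W @ s + theta @ s

# Asynchronous updates: s_i <- sgn(sum_j W_ij s_j - theta_i).
s = patterns[0].copy()
s[:10] *= -1                        # corrupt 10 "pixels"
for sweep in range(5):
    for i in rng.permutation(n):
        s[i] = 1 if W[i] @ s - theta[i] >= 0 else -1
    print(sweep, energy(s))         # energy is non-increasing across sweeps

print(np.array_equal(s, patterns[0]))  # pattern completion (typically True)
```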
Although this was known to be an unrealistic assumption for real (biological) neural networks, it was considered a tolerable price to pay for guaranteed convergence. One did not want an associative memory network that wandered the state space indefinitely without ever recalling a definite pattern. Figure 2: **Attractor neural networks.** (A) For symmetric Hopfield networks and symmetric inhibitory TLNs, trajectories are guaranteed to converge to stable fixed point attractors. Sample trajectories are shown, with the basin of attraction for the blue stable fixed point outlined in blue. (B) For asymmetric TLNs, dynamic attractors can coexist with (static) stable fixed point attractors. Twenty years later, Hahnloser, Seung, and others followed up and proved a similar convergence result in the case of symmetric inhibitory threshold-linear networks [10]. Specifically, they found a Lyapunov-like function \[L=\frac{1}{2}x^{T}(I-W)x-b^{T}x,\] following the notation in (1) with \(\varphi(y)=[y]_{+}\). For fixed \(b\), it can easily be shown that \(L\) is strictly decreasing along trajectories of the TLN dynamics, and minima of \(L\) correspond to steady states - provided \(W\) is symmetric and \(I-W\) is copositive [10, Theorem 1]. More results on the collections of stable fixed points that can be simultaneously encoded in a symmetric TLN can be found in [1, 13, 14], including some unexpected connections to Cayley-Menger determinants and classical distance geometry. In all of this work, stable fixed points have served as the model for encoded memories. Indeed, these are the only types of attractors that arise for symmetric Hopfield networks or symmetric TLNs. Whether or not guaranteed convergence to stable fixed points is desirable, however, is a matter of perspective. For a network whose job it is to perform pattern completion or classification for static images (or codewords), as in the classical Hopfield model, this is exactly what one wants. But it is also important to consider memories that are temporal in nature, such as sequences and other dynamic patterns of activity. Sequential activity, as observed in central pattern generator circuits (CPGs) and spontaneous activity in hippocampus and cortex, is more naturally modeled by dynamic attractors such as limit cycles. This requires shifting attention to the _asymmetric_ case, in order to be able to encode attractors that are not stable fixed points (Figure 2B). **Beyond stable fixed points.** When the symmetry assumption is removed, TLNs can support a rich variety of dynamic attractors such as limit cycles, quasiperiodic attractors, and even strange (chaotic) attractors. Indeed, this richness can already be observed in a special class of TLNs called combinatorial threshold-linear networks (CTLNs), introduced in Section 3. These networks are defined from directed graphs, and the dynamics are almost entirely determined by the graph structure. A striking feature of CTLNs is that the dynamics are shaped not only by the stable fixed points, but also the _unstable_ fixed points. In particular, we have observed a direct correspondence between certain types of unstable fixed points and dynamic attractors (see Figure 3) [13]. This is reviewed in Section 4. Despite exhibiting complex, high-dimensional, nonlinear dynamics, recent work has shown that TLNs - and especially CTLNs - are surprisingly tractable mathematically.
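The Lyapunov property above is easy to probe numerically. The following is our own quick check, with an arbitrary symmetric inhibitory weight matrix: it verifies that \(L\) is monotonically decreasing along a simulated trajectory (up to small floating-point noise at this step size).

```python
import numpy as np

# Symmetric inhibitory TLN: check that L(x) = 1/2 x^T (I - W) x - b^T x
# decreases along trajectories, as in the symmetric convergence result.
W = -np.array([[0.0, 0.4, 0.9],
               [0.4, 0.0, 0.6],
               [0.9, 0.6, 0.0]])   # symmetric, non-positive entries
b = np.ones(3)

def L(x):
    return 0.5 * x @ (np.eye(3) - W) @ x - b @ x

x = np.array([0.2, 0.9, 0.1])
dt = 0.01
vals = []
for _ in range(5000):
    x = x + dt * (-x + np.maximum(W @ x + b, 0.0))   # Euler step of the TLN
    vals.append(L(x))
# True: L is non-increasing along the discretized trajectory.
print(all(v2 <= v1 + 1e-10 for v1, v2 in zip(vals, vals[1:])))
```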
Motivated by the relationship between fixed points and attractors, a great deal of progress has been made on the problem of relating fixed point structure to network architecture. In the case of CTLNs, this has resulted in a series of _graph rules_: theorems that allow us to rule in and rule out potential fixed points based purely on the structure of the underlying graph [13, 14, 15]. In Section 5, we give a novel exposition of graph rules, and introduce several _elementary graph rules_ from which the others can be derived. Inhibition-dominated TLNs and CTLNs also display a remarkable degree of modularity. Namely, attractors associated to smaller networks can be embedded in larger ones with minimal distortion [13]. This is likely a consequence of the high levels of background inhibition: it serves to stabilize and preserve local properties of the dynamics. These networks also exhibit a kind of compositionality, wherein fixed points and attractors of subnetworks can be effectively "glued" together into fixed points and attractors of a larger network. Figure 3: **Stable and unstable fixed points. (A) Stable fixed points are attractors of the network. (B-C) Unstable fixed points are not themselves attractors, but certain unstable fixed points seem to correspond to dynamic attractors (B), while others function solely as tipping points between multiple attractors (C).** These local-to-global relationships are given by a series of theorems we call _gluing rules_, given in Section 6. ## 2 TLNs and hyperplane arrangements For firing rate models with threshold-nonlinearity \(\varphi(y)=[y]_{+}=\max\{0,y\}\), the network equations (1) become \[\frac{dx_{i}}{dt} = -x_{i}+\left[\sum_{j=1}^{n}W_{ij}x_{j}+b_{i}\right]_{+}\] \[= -x_{i}+[y_{i}]_{+},\] for \(i=1,\ldots,n.\) We also assume \(W_{ii}=0\) for each \(i\). Note that the leak timescales have been set to \(\tau_{i}=1\) for all \(i\). We thus measure time in units of this timescale. For constant \(W\) matrix and input vector \(b\), the equations \[y_{i}=\sum_{j=1}^{n}W_{ij}x_{j}+b_{i}=0,\] define a hyperplane arrangement \(\mathcal{H}=\mathcal{H}(W,b)=\{H_{1},\ldots,H_{n}\}\) in \(\mathbb{R}^{n}\). The \(i\)-th hyperplane \(H_{i}\) is defined by \(y_{i}=\vec{n}_{i}\cdot x+b_{i}=0\), with normal vector \(\vec{n}_{i}=(W_{i1},\ldots,W_{in})\), population activity vector \(x=(x_{1},\ldots,x_{n})\), and affine shift \(b_{i}\). If \(W_{ij}\neq 0\), then \(H_{i}\) intersects the \(j\)-th coordinate axis at the point \(x_{j}=-b_{i}/W_{ij}\). Since \(W_{ii}=0\), \(H_{i}\) is parallel to the \(i\)-th axis. The hyperplanes \(\mathcal{H}\) partition the positive orthant \(\mathbb{R}^{n}_{\geq 0}\) into chambers. Within the interior of any chamber, each point \(x\) is on the plus or minus side of each hyperplane \(H_{i}\). The equations thus reduce to a linear system of ODEs, with the equation for each \(i=1,\ldots,n\) being either \[\frac{dx_{i}}{dt}=-x_{i}+y_{i}=-x_{i}+\sum_{j=1}^{n}W_{ij}x_{j}+b_{i},\text{ if }y_{i}>0,\] or \[\frac{dx_{i}}{dt}=-x_{i},\text{ if }y_{i}\leq 0.\] In particular, TLNs are piecewise-linear dynamical systems with a different linear system, \(L_{\sigma}\), governing the dynamics in each chamber [1]. A _fixed point_ of a TLN (2) is a point \(x^{*}\in\mathbb{R}^{n}\) that satisfies \(dx_{i}/dt|_{x=x^{*}}=0\) for each \(i\in\{1,\ldots,n\}\). In particular, we must have \[x_{i}^{*}=[y_{i}^{*}]_{+}\text{ for all }i=1,\ldots,n, \tag{3}\] where \(y_{i}^{*}\) is \(y_{i}\) evaluated at the fixed point.
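The piecewise-linear structure suggests a direct way to find all fixed points of a small TLN: solve each chamber's linear system and keep the candidate only if it lies in its own chamber. The sketch below is our own (function name and example weights are illustrative), and it complements the simulation sketch earlier: for the mutual-inhibition pair it finds the two stable winner-take-all fixed points plus one unstable symmetric fixed point.

```python
import numpy as np
from itertools import chain, combinations

def tln_fixed_points(W, b):
    # For each candidate support sigma, solve the linear system L_sigma
    # (x_i = y_i on sigma, x_i = 0 off sigma, with y = W x + b), and keep
    # the candidate only if it lies in its own chamber (y_i > 0 on sigma,
    # y_k <= 0 off sigma), per the fixed point condition (3).
    n = len(b)
    fixed_points = []
    for sigma in chain.from_iterable(combinations(range(n), k) for k in range(n + 1)):
        s = list(sigma)
        x = np.zeros(n)
        if s:
            x[s] = np.linalg.solve(np.eye(len(s)) - W[np.ix_(s, s)], b[s])
        y = W @ x + b
        if all(y[i] > 0 for i in s) and all(y[k] <= 0 for k in range(n) if k not in sigma):
            fixed_points.append((sigma, x))
    return fixed_points

# Two mutually inhibiting neurons: three fixed points in total.
W = np.array([[0.0, -1.5], [-1.5, 0.0]])
b = np.array([1.0, 1.0])
for sigma, x in tln_fixed_points(W, b):
    print(sigma, np.round(x, 3))   # supports {1}, {2}, and {1,2}
```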
We typically assume a nondegeneracy condition on \((W,b)\) [1], which guarantees that each linear system is nondegenerate and has a single fixed point. This fixed point may or may not lie within the chamber where its corresponding linear system applies. The fixed points of the TLN are precisely the fixed points of the linear systems that lie within their respective chambers. Figure 4 illustrates the hyperplanes and chambers for a TLN with \(n=2\). Each chamber, denoted as a region \(R_{\sigma}\), has its own linear system of ODEs, \(L_{\sigma}\), for \(\sigma=\emptyset,\{1\},\{2\},\) or \(\{1,2\}\). The fixed point corresponding to each linear system is denoted by \(x^{*}\), in matching color. Figure 4: **TLNs as a patchwork of linear systems.** (A) The connectivity matrix \(W\), input \(b\), and differential equations for a TLN with \(n=2\) neurons. (B) The state space is divided into chambers (regions) \(R_{\sigma}\), each having dynamics governed by a different linear system \(L_{\sigma}\). The chambers are defined by the hyperplanes \(\{H_{i}\}_{i=1,2}\), with \(H_{i}\) defined by \(y_{i}=0\) (gray lines). Note that only chamber \(R_{\{2\}}\) contains its own fixed point (in red). This fixed point, \(x^{*}=[0,b_{2}]^{T}\), is thus the only fixed point of the TLN. Figure 5 shows an example of a TLN on \(n=3\) neurons. The \(W\) matrix is constructed from a 3-cycle graph and \(b_{i}=\theta=1\) for each \(i\). The dynamics fall into a limit cycle where the neurons fire in a repeating sequence that follows the arrows of the graph. This time, the TLN equations define a hyperplane arrangement in \(\mathbb{R}^{3}\), again with each hyperplane \(H_{i}\) defined by \(y_{i}=0\) (Figure 5C). An initial condition near the unstable fixed point in the all \(+\) chamber (where \(y_{i}>0\) for each \(i\)) spirals out and converges to a limit cycle that passes through four distinct chambers. Note that the threshold nonlinearity is critical for the model to produce nonlinear behavior such as limit cycles; without it, the system would be linear. It is, nonetheless, nontrivial to prove that the limit cycle shown in Figure 5 exists. A recent proof was given for a special family of TLNs constructed from any \(k\)-cycle graph [1]. **The set of all fixed points \(\text{FP}(W,b)\).** A central object that is useful for understanding the dynamics of TLNs is the collection of _all_ fixed points of the network, both stable and unstable. The _support_ of a fixed point \(x^{*}\in\mathbb{R}^{n}\) is the subset of active neurons, \[\operatorname{supp}x^{*}\stackrel{{\text{def}}}{{=}}\{i\mid x_{i}^{*}>0\}.\] Our nondegeneracy condition (which is generically satisfied) guarantees we can have at most one fixed point per chamber of the hyperplane arrangement \(\mathcal{H}(W,b)\), and thus at most one fixed point per support. We can thus label all the fixed points of a given network by their supports: \[\text{FP}(W,b)\stackrel{{\text{def}}}{{=}}\{\sigma\subseteq[n]\mid\sigma=\operatorname{supp}x^{*},\text{ for some fixed pt }x^{*}\text{ of the TLN }(W,b)\},\] where \[[n]\stackrel{{\text{def}}}{{=}}\{1,\ldots,n\}.\] For each support \(\sigma\in\text{FP}(W,b)\), the fixed point itself is easily recovered. Outside the support, \(x_{i}^{*}=0\) for all \(i\not\in\sigma\).
Within the support, \(x^{*}\) is given by: \[x_{\sigma}^{*}=(I-W_{\sigma})^{-1}b_{\sigma}.\] Here \(x_{\sigma}^{*}\) and \(b_{\sigma}\) are the column vectors obtained by restricting \(x^{*}\) and \(b\) to the indices in \(\sigma\), and \(W_{\sigma}\) is the induced principal submatrix obtained by restricting rows and columns of \(W\) to \(\sigma\). From (3), we see that a fixed point with \(\operatorname{supp}x^{*}=\sigma\) must satisfy the "on-neuron" conditions, \(y_{i}^{*}>0\) for all \(i\in\sigma\), as well as the "off-neuron" conditions, \(y_{k}^{*}\leq 0\) for all \(k\notin\sigma\), to ensure that \(x_{i}^{*}>0\) for each \(i\in\sigma\) and \(x_{k}^{*}=0\) for each \(k\notin\sigma\). Equivalently, these conditions guarantee that the fixed point \(x^{*}\) of \(L_{\sigma}\) lies inside its corresponding chamber, \(R_{\sigma}\). Note that for such a fixed point, the values \(x_{i}^{*}\) for \(i\in\sigma\) depend only on the restricted subnetwork \((W_{\sigma},b_{\sigma})\). Therefore, the on-neuron conditions for \(x^{*}\) in \((W,b)\) are satisfied if and only if they hold in \((W_{\sigma},b_{\sigma})\). Since the off-neuron conditions are trivially satisfied in \((W_{\sigma},b_{\sigma})\), it follows that \(\sigma\in\text{FP}(W_{\sigma},b_{\sigma})\) is a necessary condition for \(\sigma\in\text{FP}(W,b)\). It is not, however, sufficient, as the off-neuron conditions may fail in the larger network. Satisfying all the on- and off-neuron conditions, however, is both necessary and sufficient to guarantee \(\sigma\in\text{FP}(G)\)[13, 14]. Conveniently, the off-neuron conditions are independent and can be checked one neuron at a time. Thus, \[\sigma\in\text{FP}(W,b)\Leftrightarrow\sigma\in\text{FP}(W_{\sigma\cup k},b_ {\sigma\cup k})\text{ for all }k\notin\sigma.\] When \(\sigma\in\text{FP}(W_{\sigma},b_{\sigma})\) satisfies all the off-neuron conditions, so that \(\sigma\in\text{FP}(W,b)\), we say that \(\sigma\)_survives_ to the larger network; otherwise, we say \(\sigma\)_dies_. The fixed point corresponding to \(\sigma\in\text{FP}(W,b)\) is _stable_ if and only if all eigenvalues of \(-I+W_{\sigma}\) have negative real part. For competitive (or inhibition-dominated) TLNs, all fixed points - whether stable or unstable - have a stable manifold. This is because competitive TLNs have \(W_{ij}\leq 0\) for all \(i,j\in[n]\). Applying the Perron-Frobenius theorem to \(-I+W_{\sigma}\), we see that the largest magnitude eigenvalue is guaranteed to be real and negative. The corresponding eigenvector provides an attracting direction into the fixed point. Combining this observation with the nondegeneracy condition reveals that the unstable fixed points are all hyperbolic (i.e., saddle points). ## 3 Combinatorial threshold-linear networks _Combinatorial threshold-linear networks_ (CTLNs) are a special case of competitive (or inhibition-dominated) TLNs, with the same threshold nonlinearity, that were first introduced in [1, 2]. What makes CTLNs special is that we restrict to having only two values for the connection strengths \(W_{ij}\), for \(i\neq j\). These are obtained as follows from a directed graph \(G\), where \(j\to i\) indicates that there is an edge from \(j\) to \(i\) and \(j\not\to i\) indicates that there is no such edge: \[W_{ij}=\left\{\begin{array}{ll}0&\mbox{if $i=j$,}\\ -1+\varepsilon&\mbox{if $j\to i$ in $G$,}\\ -1-\delta&\mbox{if $j\not\to i$ in $G$.}\end{array}\right. 
\tag{5}\] Additionally, CTLNs typically have a constant external input \(b_{i}=\theta\) for all \(i\) in order to ensure the dynamics are internally generated rather than inherited from a changing or spatially heterogeneous input. A CTLN is thus completely specified by the choice of a graph \(G\), together with three real parameters: \(\varepsilon,\delta\), and \(\theta\). We additionally require that \(\delta>0\), \(\theta>0\), and \(0<\varepsilon<\dfrac{\delta}{\delta+1}\). When these conditions are met, we say the parameters are within the _legal range_. Note that the upper bound on \(\varepsilon\) implies \(\varepsilon<1\), and so the \(W\) matrix is always effectively inhibitory. For fixed parameters, only the graph \(G\) varies between networks. The network in Figure 5 is a CTLN with the _standard parameters_\(\varepsilon=0.25\), \(\delta=0.5\), and \(\theta=1\). We interpret a CTLN as modeling a network of \(n\) excitatory neurons, whose net interactions are effectively inhibitory due to a strong global inhibition (Figure 6). When \(j\not\to i\), we say \(j\)_strongly inhibits_\(i\); when \(j\to i\), we say \(j\)_weakly inhibits_\(i\). The weak inhibition is thought of as the sum of an excitatory synaptic connection and the background inhibition. Note that because \(-1-\delta<-1<-1+\varepsilon\), when \(j\not\to i\), neuron \(j\) inhibits \(i\)_more_ than it inhibits itself via its leak term; when \(j\to i\), neuron \(j\) inhibits \(i\)_less_ than it inhibits itself. These differences in inhibition strength cause the activity to follow the arrows of the graph. The set of fixed point supports of a CTLN with graph \(G\) is denoted as: \[\mbox{FP}(G,\varepsilon,\delta) \stackrel{{\rm def}}{{=}} \{\sigma\subseteq[n]\mid\sigma=\mbox{supp}\,x^{*}\mbox{ for some}\] \[\mbox{ fixed pt $x^{*}$ of the associated CTLN}\}.\] Figure 5: **A network on \(n=3\) neurons, its hyperplane arrangement, and limit cycle.** (A) A TLN whose connectivity matrix \(W\) is dictated by a 3-cycle graph, together with the TLN equations. (B) The TLN from A produces firing rate activity in a periodic sequence. (C) (Left) The hyperplane arrangement defined by the equations \(y_{i}=0\), with a trajectory initialized near the fixed point shown in black. (Right) A close-up of the trajectory, spiraling out from the unstable fixed point and falling into a limit cycle. Different colors correspond to different chambers of the hyperplane arrangement through which the trajectory passes. Figure 6: **CTLNs.** A neural network with excitatory pyramidal neurons (triangles) and a background network of inhibitory interneurons (gray circles) that produces a global inhibition. The corresponding graph (right) retains only the excitatory neurons and their connections. \(\mathrm{FP}(G,\varepsilon,\delta)\) is precisely \(\mathrm{FP}(W,b)\), where \(W\) and \(b\) are specified by a CTLN with graph \(G\) and parameters \(\varepsilon\) and \(\delta\). Note that \(\mathrm{FP}(G,\varepsilon,\delta)\) is independent of \(\theta\), provided \(\theta\) is constant across neurons as in a CTLN. It is also frequently independent of \(\varepsilon\) and \(\delta\). For this reason we often refer to it as \(\mathrm{FP}(G)\), especially when a fixed choice of \(\varepsilon\) and \(\delta\) is understood. The legal range condition, \(\varepsilon<\frac{\delta}{\delta+1}\), is motivated by a theorem in [1]. It ensures that single directed edges \(i\to j\) are not allowed to support stable fixed points \(\{i,j\}\in\mathrm{FP}(G,\varepsilon,\delta)\). 
This allows us to prove the following theorem connecting a certain graph structure to the absence of stable fixed points. Note that a graph is _oriented_ if for any pair of nodes, \(i\to j\) implies \(j\not\to i\) (i.e., there are no bidirectional edges). A _sink_ is a node with no outgoing edges. **Theorem 3.1**.: [1, Theorem 2.4] _Let \(G\) be an oriented graph with no sinks. Then for any parameters \(\varepsilon,\delta,\theta\) in the legal range, the associated CTLN has no stable fixed points. Moreover, the activity is bounded._ The graph in Figure 5A is an oriented graph with no sinks. It has a single fixed point, \(\mathrm{FP}(G)=\{123\}\), irrespective of the parameters (note that we use "123" as shorthand for the set \(\{1,2,3\}\)). This fixed point is unstable and the dynamics converge to a limit cycle (Figure 5C). Even when there are no stable fixed points, the dynamics of a CTLN are always bounded [1]. In the limit as \(t\to\infty\), we can bound the total population activity as a function of the parameters \(\varepsilon,\delta\), and \(\theta\): \[\frac{\theta}{1+\delta}\leq\sum_{i=1}^{n}x_{i}\leq\frac{\theta}{1-\varepsilon}. \tag{6}\] In simulations, we observe a rapid convergence to this regime. Figure 7 depicts four solutions for the same CTLN on \(n=100\) neurons. The graph \(G\) was generated as a directed Erdos-Renyi random graph with edge probability \(p=0.2\); note that it is _not_ an oriented graph. Since the network is deterministic, the only difference between simulations is the initial conditions. While panel A appears to show chaotic activity, the solutions in panels B, C and D all settle into a fixed point or a limit cycle within the allotted time frame. The long transient of panel B is especially striking: around \(t=200\), the activity appears as though it will fall into the same limit cycle from panel D, but then escapes into another period of chaotic-looking dynamics before abruptly converging to a stable fixed point. In all cases, the total population activity rapidly converges to lie within the bounds given in (6), depicted in gray. **Fun examples.** Despite their simplicity, CTLNs display a rich variety of nonlinear dynamics. Even very small networks can exhibit interesting attractors with unexpected properties. Theorem 3.1 tells us that one way to guarantee that a network will produce dynamic - as opposed to static - attractors is to choose \(G\) to be an oriented graph with no sinks. The following examples are of this type. _The Gaudi attractor._ Figure 8 shows two solutions to a CTLN for a cyclically symmetric tournament\({}^{1}\) graph on \(n=5\) nodes. For some initial conditions, the solutions converge to a somewhat boring limit cycle with the firing rates \(x_{1}(t),\ldots,x_{5}(t)\) all peaking in the expected sequence, 12345 (bottom middle). For a different set of initial conditions, however, the solution converges to the beautiful and unusual attractor displayed at the top. Footnote 1: A _tournament_ is a directed graph in which every pair of nodes has exactly one (directed) edge between them. _Symmetry and synchrony._ Because the pattern of weights in a CTLN is completely determined by the graph \(G\), any symmetry of the graph necessarily translates to a symmetry of the differential equations, and hence of the vector field. It follows that the automorphism group of \(G\) also acts on the set of all attractors, which must respect the symmetry.
For example, in the cyclically symmetric tournament of Figure 8, both the Gaudi attractor and the "boring" limit cycle below it are invariant under the cyclic permutation (12345): the solution is preserved up to a time translation. Another way for symmetry to manifest itself in an attractor is via synchrony. The network in Figure 9A depicts a CTLN with a graph on \(n=5\) nodes that has a nontrivial automorphism group \(C_{3}\), cyclically permuting the nodes \(2,3\) and \(4\). In the corresponding attractor, the neurons \(2,3,4\) perfectly synchronize as the solution settles into the limit cycle. Notice, however, what happens for the network in Figure 9B. In this case, the limit cycle looks very similar to the one in A, with the same synchrony among neurons \(2,3\) and \(4\). However, the graph is missing the \(4\to 5\) edge, and so the graph has no nontrivial automorphisms. We refer to this phenomenon as _surprise symmetry_. Figure 8: **Gaudi attractor.** A CTLN for a cyclically symmetric tournament on \(n=5\) nodes produces two distinct attractors, depending on initial conditions. We call the top one the Gaudi attractor because the undulating curves are reminiscent of work by the architect from Barcelona. Figure 7: **Dynamics of a CTLN network on \(n=100\) neurons.** The graph \(G\) is a directed Erdos-Renyi random graph with edge probability \(p=0.2\) and no self loops. The CTLN parameters are \(\varepsilon=0.25\), \(\delta=0.5\), and \(\theta=1\). Initial conditions for each neuron, \(x_{i}(0)\), are randomly and independently chosen from the uniform distribution on \([0,0.1]\). (A-D) Four solutions from the same deterministic network, differing only in the choice of initial conditions. In each panel, the top plot shows the firing rate as a function of time for each neuron in grayscale. The middle plot shows the summed total population activity, \(\sum_{i=1}^{n}x_{i}\), which quickly becomes trapped between the horizontal gray lines – the bounds in equation (6). The bottom plot shows individual rate curves for all \(100\) neurons, in different colors. (A) The network appears chaotic, with some recurring patterns of activity. (B) The solution initially appears to be chaotic, like the one in A, but eventually converges to a stable fixed point supported on a \(3\)-clique. (C) The solution converges to a limit cycle after \(t=300\). (D) The solution converges to a different limit cycle after \(t=200\). Note that one can observe brief “echoes” of this limit cycle in the transient activity of panel B. On the flip side, a network with graph symmetry may have multiple attractors that are exchanged by the group action, but do not individually respect the symmetry. This is the more familiar scenario of spontaneous symmetry breaking. _Emergent sequences._ One of the most reliable properties of CTLNs is the tendency of neurons to fire in sequence. Although we have seen examples of synchrony, the global inhibition promotes competitive dynamics wherein only one or a few neurons reach their peak firing rates at the same time. The sequences may be intuitive, as in the networks of Figures 8 and 9, following obvious cycles in the graph. However, even for small networks the emergent sequences may be difficult to predict. The network in Figure 10A has \(n=7\) neurons, and the graph is a tournament with no non-trivial automorphisms. The corresponding CTLN appears to have a single, global attractor, shown in Figure 10B. 
The neurons in this limit cycle fire in a repeating sequence, 634517, with 5 being the lowest-firing node. This sequence is highlighted in black in the graph, and corresponds to a cycle in the graph. However, it is only one of many cycles in the graph. Why do the dynamics select this sequence and not the others? And why does neuron 2 drop out, while all others persist? This is particularly puzzling given that node 2 has in-degree three, while nodes 3 and 5 have in-degree two. Indeed, local properties of a network, such as the in- and out-degrees of individual nodes, are insufficient for predicting the participation and ordering of neurons in emergent sequences. Nevertheless, the sequence is fully determined by the structure of \(G\). We just have a limited understanding of how. Recent progress in understanding sequential attractors has relied on special network architectures that are cyclic like the ones in Figures 8 and 9 [22]. Interestingly, although the graph in Figure 10 does not have such an architecture, the induced subgraph generated by the high-firing nodes 1, 3, 4, 6, and 7 is isomorphic to the graph in Figure 8. This graph, as well as the two graphs in Figure 9, have corresponding networks that are in some sense irreducible in their dynamics. These are examples of graphs that we refer to as _core motifs_ [14]. ## 4 Minimal fixed points, core motifs, and attractors Stable fixed points of a network are of obvious interest because they correspond to static attractors [17, 18, 19, 20]. One of the most striking features of CTLNs, however, is the strong connection between _unstable_ fixed points and dynamic attractors [14, 15, 16]. Figure 10: **Emergent sequences can be difficult to predict.** (A) (Left) The graph of a CTLN that is a tournament on 7 nodes. (Right) The same graph, but with the cycle corresponding to the sequential activity highlighted in black. (B) A solution to the CTLN that converges to a limit cycle. This appears to be the only attractor of the network for the standard parameters. Figure 9: **Symmetry and synchrony.** (A) A graph with automorphism group \(C_{3}\) has an attractor where neurons \(2,3,\) and \(4\) fire synchronously. The overall sequence of activation is denoted \(1(234)5\), indicating that neurons \(2,3,4\) fire synchronously after neuron 1 and before 5, repeating periodically. (B) The symmetry is broken due to the dropped \(4\to 5\) edge. Nevertheless, the attractor still respects the \((234)\) symmetry with nodes \(2,3,\) and \(4\) firing synchronously. Note that both attractors are very similar limit cycles, but the one in B has longer period. (Simulations used the standard parameters: \(\varepsilon=0.25\), \(\delta=0.5\), \(\theta=1\).) **Question 2**.: For a given CTLN, can we predict the dynamic attractors of the network from its unstable fixed points? Can the unstable fixed points be determined from the structure of the underlying graph \(G\)? Throughout this section, \(G\) is a directed graph on \(n\) nodes. Subsets \(\sigma\subseteq[n]\) are often used to denote both the collection of vertices indexed by \(\sigma\) and the induced subgraph \(G|_{\sigma}\). The corresponding network is assumed to be a nondegenerate CTLN with fixed parameters \(\varepsilon,\delta,\) and \(\theta\). Figure 11 provides two example networks to illustrate the relationship between unstable fixed points and dynamic attractors. Any CTLN with the graph in panel A has three fixed points, with supports \(\text{FP}(G)=\{4,123,1234\}\).
The collection of fixed point supports can be thought of as a partially ordered set, ordered by inclusion. In our example, 4 and 123 are thus _minimal_ fixed point supports, because they are minimal under inclusion. It turns out that the corresponding fixed points each have an associated attractor (Figure 11B). The one supported on 4, a sink in the graph, yields a stable fixed point, while the 123 (unstable) fixed point, whose induced subgraph \(G|_{123}\) is a 3-cycle, yields a limit cycle attractor with high-firing neurons 1, 2, and 3. Figure 11C depicts all three fixed points in the state space. Here we can see that the third one, supported on 1234, acts as a "tipping point" on the boundary of two basins of attraction. Initial conditions near this fixed point can yield solutions that converge either to the stable fixed point or the limit cycle. Figure 11D-F provides another example network, called "baby chaos," in which all fixed points are unstable. The minimal fixed point supports, \(125,235,345\) and \(145\), all correspond to core motifs (embedded 3-cycles in the graph). The corresponding attractors are chaotic, and are depicted as firing rate curves (panel E) and trajectories in the state space (panel F). Note that the graph has an automorphism group that exchanges core motifs and their corresponding attractors. Not all minimal fixed points have corresponding attractors. In [15] we saw that the key property of such a \(\sigma\in\mathrm{FP}(G)\) is that it be minimal not only in \(\mathrm{FP}(G)\) but also in \(\mathrm{FP}(G|_{\sigma})\), corresponding to the induced subnetwork restricted to the nodes in \(\sigma\). In other words, \(\sigma\) is the only fixed point in \(\mathrm{FP}(G|_{\sigma})\). This motivates the definition of core motifs. **Definition 4.1**.: Let \(G\) be the graph of a CTLN on \(n\) nodes. An induced subgraph \(G|_{\sigma}\) is a _core motif_ of the network if \(\mathrm{FP}(G|_{\sigma})=\{\sigma\}\). When the graph \(G\) is understood, we sometimes refer to \(\sigma\) itself as a core motif if \(G|_{\sigma}\) is one. The associated fixed point is called a _core fixed point_. Core motifs can be thought of as "irreducible" networks because they have a single fixed point which has full support. Since the activity is bounded and must converge to an attractor, the attractor can be said to correspond to this fixed point. A larger network that contains \(G|_{\sigma}\) as an induced subgraph may or may not have \(\sigma\in\mathrm{FP}(G)\). When the core fixed point does survive, we refer to the embedded \(G|_{\sigma}\) as a _surviving_ core motif, and we expect the associated attractor to survive. In Figure 11, the surviving core motifs are \(G|_{4}\) and \(G|_{123}\), and they precisely predict the attractors of the network. The simplest core motifs are cliques. When these survive inside a network \(G\), the corresponding attractor is always a stable fixed point supported on all nodes of the clique [13]. In fact, we conjectured that any stable fixed point for a CTLN must correspond to a maximal clique of \(G\) - specifically, a _target-free_ clique [13]. Up to size 4, all core motifs are parameter-independent. For size 5, 37 of 45 core motifs are parameter-independent. Figure 12 shows the complete list of all core motifs of size \(n\leq 4\), together with some associated attractors. The cliques all correspond to stable fixed points, the simplest type of attractor.
The 3-cycle yields the limit cycle attractor in Figure 5, which may be distorted when embedded in a larger network (see Figure 11B). The other core motifs whose fixed points are unstable have dynamic attractors. Note that the 4-cycle graph has a (23) symmetry, and the rate curves for these two neurons are synchronous in the attractor. This synchrony is also evident in the 4-ufd attractor, despite the fact that this graph does not have the (23) symmetry. Perhaps the most interesting attractor, however, is the one for the fusion 3-cycle graph. Here the 123 3-cycle attractor, which does not survive the embedding to the larger graph, appears to "fuse" with the stable fixed point associated to 4 (which also does not survive). The resulting attractor can be thought of as binding together a pair of smaller attractors. Figure 11: **Core motifs of CTLNs correspond to attractors.** (A) The graph of a CTLN. The fixed point supports are given by \(\text{FP}(G)=\{4,123,1234\}\), irrespective of parameters \(\varepsilon,\delta,\theta\). (B) Solutions to the CTLN in A using the standard parameters \(\theta=1\), \(\varepsilon=0.25\), and \(\delta=0.5\). (Top) The initial condition was chosen as a small perturbation of the fixed point supported on 123. The activity quickly converges to a limit cycle where the high-firing neurons are the ones in the fixed point support. (Bottom) A different initial condition yields a solution that converges to the static attractor corresponding to the stable fixed point on node 4. (C) The three fixed points are depicted in a three-dimensional projection of the four-dimensional state space. Perturbations of the fixed point supported on 1234 produce solutions that either converge to the limit cycle or to the stable fixed point from B. (D) A network on \(n=5\) nodes whose fixed point supports are also independent of the CTLN parameters. (E) The four core motifs, supported on \(125,235,345\) and \(145\), each have a corresponding chaotic attractor. (F) A projection of the four chaotic attractors (black trajectories) together with all nine fixed points of the network (pink dots), which are all unstable. Figure 12: **Small core motifs.** For each of these graphs, \(\text{FP}(G)=\{[n]\}\), where \(n\) is the number of nodes. Attractors are shown for CTLNs with the standard parameters \(\varepsilon=0.25\), \(\delta=0.5\), and \(\theta=1\). Figure 13A depicts a larger example of a network whose fixed point structure \(\mathrm{FP}(G)\) is predictive of the attractors. Note that only four supports are minimal: 48, 189, 236, and 345. The first two correspond to surviving cliques, and the last two correspond to 3-cycles with surviving fixed points. An extensive search of attractors for this network reveals only four attractors, corresponding to the four surviving core motifs. Figure 13B shows trajectories converging to each of the four attractors. The cliques yield stable fixed points, as expected, while the 3-cycles correspond to dynamic attractors: one limit cycle, and one strange or chaotic attractor. We have performed extensive tests on whether or not core motifs predict attractors in small networks. Specifically, we decomposed all 9608 non-isomorphic directed graphs on \(n=5\) nodes into core motif components, and used this to predict the attractors [10]. We found that 1053 of the graphs have surviving core motifs that are not cliques; these graphs were thus expected to support dynamic attractors.
The remaining 8555 graphs contain only cliques as surviving core motifs, and were thus expected to have only stable fixed point attractors. Overall, we found that core motifs correctly predicted the set of attractors in 9586 of the 9608 graphs. Of the 22 graphs with mistakes, 19 graphs have a core motif with no corresponding attractor, and 3 graphs have no core motifs for the chosen parameters [10]. Across the 1053 graphs with core motifs that are not cliques, we observed a total of 1130 dynamic attractors. Interestingly, these fall into distinct equivalence classes determined by (a) the core motif, and (b) the details of how the core motif is embedded in the larger graph. In the case of _oriented graphs_ on \(n=5\) nodes, we performed a more detailed analysis of the dynamic attractors to determine a set of attractor families [26]. Here we observed a striking modularity of the embedded attractors, wherein the precise details of an attractor remained nearly identical across large families of non-isomorphic graphs with distinct CTLNs. Figure 14 gives a sampling of these common attractors, together with corresponding graph families. Graph families are depicted via "master graphs," with solid edges being shared across all graphs in the family, and dashed edges being optional. Graph counts correspond to non-isomorphic graphs. See [26] for more details. ## 5 Graph rules We have seen that CTLNs exhibit a rich variety of nonlinear dynamics, and that the attractors are closely related to the fixed points. This opens up a strategy for linking attractors to the underlying network architecture \(G\) via the fixed point supports \(\mathrm{FP}(G)\). Our main tools for doing this are _graph rules_. Throughout this section, we will use greek letters \(\sigma,\tau,\omega\) to denote subsets of \([n]=\{1,\ldots,n\}\) corresponding to fixed point supports (or potential supports), while latin letters \(i,j,k,\ell\) denote individual nodes/neurons. As before, \(G|_{\sigma}\) denotes the induced subgraph obtained from \(G\) by restricting to \(\sigma\) and keeping only edges between vertices of \(\sigma\). Figure 13: **Coexistence of attractors.** Stable fixed points supported on 48 and 189, a limit cycle corresponding to 236, and a chaotic attractor for 345. All attractors can be easily accessed via an initial condition near the corresponding fixed point. The fixed point supports are: \[\text{FP}(G)\stackrel{{\text{def}}}{{=}}\{\sigma\subseteq[n]\mid\sigma=\text{supp}\,x^{*}\text{ for some fixed pt }x^{*}\text{ of the associated CTLN}\}.\] The main question addressed by graph rules is: **Question 3**.: What can we say about \(\text{FP}(G)\) from knowledge of \(G\) alone? For example, consider the graphs in Figure 15. Can we determine from the graph alone which subgraphs will support fixed points? Moreover, can we determine which of those subgraphs are core motifs that will give rise to attractors of the network? We saw in Section 4 (Figure 12) that cycles and cliques are among the small core motifs; can cycles and cliques produce core motifs of any size? Can we identify other graph structures that are relevant for either ruling in or ruling out certain subgraphs as fixed point supports? The rest of Section 5 focuses on addressing these questions.
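For small graphs, Question 3 can also be explored computationally by brute force: build \(W\) from \(G\) via (5) and test every support against the on- and off-neuron conditions from Section 2. The following is a sketch under our own naming (ctln_weights, fp_supports) with the standard parameters; it is not code from the paper.

```python
import numpy as np
from itertools import chain, combinations

def ctln_weights(edges, n, eps=0.25, delta=0.5):
    """CTLN weight matrix from a directed graph, per equation (5):
    W_ij = -1 + eps if j -> i, W_ij = -1 - delta if j does not -> i,
    and W_ii = 0."""
    W = np.full((n, n), -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    for (j, i) in edges:              # edge j -> i
        W[i, j] = -1.0 + eps
    return W

def fp_supports(W, theta=1.0):
    """Brute-force FP(G): sigma is a support iff the solution of
    (I - W_sigma) x_sigma = theta * 1 is positive (on-neuron conditions)
    and y_k <= 0 for every k outside sigma (off-neuron conditions)."""
    n = W.shape[0]
    supports = []
    for sigma in chain.from_iterable(combinations(range(n), k) for k in range(1, n + 1)):
        s = list(sigma)
        x = np.zeros(n)
        x[s] = np.linalg.solve(np.eye(len(s)) - W[np.ix_(s, s)], theta * np.ones(len(s)))
        outside = [k for k in range(n) if k not in sigma]
        if np.all(x[s] > 0) and np.all((W @ x + theta)[outside] <= 0):
            supports.append(sigma)
    return supports

# 3-cycle 1 -> 2 -> 3 -> 1 (0-indexed below): FP(G) = {123}, as for Figure 5A.
W = ctln_weights([(0, 1), (1, 2), (2, 0)], n=3)
print(fp_supports(W))   # [(0, 1, 2)]
```

The exhaustive loop is exponential in \(n\), so this is only practical for the small graphs considered here; the point of the graph rules below is precisely to avoid such computations.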
Note that implicit in the above questions is the idea that graph rules are _parameter-independent_: that is, they directly relate the structure of \(G\) to \(\text{FP}(G)\) via results that are valid for all choices of \(\varepsilon,\delta\), and \(\theta\) (provided they lie within the legal range). In order to obtain the most powerful results, we also require that our CTLNs be _non-degenerate_. As has already been noted, nondegeneracy is generically satisfied for TLNs [13]. For CTLNs, it is satisfied irrespective of \(\theta\) and for almost all legal range choices of \(\varepsilon\) and \(\delta\) (i.e., up to a set of measure zero in the two-dimensional parameter space for \(\varepsilon\) and \(\delta\)). ### Examples of graph rules We've already seen some graph rules. For example, Theorem 3.1 told us that if \(G\) is an oriented graph with no sinks, the associated CTLN has no stable fixed points. Such CTLNs are thus guaranteed to only exhibit dynamic attractors. Figure 14: **Modularity of attractors.** For each attractor family, one or more "master graphs" are shown. The master graphs represent a collection of graphs where the solid edges are shared by all graphs and the dashed edges are optional. For example, the master graph corresponding to att 4 represents 7 distinct graphs, all having the same attractor corresponding to the common core motif \(G_{|123}\), embedded so that node 4 receives an edge from 3 but does not send any edge back to \(G_{|123}\). The other families, att 5, att 6, and att 10, yield attractors supported on the same core motif, \(G_{|123}\), but with different embeddings that alter the shape of the attractors. Note that this analysis only considered oriented graphs with no sinks; so, for example, the master graph for att 4 represents only 7 graphs, not 8, as node 5 is required to have at least one outgoing edge. Adapted from [14]. Figure 15: Graphs for which \(\text{FP}(G)\) is completely determined by graph rules. Here we present a set of eight simple graph rules, all proven in [13], that are easy to understand and give a flavor of the kinds of theorems we have found. We will use the following graph theoretic terminology. A _source_ is a node with no incoming edges, while a _sink_ is a node with no outgoing edges. Note that a node can be a source or sink in an induced subgraph \(G|_{\sigma}\), while not being one in \(G\). An _independent set_ is a collection of nodes with no edges between them, while a _clique_ is a set of nodes that is all-to-all bidirectionally connected. A _cycle_ is a graph (or an induced subgraph) where each node has exactly one incoming and one outgoing edge, and they are all connected in a single directed cycle. A _directed acyclic graph_ (DAG) is a graph with a topological ordering of vertices such that \(i\not\to j\) whenever \(i>j\); such a graph does not contain any directed cycles. Finally, a _target_ of a graph \(G|_{\sigma}\) is a node \(k\) such that \(i\to k\) for all \(i\in\sigma\setminus\{k\}\). Note that a target may be inside or outside \(G|_{\sigma}\). The graph rules presented here can be found, with detailed proofs, in [13]. We also summarize them in Table 1 and Figure 16. Examples of graph rules: **Rule 1** (independent sets): If \(G|_{\sigma}\) is an independent set, then \(\sigma\in\operatorname{FP}(G)\) if and only if each \(i\in\sigma\) is a sink in \(G\).
**Rule 2** (cliques): If \(G|_{\sigma}\) is a clique, then \(\sigma\in\operatorname{FP}(G)\) if and only if there is no node \(k\) of \(G\), \(k\notin\sigma\), such that \(i\to k\) for all \(i\in\sigma.\) In other words, \(\sigma\in\operatorname{FP}(G)\) if and only if \(G|_{\sigma}\) is a target-free clique. If \(\sigma\in\operatorname{FP}(G)\), the corresponding fixed point is stable. **Rule 3** (cycles): If \(G|_{\sigma}\) is a cycle, then \(\sigma\in\operatorname{FP}(G)\) if and only if there is no node \(k\) of \(G\), \(k\notin\sigma\), such that \(k\) receives two or more edges from \(\sigma\). If \(\sigma\in\operatorname{FP}(G)\), the corresponding fixed point is unstable. **Rule 4** (sources): (i) If \(G|_{\sigma}\) contains a source \(j\in\sigma\), with \(j\to k\) for some \(k\in[n]\), then \(\sigma\notin\operatorname{FP}(G)\). (ii) Suppose \(j\notin\sigma\), but \(j\) is a source in \(G\). Then \(\sigma\in\operatorname{FP}(G|_{\sigma\cup j})\) if and only if \(\sigma\in\operatorname{FP}(G|_{\sigma})\). **Rule 5** (targets): (i) If \(\sigma\) has target \(k\), with \(k\in\sigma\) and \(k\not\to j\) for some \(j\in\sigma\) (\(j\neq k\)), then \(\sigma\notin\operatorname{FP}(G|_{\sigma})\) and thus \(\sigma\notin\operatorname{FP}(G)\). (ii) If \(\sigma\) has target \(k\not\in\sigma\), then \(\sigma\notin\operatorname{FP}(G|_{\sigma\cup k})\) and thus \(\sigma\notin\operatorname{FP}(G)\). **Rule 6** (sinks): If \(G\) has a sink \(s\notin\sigma\), then \(\sigma\cup\{s\}\in\operatorname{FP}(G)\) if and only if \(\sigma\in\operatorname{FP}(G)\). **Rule 7** (DAGs): If \(G\) is a directed acyclic graph with sinks \(s_{1},\ldots,s_{\ell}\), then \(\operatorname{FP}(G)=\{\cup s_{i}\mid s_{i}\text{ is a sink in }G\}\), the set of all \(2^{\ell}-1\) unions of sinks. **Rule 8** (parity): For any \(G\), \(|\operatorname{FP}(G)|\) is odd. In many cases, particularly for small graphs, our graph rules are complete enough that they can be used to fully work out \(\operatorname{FP}(G)\). In such cases, \(\operatorname{FP}(G)\) is guaranteed to be parameter-independent (since the graph rules do not depend on \(\varepsilon\) and \(\delta\)). As an example, consider the graph on \(n=5\) nodes in Figure 15A; we will show that \(\operatorname{FP}(G)\) is completely determined by graph rules. Going through the possible subsets \(\sigma\) of different sizes, we find that for \(|\sigma|=1\) only \(3,4\in\operatorname{FP}(G)\) (as those are the sinks). Using Rules 1, 2, and 4, we see that the only \(|\sigma|=2\) elements in \(\operatorname{FP}(G)\) are the clique 15 and the independent set 34. A crucial ingredient for determining the fixed point supports of sizes 3 and 4 is the sinks rule, which guarantees that 135, 145, and 1345 are the only supports of these sizes. Finally, notice that the total number of fixed points up through size \(|\sigma|=4\) is odd. Using Rule 8 (parity), we can thus conclude that there is no fixed point of full support - that is, with \(|\sigma|=5\). It follows that \(\operatorname{FP}(G)=\{3,4,15,34,135,145,1345\}\); moreover, this result is parameter-independent because it was determined purely from graph rules. Although the precise values of the fixed points will change for different choices of the parameters \(\varepsilon,\delta\) and \(\theta\), the set of supports \(\operatorname{FP}(G)\) is invariant.
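As a sanity check on Rule 8, one can reuse the hypothetical ctln_weights and fp_supports helpers sketched earlier to confirm that \(|\operatorname{FP}(G)|\) comes out odd on a handful of random digraphs. This is our own spot-check, not a proof, and it assumes the chosen standard parameters are nondegenerate for the sampled graphs.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Spot-check Rule 8 (parity) on 20 random directed graphs with n = 4 nodes.
# Reuses ctln_weights / fp_supports from the brute-force sketch above.
for trial in range(20):
    n = 4
    edges = [(j, i) for j, i in product(range(n), repeat=2)
             if j != i and rng.random() < 0.4]
    W = ctln_weights(edges, n)
    fps = fp_supports(W)
    assert len(fps) % 2 == 1, (edges, fps)
print("parity held on all 20 random graphs")
```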
We leave it as an exercise to use graph rules to show that \(\operatorname{FP}(G)=\{134\}\) for the graph in Figure 15B, and \(\operatorname{FP}(G)=\{4,12,124\}\) for the graph in Figure 15C. For the graph in C, it is necessary to appeal to a more general rule for _uniform indegree_ subgraphs, which we review next. Rules 1-7, and many more, all emerge as corollaries of more general rules. In the next few subsections, we will introduce the uniform in-degree rule, graphical domination, and simply-embedded subgraphs. Then, in Section 5.5, we will pool together the more general rules into a complete set of _elementary graph rules_ from which all others follow. \begin{table} \begin{tabular}{l|l|l} Rule name & \(G|_{\sigma}\) structure & graph rule \\ \hline \hline Rule 1 & independent set & \(\sigma\in\operatorname{FP}(G|_{\sigma})\), and \(\sigma\in\operatorname{FP}(G)\Leftrightarrow\sigma\) is a union of sinks \\ \hline Rule 2 & clique & \(\sigma\in\operatorname{FP}(G|_{\sigma})\), and \(\sigma\in\operatorname{FP}(G)\Leftrightarrow\sigma\) is target-free \\ \hline Rule 3 & cycle & \(\sigma\in\operatorname{FP}(G|_{\sigma})\), and \(\sigma\in\operatorname{FP}(G)\Leftrightarrow\) each \(k\notin\sigma\) \\ & & receives at most one edge \(i\to k\) with \(i\in\sigma\) \\ \hline Rule 4(i) & \(\exists\) a source \(j\in\sigma\) & \(\sigma\notin\operatorname{FP}(G)\) if \(j\to k\) for some \(k\in[n]\) \\ \hline Rule 4(ii) & \(\exists\) a source \(j\not\in\sigma\) & \(\sigma\in\operatorname{FP}(G|_{\sigma\cup j})\Leftrightarrow\sigma\in \operatorname{FP}(G|_{\sigma})\) \\ \hline Rule 5(i) & \(\exists\) a target \(k\in\sigma\) & \(\sigma\notin\operatorname{FP}(G|_{\sigma})\) and \(\sigma\notin\operatorname{FP}(G)\) if \(k\not\to j\) for some \(j\in\sigma\) \\ \hline Rule 5(ii) & \(\exists\) a target \(k\not\in\sigma\) & \(\sigma\not\in\operatorname{FP}(G|_{\sigma\cup k})\) and \(\sigma\notin\operatorname{FP}(G)\) \\ \hline Rule 6 & \(\exists\) a sink \(s\notin\sigma\) & \(\sigma\cup\{s\}\in\operatorname{FP}(G)\Leftrightarrow\sigma\in\operatorname{FP }(G)\) \\ \hline Rule 7 & DAG & \(\operatorname{FP}(G)=\{\cup s_{i}\mid s_{i}\text{ is a sink in }G\}\) \\ \hline Rule 8 & arbitrary & \(|\operatorname{FP}(G)|\) is odd \\ \hline \end{tabular} \end{table} Table 1: Graph rules connect properties of a graph \(G\) to the fixed point supports, \(\operatorname{FP}(G)\), of the associated CTLN. Each rule refers to the structure of the induced subgraph \(G|_{\sigma}\) in order to determine whether \(\sigma\in\operatorname{FP}(G|_{\sigma})\) and/or \(\sigma\in\operatorname{FP}(G)\). Figure 16: **A sampling of graph rules.** (A) Independent sets, cliques, and cycles all yield full-support fixed points in isolation. When embedded in a larger graph, the survival of these fixed points is dictated by Rules 1-3. (B) Illustration of Rules 4(i) and 4(ii), pertaining to a source node \(j\) that lies inside or outside \(\sigma\). The solid \(j\to k\) edge is mandatory in Rule 4(i); dashed edges are optional. (C) Illustration of Rules 5(i) and 5(ii), pertaining to a target node \(k\) that lies inside or outside of \(\sigma\). (D) The only fixed point supports in a DAG are sinks and unions of sinks. ### Uniform in-degree rule It turns out that Rules 1, 2, and 3 (for independent sets, cliques, and cycles) are all corollaries of a single rule for graphs of _uniform in-degree_. 
**Definition 5.1**.: We say that \(G|_{\sigma}\) has _uniform in-degree \(d\)_ if every node \(i\in\sigma\) has \(d\) incoming edges from within \(G|_{\sigma}\). Note that an independent set has uniform in-degree \(d=0\), a cycle has uniform in-degree \(d=1\), and an \(n\)-clique has uniform in-degree \(d=n-1\). But, in general, uniform in-degree graphs need not be symmetric. For example, the induced subgraph \(G|_{145}\) in Figure 15A is uniform in-degree, with \(d=1\). For CTLNs, a fixed point \(x^{*}\) with support \(\sigma\) satisfies: \[(I-W_{\sigma})x^{*}_{\sigma}=\theta 1_{\sigma},\] where \(1_{\sigma}\) is a vector of all 1's restricted to the index set \(\sigma\). If \(G|_{\sigma}\) has uniform in-degree \(d\), then the row sums of \(I-W_{\sigma}\) are identical, and so \(1_{\sigma}\) is an eigenvector. In particular, \[x^{*}_{\sigma}=\frac{\theta}{R}1_{\sigma},\] where \(R\) is the (uniform) row sum for the matrix \(I-W_{\sigma}\). For in-degree \(d\), we compute \[R=1+d(1-\varepsilon)+(|\sigma|-d-1)(1+\delta).\] Uniform in-degree fixed points with support \(\sigma\) thus have the same value for all \(i\in\sigma\): \[x^{*}_{i}=\frac{\theta}{|\sigma|+\delta(|\sigma|-d-1)-\varepsilon d}. \tag{7}\] (See also [14, Lemma 18].) From the derivation, it is clear that this formula holds for all uniform in-degree graphs, even those that are not symmetric. We can use the formula (7) to verify that the on-neuron conditions, \(x^{*}_{i}>0\) for each \(i\in\sigma\), are satisfied for \(\varepsilon,\delta,\theta\) within the legal range. Using it to check the off-neuron conditions, we find that for \(k\notin\sigma\), \[y^{*}_{k} = \sum_{i\in\sigma}W_{ki}x^{*}_{i}+\theta,\] \[= \sum_{i\to k}(-1+\varepsilon)x^{*}_{i}+\sum_{i\not\to k}(-1-\delta)x^{*}_{i}+\theta,\] \[= \theta\left(\frac{d_{k}(-1+\varepsilon)+(|\sigma|-d_{k})(-1-\delta)}{|\sigma|+\delta(|\sigma|-d-1)-\varepsilon d}+1\right),\] where \(d_{k}=|\{i\in\sigma\mid i\to k\}|\). From here, it is not difficult to see that the off-neuron condition, \(y^{*}_{k}\leq 0\), will be satisfied if and only if \(d_{k}\leq d\). This gives us the following theorem. **Theorem 5.2** ([14]).: _Let \(G|_{\sigma}\) be an induced subgraph of \(G\) with uniform in-degree \(d\). For \(k\notin\sigma\), let \(d_{k}\) denote the number of edges \(i\to k\) for \(i\in\sigma\). Then \(\sigma\in\mathrm{FP}(G|_{\sigma})\), and_ \[\sigma\in\mathrm{FP}(G|_{\sigma\cup k})\;\Leftrightarrow\;d_{k}\leq d.\] _In particular, \(\sigma\in\mathrm{FP}(G)\) if and only if there does not exist \(k\notin\sigma\) such that \(d_{k}>d\)._ Figure 17 gives examples of uniform in-degree graphs and illustrates the survival condition in Theorem 5.2. ### Graphical domination We have seen that uniform in-degree graphs support fixed points that have uniform firing rates (equation (7)). More generally, fixed points can have very different values across neurons. However, there is some level of "graphical balance" that is required of \(G|_{\sigma}\) for any fixed point support \(\sigma\). For example, it can be shown that if \(\sigma\) contains a pair of neurons \(j,k\) with the property that all neurons sending edges to \(j\) also send edges to \(k\), and \(j\to k\) but \(k\not\to j\), then \(\sigma\) cannot be a fixed point support. Figure 17: (A) All uniform in-degree graphs of size \(n=3\). (B) The fixed point survival rule in Theorem 5.2.
Intuitively, this is because \(k\) is receiving a strict superset of the inputs to \(j\), and this imbalance rules out their ability to coexist in the same fixed point support. This property motivates the following definition. **Definition 5.3**.: We say that \(k\)_graphically dominates \(j\) with respect to \(\sigma\)_ in \(G\) if the following three conditions all hold: 1. For each \(i\in\sigma\setminus\{j,k\}\), if \(i\to j\) then \(i\to k\). 2. If \(j\in\sigma\), then \(j\to k\). 3. If \(k\in\sigma\), then \(k\not\to j\). We refer to this as "inside-in" domination if \(j,k\in\sigma\) (see Figure 18A). In this case, we must have \(j\to k\) and \(k\not\to j\). If \(j\in\sigma\), \(k\notin\sigma\), we call it "outside-in" domination (Figure 18B). On the other hand, "inside-out" domination is the case where \(k\in\sigma\), \(j\notin\sigma\), and "outside-out" domination refers to \(j,k\notin\sigma\) (see Figure 18C-D). What graph rules does domination give us? Intuitively, when inside-in domination is present, the "graphical balance" necessary to support a fixed point is violated, and so \(\sigma\notin\mathrm{FP}(G)\). When \(k\) outside-in dominates \(j\), with \(j\in\sigma\) and \(k\notin\sigma\), again there is an imbalance, and this time it guarantees that neuron \(k\) turns on, since it receives all the inputs that were sufficient to turn on neuron \(j\). Thus, there cannot be a fixed point with support \(\sigma\) since node \(k\) will violate the off-neuron conditions. We can draw interesting conclusions in the other cases of graphical domination as well, as Theorem 5.4 shows. **Theorem 5.4** ([19]).: _Suppose \(k\) graphically dominates \(j\) with respect to \(\sigma\) in \(G\). Then the following all hold:_ 1. _(inside-in) If_ \(j,k\in\sigma\)_, then_ \(\sigma\notin\mathrm{FP}(G|_{\sigma})\) _and thus_ \(\sigma\notin\mathrm{FP}(G)\)_._ 2. _(outside-in) If_ \(j\in\sigma\)_,_ \(k\notin\sigma\)_, then_ \(\sigma\notin\mathrm{FP}(G|_{\sigma\cup k})\) _and thus_ \(\sigma\notin\mathrm{FP}(G)\)_._ 3. _(inside-out) If_ \(k\in\sigma\)_,_ \(j\notin\sigma\)_, then_ \(\sigma\in\mathrm{FP}(G|_{\sigma})\;\Rightarrow\;\sigma\in\mathrm{FP}(G|_{ \sigma\cup j})\)_._ 4. _(outside-out) If_ \(j,k\notin\sigma\)_, then_ \(\sigma\in\mathrm{FP}(G|_{\sigma\cup k})\;\Rightarrow\;\sigma\in\mathrm{FP}(G|_ {\sigma\cup j})\)_._ The four cases of Theorem 5.4 are illustrated in Figure 18. This theorem was originally proven in [19]. Here we provide a more elementary proof, using only the definition of CTLNs and ideas from Section 2. Proof.: Suppose that \(k\) graphically dominates \(j\) with respect to \(\sigma\) in \(G\). To prove statements 1 and 2 in the theorem, we will also assume that there exists a fixed point \(x^{*}\) of the associated CTLN with support \(\mathrm{supp}(x^{*})=\sigma\). This will allow us to arrive at a contradiction. If \(x^{*}\) is a fixed point, we must have \(x^{*}_{i}=[y^{*}_{i}]_{+}\) for all \(i\in[n]\) (see equation (3) from Section 2). Recalling that \(W_{jj}=W_{kk}=0\), and that \(x^{*}_{i}=0\) for \(i\notin\sigma\), it follows that for any \(j,k\in[n]\), we have: \[y^{*}_{j} = \sum_{i\in\sigma\setminus\{j,k\}}W_{ji}x^{*}_{i}+W_{jk}x^{*}_{k}+\theta,\] \[y^{*}_{k} = \sum_{i\in\sigma\setminus\{j,k\}}W_{ki}x^{*}_{i}+W_{kj}x^{*}_{j}+\theta.\] Since \(k\) graphically dominates \(j\) with respect to \(\sigma\), we know that \(W_{ji}\leq W_{ki}\) for all \(i\in\sigma\setminus\{j,k\}\). 
This is because the off-diagonal values \(W_{\ell i}\) are either \(-1+\varepsilon\), for \(i\to\ell\), or \(-1-\delta\), for \(i\not\to\ell\); and \(-1+\varepsilon>-1-\delta\). It now follows from the above equations that \(y_{j}^{*}-W_{jk}x_{k}^{*}\leq y_{k}^{*}-W_{kj}x_{j}^{*}\). Equivalently, \[y_{j}^{*}+W_{kj}x_{j}^{*}\leq y_{k}^{*}+W_{jk}x_{k}^{*}. \tag{8}\] We will refer frequently to (8) in what follows. Figure 18: **Graphical domination: four cases.** In all cases, \(k\) graphically dominates \(j\) with respect to \(\sigma\). In particular, the set of vertices of \(\sigma\setminus\{j,k\}\) sending edges to \(k\) (red ovals) always contains the set of vertices sending edges to \(j\) (blue ovals). There are four cases of domination to consider. We begin with the first two: 1. (inside-in) If \(j,k\in\sigma\), then \(x_{j}^{*}=y_{j}^{*}>0\) and \(x_{k}^{*}=y_{k}^{*}>0\), and so at the fixed point we must have \((1+W_{kj})x_{j}^{*}\leq(1+W_{jk})x_{k}^{*}\). But domination in this case implies \(j\to k\) and \(k\not\to j\), so that \(W_{kj}=-1+\varepsilon\) and \(W_{jk}=-1-\delta\). Plugging this in, we obtain \(\varepsilon x_{j}^{*}\leq-\delta x_{k}^{*}\). This results in a contradiction, since \(x_{j}^{*},x_{k}^{*}>0\) and \(\varepsilon,\delta>0\). We conclude that \(\sigma\notin\operatorname{FP}(G)\). More specifically, since the contradiction involved only the on-neuron conditions, it follows that \(\sigma\notin\operatorname{FP}(G|_{\sigma})\). 2. (outside-in) If \(j\in\sigma\) and \(k\notin\sigma\), then \(x_{j}^{*}=y_{j}^{*}>0\) and \(x_{k}^{*}=0\), with \(y_{k}^{*}\leq 0\). It follows from (8) that \((1+W_{kj})x_{j}^{*}\leq 0\). Since this case of domination also has \(j\to k\), we obtain \((1+W_{kj})x_{j}^{*}=\varepsilon x_{j}^{*}\leq 0\), a contradiction. Again, we can conclude that \(\sigma\notin\operatorname{FP}(G)\), and more specifically that \(\sigma\notin\operatorname{FP}(G|_{\sigma\cup k})\). This completes the proof of statements 1 and 2. To prove statements 3 and 4, we assume only that \(\sigma\in\operatorname{FP}(G|_{\sigma})\), so that a fixed point \(x^{*}\) with support \(\sigma\) exists in the restricted network \(G|_{\sigma}\), but does not necessarily extend to larger networks. Whether or not it extends depends on whether \(y_{i}^{*}\leq 0\) for all \(i\notin\sigma\). 3. (inside-out) If \(j\not\in\sigma\) and \(k\in\sigma\), then \(x_{j}^{*}=0\) and \(x_{k}^{*}=y_{k}^{*}>0\), and so (8) becomes \(y_{j}^{*}\leq(1+W_{jk})x_{k}^{*}\). Domination in this case implies \(k\not\to j\), so we obtain \(y_{j}^{*}\leq-\delta x_{k}^{*}<0\). This shows that \(j\) is guaranteed to satisfy the required off-neuron condition. We can thus conclude that \(\sigma\in\operatorname{FP}(G|_{\sigma\cup j})\). 4. (outside-out) If \(j,k\notin\sigma\), then \(x_{j}^{*}=x_{k}^{*}=0\), and so (8) tells us that \(y_{j}^{*}\leq y_{k}^{*}\). This is true irrespective of whether or not \(j\to k\) or \(k\to j\) (and both are optional in this case). Clearly, if \(y_{k}^{*}\leq 0\) then \(y_{j}^{*}\leq 0\). We can thus conclude that if \(\sigma\in\operatorname{FP}(G|_{\sigma\cup k})\), then \(\sigma\in\operatorname{FP}(G|_{\sigma\cup j})\). Rules 4, 5, and 7 are all consequences of Theorem 5.4. To see how, consider a graph with a source \(j\in\sigma\) that has an edge \(j\to k\) for some \(k\in[n]\). Since \(j\) is a source, it has no incoming edges from within \(\sigma\). If \(k\in\sigma\), then \(k\) inside-in dominates \(j\) and so \(\sigma\notin\operatorname{FP}(G)\).
If \(k\notin\sigma\), then \(k\) outside-in dominates \(j\) and again \(\sigma\notin\operatorname{FP}(G)\). Rule 4(i) immediately follows. We leave it as an exercise to prove Rules 4(ii), 5(i), 5(ii), and 7. ### Simply-embedded subgraphs and covers Finally, we introduce the concept of simply-embedded subgraphs. This is the last piece we need before presenting the complete set of elementary graph rules. **Definition 5.5** (simply-embedded).: We say that a subgraph \(G|_{\tau}\) is _simply-embedded in \(G\)_ if for each \(k\notin\tau\), either 1. \(k\to i\) for all \(i\in\tau\), or 2. \(k\not\to i\) for all \(i\in\tau\). In other words, while \(G|_{\tau}\) can have any internal structure, the rest of the network treats all nodes in \(\tau\) equally (see Figure 19A). By abuse of notation, we sometimes say that the corresponding subset of vertices \(\tau\subseteq[n]\) is simply-embedded in \(G\). We allow \(\tau=[n]\) as a trivial case, meaning that \(G\) is simply-embedded in itself. At the other extreme, all singletons \(\tau=\{i\}\) and the empty set \(\tau=\emptyset\) are simply-embedded in \(G\), also for trivial reasons. Note that a subset of a simply-embedded set, \(\omega\subset\tau\), need not be simply-embedded. This is because nodes in \(\tau\setminus\omega\) may not treat those in \(\omega\) equally. Figure 19: **Simply-embedded subgraphs.** Now let's consider the CTLN equations for neurons in a simply-embedded subgraph \(G|_{\tau}\), for \(\tau\subset[n]\). For each \(i\in\tau\), the equations for the dynamics can be rewritten as: \[\frac{dx_{i}}{dt}=-x_{i}+\left[\sum_{j\in\tau}W_{ij}x_{j}+\sum_{k\not\in\tau}W_{ ik}x_{k}+\theta\right]_{+},\] where the term \(\sum_{k\not\in\tau}W_{ik}x_{k}\) is identical for all \(i\in\tau\). This is because \(W_{ik}=-1+\varepsilon\), if \(k\to i\), and \(W_{ik}=-1-\delta\) if \(k\not\to i\); so the fact that \(k\) treats all \(i\in\tau\) equally means that the matrix entries \(\{W_{ik}\}_{i\in\tau}\) are identical for fixed \(k\). We can thus define a single time-varying input function, \[\mu_{\tau}(t)=\sum_{k\not\in\tau}W_{ik}x_{k}(t)+\theta,\ \ \text{for}\ \ i\in\tau,\] that is the same independent of the choice of \(i\in\tau\). This gives us: \[\frac{dx_{i}}{dt}=-x_{i}+\left[\sum_{j\in\tau}W_{ij}x_{j}+\mu_{\tau}(t)\right] _{+},\ \text{for each}\ i\in\tau.\] In particular, the neurons in \(\tau\) evolve according to the dynamics of the local network \(G|_{\tau}\) in the presence of a time-varying input \(\mu_{\tau}(t)\), in lieu of the constant \(\theta\). Suppose we have a fixed point \(x^{*}\) of the full network \(G\), with support \(\sigma\in\text{FP}(G)\). At the fixed point, \[\mu_{\tau}^{*}=\sum_{k\not\in\tau}W_{ik}x_{k}^{*}+\theta=\sum_{k\in\sigma \setminus\tau}W_{ik}x_{k}^{*}+\theta,\] which is a constant. We can think of this as a new choice of the CTLN input parameter, \(\widetilde{\theta}=\mu_{\tau}^{*}\), with the caveat that we may have \(\widetilde{\theta}\leq 0\). It follows that the restriction of the fixed point to \(\tau\), \(x_{\tau}^{*}\), must be a fixed point of subnetwork \(G|_{\tau}\). If \(\widetilde{\theta}\leq 0\), this will be the zero fixed point corresponding to \(\emptyset\) support. If \(\widetilde{\theta}>0\), this fixed point will have nonempty support \(\sigma\cap\tau\in\text{FP}(G|_{\tau})\). From these observations, we have the following key lemma (see Figure 19B): **Lemma 5.6**.: _Let \(G|_{\tau}\) be simply-embedded in \(G\). 
Then for any \(\sigma\subseteq[n]\),_ \[\sigma\in\text{FP}(G)\;\Rightarrow\;\sigma\cap\tau\in\text{FP}(G|_{\tau}) \cup\{\emptyset\}.\] What happens if we consider more than one simply-embedded subgraph? Lemma 5.7 shows that intersections of simply-embedded subgraphs are also simply-embedded. However, the union of two simply-embedded subgraphs is only guaranteed to be simply-embedded if the intersection is nonempty. (It is easy to find a counterexample if the intersection is empty.) **Lemma 5.7**.: _Let \(\tau_{1},\tau_{2}\subseteq[n]\) be simply-embedded in \(G\). Then \(\tau_{1}\cap\tau_{2}\) is simply-embedded in \(G\). If \(\tau_{1}\cap\tau_{2}\neq\emptyset,\) then \(\tau_{1}\cup\tau_{2}\) is also simply-embedded in \(G\)._ Proof.: If \(\tau_{1}\cap\tau_{2}=\emptyset\), then the intersection is trivially simply-embedded. Assume \(\tau_{1}\cap\tau_{2}\neq\emptyset\), and consider \(k\notin\tau_{1}\cap\tau_{2}\). If \(k\notin\tau_{1}\), then \(k\) treats all vertices in \(\tau_{1}\) equally and must therefore treat all vertices in \(\tau_{1}\cap\tau_{2}\) equally. By the same logic, if \(k\notin\tau_{2}\) then it must treat all vertices in \(\tau_{1}\cap\tau_{2}\) equally. It follows that \(\tau_{1}\cap\tau_{2}\) is simply-embedded in \(G\). Next, consider \(\tau_{1}\cup\tau_{2}\) for a pair of subsets \(\tau_{1},\tau_{2}\) such that \(\tau_{1}\cap\tau_{2}\neq\emptyset.\) Let \(j\in\tau_{1}\cap\tau_{2}\) and \(k\notin\tau_{1}\cup\tau_{2}\). If \(k\to j\), then \(k\to i\) for all \(i\in\tau_{1}\) since \(k\notin\tau_{1}\); moreover, \(k\to\ell\) for all \(\ell\in\tau_{2}\) since \(k\notin\tau_{2}\). If, on the other hand, \(k\not\to j\), then by the same logic \(k\not\to i\) for any \(i\in\tau_{1}\) and \(k\not\to\ell\) for any \(\ell\in\tau_{2}\). It follows that \(\tau_{1}\cup\tau_{2}\) is simply-embedded in \(G\). If we have two simply-embedded subgraphs, \(G|_{\tau_{i}}\) and \(G|_{\tau_{j}}\), we know that for any \(\sigma\in\text{FP}(G)\), \(\sigma\) must restrict to a fixed point \(\sigma_{i}=\sigma\cap\tau_{i}\) and \(\sigma_{j}=\sigma\cap\tau_{j}\) in each of those subgraphs. But when can we _glue_ together such a \(\sigma_{i}\in\text{FP}(G|_{\tau_{i}})\) and \(\sigma_{j}\in\text{FP}(G|_{\tau_{j}})\) to produce a larger fixed point support \(\sigma_{i}\cup\sigma_{j}\) in \(\text{FP}(G|_{\tau_{i}\cup\tau_{j}})\)? Lemma 5.8 precisely answers this question. It uses the following notation: \[\widehat{\text{FP}}(G)\stackrel{{\text{def}}}{{=}}\text{FP}(G) \cup\{\emptyset\}.\] **Lemma 5.8** (pairwise gluing).: _Suppose \(G|_{\tau_{i}},G|_{\tau_{j}}\) are simply-embedded in \(G\), and consider \(\sigma_{i}\in\widehat{\text{FP}}(G|_{\tau_{i}})\) and \(\sigma_{j}\in\widehat{\text{FP}}(G|_{\tau_{j}})\) that satisfy \(\sigma_{i}\cap\tau_{j}=\sigma_{j}\cap\tau_{i}\) (so that \(\sigma_{i},\sigma_{j}\) agree on the overlap \(\tau_{i}\cap\tau_{j}\))._ _Then_ \[\sigma_{i}\cup\sigma_{j}\in\widehat{\mathrm{FP}}(G|_{\tau_{i}\cup\tau_{j}})\] _if and only if one of the following holds:_ 1. \(\tau_{i}\cap\tau_{j}=\emptyset\) _and_ \(\sigma_{i},\sigma_{j}\in\widehat{\mathrm{FP}}(G|_{\tau_{i}\cup\tau_{j}})\)_, or_ 2. \(\tau_{i}\cap\tau_{j}=\emptyset\) _and_ \(\sigma_{i},\sigma_{j}\notin\widehat{\mathrm{FP}}(G|_{\tau_{i}\cup\tau_{j}})\)_, or_ 3. \(\tau_{i}\cap\tau_{j}\neq\emptyset\)_._ Parts (i-ii) of Lemma 5.8 are essentially the content of [1, Theorem 14]. Part (iii) can also be proven with similar arguments. 
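Since the elementary rules of the next subsection lean heavily on Definition 5.5, it is worth noting that the simply-embedded condition is cheap to verify computationally. The helper below is an illustrative sketch of ours (same 0-indexed adjacency convention as the earlier brute-force example); Lemma 5.7 can then be spot-checked by testing \(\tau_{1}\cap\tau_{2}\) and \(\tau_{1}\cup\tau_{2}\) whenever two overlapping subsets both pass.

```python
import numpy as np

def is_simply_embedded(A, tau):
    # Definition 5.5: tau is simply-embedded in G iff each node k outside tau
    # either sends an edge to every i in tau or to none of them.
    n = A.shape[0]
    tau = list(tau)
    for k in range(n):
        if k in tau:
            continue
        edges_from_k = {int(A[i, k]) for i in tau}  # the k -> i entries, i in tau
        if len(edges_from_k) > 1:
            return False  # k treats some nodes of tau differently
    return True

# Node 2 sends edges to all of tau = {0, 1} and node 3 to none of it, so tau
# is simply-embedded here, regardless of the internal edge 0 -> 1.
A = np.array([[0, 0, 1, 0],
              [1, 0, 1, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
print(is_simply_embedded(A, [0, 1]))  # True
```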
### Elementary graph rules

In this section we collect a set of elementary graph rules from which all other graph rules can be derived. The first two elementary rules arise from general arguments about TLN fixed points stemming from the hyperplane arrangement picture. They hold for all competitive/inhibition-dominated nondegenerate TLNs, as does Elem Rule 3 (aka Rule 8). The last three elementary graph rules are specific to CTLNs, and recap results from the previous three subsections. As usual, \(G\) is a graph on \(n\) nodes and \(\mathrm{FP}(G)\) is the set of fixed point supports. There are six elementary graph rules: * **Elem Rule 1** (unique supports): For a given \(G\), there is at most one fixed point per support \(\sigma\subseteq[n]\). The fixed points can therefore be labeled by the elements of \(\mathrm{FP}(G)\). * **Elem Rule 2** (restriction/lifting): Let \(\sigma\subseteq[n]\). Then \[\sigma\in\mathrm{FP}(G)\ \Leftrightarrow\ \sigma\in\mathrm{FP}(G|_{\sigma})\text{ and }\sigma\in\mathrm{FP}(G|_{\sigma\cup k})\text{ for all }k\notin\sigma.\] Moreover, whether \(\sigma\in\mathrm{FP}(G|_{\sigma})\) survives to \(\sigma\in\mathrm{FP}(G|_{\sigma\cup k})\) depends only on the outgoing edges \(i\to k\) for \(i\in\sigma\), not on the backward edges \(k\to i\). * **Elem Rule 3** (parity): The total number of fixed points, \(|\,\mathrm{FP}(G)|\), is always odd. * **Elem Rule 4** (uniform in-degree): If \(G|_{\sigma}\) has uniform in-degree \(d\), then (i) \(\sigma\in\mathrm{FP}(G|_{\sigma})\), and (ii) \(\sigma\in\mathrm{FP}(G|_{\sigma\cup k})\ \Leftrightarrow\ d_{k}\leq d\) in \(G|_{\sigma\cup k}\). In particular, \(\sigma\in\mathrm{FP}(G)\ \Leftrightarrow\ \) there does not exist \(k\notin\sigma\) that receives more than \(d\) edges from \(\sigma\). * **Elem Rule 5** (domination): Suppose \(k\) graphically dominates \(j\) with respect to \(\sigma\). 1. (inside-in) If \(j,k\in\sigma\), then \(\sigma\notin\mathrm{FP}(G|_{\sigma})\) and thus \(\sigma\notin\mathrm{FP}(G)\). 2. (outside-in) If \(j\in\sigma\), \(k\notin\sigma\), then \(\sigma\notin\mathrm{FP}(G|_{\sigma\cup k})\) and thus \(\sigma\notin\mathrm{FP}(G)\). 3. (inside-out) If \(k\in\sigma\), \(j\notin\sigma\), then \(\sigma\in\mathrm{FP}(G|_{\sigma})\ \Rightarrow\ \sigma\in\mathrm{FP}(G|_{\sigma\cup j})\). 4. (outside-out) If \(j,k\not\in\sigma\), then \(\sigma\in\mathrm{FP}(G|_{\sigma\cup k})\ \Rightarrow\ \sigma\in\mathrm{FP}(G|_{\sigma\cup j})\). * **Elem Rule 6** (simply-embedded): Suppose that \(G|_{\tau_{i}},G|_{\tau_{j}}\) are simply-embedded in \(G\), and recall the notation \(\widehat{\mathrm{FP}}(G)=\mathrm{FP}(G)\cup\{\emptyset\}\). We have the following restriction and gluing rules: (a) (restriction) \(\sigma\in\mathrm{FP}(G)\Rightarrow\sigma\cap\tau_{i}\in\widehat{\mathrm{FP}}(G|_{\tau_{i}})\). (b) (pairwise gluing) If \(\sigma_{i}\in\widehat{\mathrm{FP}}(G|_{\tau_{i}})\), \(\sigma_{j}\in\widehat{\mathrm{FP}}(G|_{\tau_{j}})\), and \(\sigma_{i}\cap\tau_{j}=\sigma_{j}\cap\tau_{i}\) (so that \(\sigma_{i},\sigma_{j}\) agree on the overlap \(\tau_{i}\cap\tau_{j}\)), then \(\sigma_{i}\cup\sigma_{j}\in\widehat{\mathrm{FP}}(G|_{\tau_{i}\cup\tau_{j}})\) if and only if one of the following holds: (i) \(\tau_{i}\cap\tau_{j}=\emptyset\) and \(\sigma_{i},\sigma_{j}\in\widehat{\mathrm{FP}}(G|_{\tau_{i}\cup\tau_{j}})\), (ii) \(\tau_{i}\cap\tau_{j}=\emptyset\) and \(\sigma_{i},\sigma_{j}\notin\widehat{\mathrm{FP}}(G|_{\tau_{i}\cup\tau_{j}})\), (iii) \(\tau_{i}\cap\tau_{j}\neq\emptyset\).
Moreover, if \(\tau_{i}\cap\tau_{j}\neq\emptyset\), we are also guaranteed that \(G|_{\tau_{i}\cup\tau_{j}}\) and \(G|_{\tau_{i}\cap\tau_{j}}\) are simply-embedded in \(G\). Thus, \(\sigma_{i}\cap\sigma_{j}\in\widehat{\mathrm{FP}}(G|_{\tau_{i}\cap\tau_{j}})\). If, additionally, \(\sigma_{i}\cap\sigma_{j}\neq\tau_{i}\cap\tau_{j}\), then \(\sigma_{i},\sigma_{j}\in\widehat{\mathrm{FP}}(G|_{\tau_{i}\cup\tau_{j}})\). (c) (lifting) If \(\{\tau_{1},\ldots,\tau_{N}\}\) is a simply-embedded cover of \(G\) and \(\sigma\cap\tau_{i}\in\mathrm{FP}(G|_{\tau_{i}})\) for each \(i\in[N]\), then \[\sigma\in\mathrm{FP}(G)\ \Leftrightarrow\ \sigma\in\mathrm{FP}(G|_{\sigma}).\] Elem Rule 6 is illustrated in Figure 20. It collects several results related to simply-embedded graphs. Elem Rule 6(a) is the same as Lemma 5.6, while Elem Rule 6(b) is given by Lemmas 5.7 and 5.8. Note that this rule is valid even if \(\sigma_{i}\) or \(\sigma_{j}\) is empty. Elem Rule 6(c) applies to _simply-embedded covers_ of \(G\), a notion we will define in the next section (see Definition 6.1, below). The forward direction, \(\sigma\in\mathrm{FP}(G)\Rightarrow\sigma\in\mathrm{FP}(G|_{\sigma})\), follows from Elem Rule 2. The backwards direction is the content of [1, Lemma 8].

## 6 Gluing rules

So far we have seen a variety of graph rules and the elementary graph rules from which they are derived. These rules allow us to rule in and rule out potential fixed points in \(\mathrm{FP}(G)\) from purely graph-theoretic considerations. In this section, we consider networks whose graph \(G\) is composed of smaller induced subgraphs, \(G|_{\tau_{i}}\), for \(i\in[N]=\{1,\ldots,N\}\). What is the relationship between \(\mathrm{FP}(G)\) and the fixed points of the components, \(\mathrm{FP}(G|_{\tau_{i}})\)? It turns out we can obtain nice results if the induced subgraphs \(G|_{\tau_{i}}\) are all simply-embedded in \(G\). In this case, we say that \(G\) has a simply-embedded cover. **Definition 6.1** (simply-embedded covers).: We say that \(\mathcal{U}=\{\tau_{1},\ldots,\tau_{N}\}\) is a _simply-embedded cover_ of \(G\) if each \(\tau_{i}\) is simply-embedded in \(G\), and for every vertex \(j\in[n]\), there exists an \(i\in[N]\) such that \(j\in\tau_{i}\). In other words, the \(\tau_{i}\)'s are a vertex cover of \(G\). If the \(\tau_{i}\)'s are all disjoint, we say that \(\mathcal{U}\) is a _simply-embedded partition_ of \(G\). Every graph \(G\) has a trivial simply-embedded cover, with \(N=n\), obtained by taking \(\tau_{i}=\{i\}\) for each \(i\in[n]\). This is also a simply-embedded partition. At the other extreme, since the full set of vertices \([n]\) is a simply-embedded set, we also have the trivial cover with \(N=1\) and \(\tau_{1}=[n]\). These covers, however, do not yield useful information about \(\mathrm{FP}(G)\). In contrast, nontrivial simply-embedded covers can provide strong constraints on, and in some cases fully determine, the set of fixed points \(\mathrm{FP}(G)\). Some of these constraints can be described via _gluing rules_, which we explain below. In the case that \(G\) has a simply-embedded cover, Lemma 5.6 tells us that all "global" fixed point supports in \(\mathrm{FP}(G)\) must be unions of "local" fixed point supports in the \(\mathrm{FP}(G|_{\tau_{i}})\), since every \(\sigma\in\mathrm{FP}(G)\) restricts to \(\sigma\cap\tau_{i}\in\mathrm{FP}(G|_{\tau_{i}})\cup\{\emptyset\}\). But what about the other direction?
**Question 4**.: When does a collection of local fixed point supports \(\{\sigma_{i}\}\), with each nonempty \(\sigma_{i}\in\operatorname{FP}(G|_{\tau_{i}})\), glue together to form a global fixed point support \(\sigma=\cup\sigma_{i}\in\operatorname{FP}(G)\)? To answer this question, we develop some notions inspired by sheaf theory. For a graph \(G\) on \(n\) nodes, with a simply-embedded cover \(\mathcal{U}=\{\tau_{1},\ldots,\tau_{N}\}\), we define the _gluing complex_ as: \[\mathcal{F}_{G}(\mathcal{U}) \stackrel{{\mathrm{def}}}{{=}} \{\sigma=\cup_{i}\sigma_{i}\mid\sigma\neq\emptyset,\sigma_{i}\in \operatorname{FP}(G|_{\tau_{i}})\cup\{\emptyset\},\] \[\text{and }\sigma_{i}\cap\tau_{j}=\sigma_{j}\cap\tau_{i}\text{ for all }i,j\in[N]\}.\] In other words, \(\mathcal{F}_{G}(\mathcal{U})\) consists of all \(\sigma\subseteq[n]\) that can be obtained by gluing together local fixed point supports \(\sigma_{i}\in\operatorname{FP}(G|_{\tau_{i}})\). Note that in order to guarantee that \(\sigma_{i}=\sigma\cap\tau_{i}\) for each \(i\), it is necessary that the \(\sigma_{i}\)'s agree on overlaps \(\tau_{i}\cap\tau_{j}\) (hence the last requirement). This means that \(\mathcal{F}_{G}(\mathcal{U})\) is equivalent to: \[\mathcal{F}_{G}(\mathcal{U})=\{\sigma\neq\emptyset\mid\sigma\cap\tau_{i}\in \widehat{\operatorname{FP}}(G|_{\tau_{i}})\;\forall\;\tau_{i}\in\mathcal{U}\},\] using the notation \(\widehat{\operatorname{FP}}(G|_{\tau_{i}})=\operatorname{FP}(G|_{\tau_{i}}) \cup\{\emptyset\}\). It will also be useful to consider the case where \(\sigma\cap\tau_{i}\) is not allowed to be empty for any \(i\). In this case, we define \[\mathcal{F}_{G}^{*}(\mathcal{U})\stackrel{{\mathrm{def}}}{{=}} \{\sigma\subseteq[n]\mid\sigma\cap\tau_{i}\in\operatorname{FP}(G|_{\tau_{i}}) \;\forall\;\tau_{i}\in\mathcal{U}\}.\] Translating Lemma 5.6 into the new notation yields the following: **Lemma 6.2**.: _A CTLN with graph \(G\) and simply-embedded cover \(\mathcal{U}\) satisfies_ \[\operatorname{FP}(G)\subseteq\mathcal{F}_{G}(\mathcal{U}).\] The central question addressed by gluing rules (Question 4) thus translates to: What elements of \(\mathcal{F}_{G}(\mathcal{U})\) are actually in \(\operatorname{FP}(G)\)? **Some examples.** Before delving into this question, we make a few observations. First, note that although \(\mathcal{F}_{G}(\mathcal{U})\) is never empty (it must contain \(\operatorname{FP}(G)\)), the set \(\mathcal{F}_{G}^{*}(\mathcal{U})\) may be empty. For example, in Figure 21A, \(\mathcal{F}_{G}^{*}(\mathcal{U})=\emptyset\), because the only option for \(\sigma\cap\tau_{1}\) is \(\{123\}\), and this would imply \(3\in\sigma\cap\tau_{2}\); but there is no such option in \(\operatorname{FP}(G|_{\tau_{2}}).\) On the other hand, if we are allowed \(\sigma\cap\tau_{i}=\emptyset\), we can choose \(\sigma=\{4\}\) and satisfy both \(\sigma\cap\tau_{1}\in\widehat{\operatorname{FP}}(G|_{\tau_{1}})\) and \(\sigma\cap\tau_{2}\in\widehat{\operatorname{FP}}(G|_{\tau_{2}})\). In fact, this is the only such choice and therefore \(\mathcal{F}_{G}(\mathcal{U})=\{4\}\). Since \(|\operatorname{FP}(G)|\geq 1\), it follows from Lemma 6.2 that \(\operatorname{FP}(G)=\{4\}\). In this case, \(\operatorname{FP}(G)=\mathcal{F}_{G}(\mathcal{U})\). Figure 21B displays another graph, \(G\), that has a simply-embedded cover \(\mathcal{U}\) with three components, \(\tau_{1},\tau_{2}\), and \(\tau_{3}\). 
Each set of local fixed point supports, \(\operatorname{FP}(G|_{\tau_{i}})\) (shown at the bottom of Figure 21B), can easily be computed using graph rules. Applying the definitions, we obtain: \[\mathcal{F}_{G}^{*}(\mathcal{U}) = \{12346,123456\},\] \[\mathcal{F}_{G}(\mathcal{U}) = \{12346,123456,1234,12345,56,5,6\}.\] Since \(\operatorname{FP}(G)\subseteq\mathcal{F}_{G}(\mathcal{U})\), this narrows down the list of candidate fixed point supports in \(\operatorname{FP}(G)\). Using Elem Rule 5 (domination), we can eliminate supports \(56\) and \(5\), since \(6\) dominates \(5\) with respect to every \(\sigma\subseteq[n]\). On the other hand, Elem Rule 4 (uniform in-degree) allows us to verify that \(1234,12345,\) and \(123456\) are all fixed point supports of \(G\), while Rule 1 and Rule 6 (sinks) tell us that \(6,12346\in\mathrm{FP}(G)\). We can thus conclude that \(\mathrm{FP}(G)=\{12346,123456,1234,12345,6\}\subsetneq\mathcal{F}_{G}(\mathcal{U})\). Figure 21: Two networks with simply-embedded covers. Note that for both graphs in Figure 21, we have \(\mathcal{F}_{G}^{*}(\mathcal{U})\subseteq\mathrm{FP}(G)\subseteq\mathcal{F}_{G}(\mathcal{U})\). While the second containment is guaranteed by Lemma 6.2, the first one need not hold in general. As mentioned above, the central gluing question is to identify what elements of \(\mathcal{F}_{G}(\mathcal{U})\) are in \(\mathrm{FP}(G)\). Our strategy to address this question will be to identify architectures where we can iterate the pairwise gluing rule, Lemma 5.8 (a.k.a. Elem Rule 6(b)). Iteration is possible in a simply-embedded cover \(\mathcal{U}=\{\tau_{i}\}\) provided the unions at each step, \(\tau_{1}\cup\tau_{2}\cup\cdots\cup\tau_{\ell}\), are themselves simply-embedded (this may depend on the order). Fortunately, this is the case for several types of natural constructions, including _connected unions_, _disjoint unions_, _clique unions_, and _linear chains_, which we consider next. Finally, we will examine the case of _cyclic unions_, where pairwise gluing rules cannot be iterated, but for which we find an equally clean characterization of \(\mathrm{FP}(G)\). All five architectures result in theorems, which we call _gluing rules_, that are summarized in Table 2.

### Connected unions

Recall that the _nerve_ of a cover \(\mathcal{U}=\{\tau_{i}\}_{i=1}^{N}\) is the simplicial complex: \[\mathcal{N}(\mathcal{U})\stackrel{{\mathrm{def}}}{{=}}\{\alpha\subseteq[N]\mid\bigcap_{i\in\alpha}\tau_{i}\neq\emptyset\}.\] The nerve keeps track of the intersection data of the sets in the cover. We say that a vertex cover \(\mathcal{U}=\{\tau_{i}\}_{i=1}^{N}\) of \(G\) is _connected_ if its nerve is a connected simplicial complex. This means one can "walk" from any \(\tau_{i}\) to any other \(\tau_{j}\) through a sequence of steps between \(\tau_{i}\)'s that overlap. (Note that a connected nerve does not imply a connected \(G\), or vice versa.) Any graph \(G\) admits vertex covers that are connected. Having a connected cover that is also simply-embedded, however, is quite restrictive. We call such architectures _connected unions_: **Definition 6.3**.: A graph \(G\) is a _connected union_ of induced subgraphs \(\{G|_{\tau_{i}}\}\) if \(\{\tau_{1},\ldots,\tau_{N}\}\) is a simply-embedded cover of \(G\) that is also connected.
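Connectedness of a cover, as in Definition 6.3, depends only on which pairs of cover sets intersect, i.e., on the 1-skeleton of the nerve. A short breadth-first search therefore suffices to test it; the following sketch is ours (the helper name is hypothetical), shown on the cover used in the example that follows.

```python
from collections import deque

def is_connected_cover(cover):
    # Connectivity of the nerve reduces to connectivity of its 1-skeleton:
    # BFS over the "overlap graph" whose edges are pairs tau_i, tau_j that meet.
    sets = [set(t) for t in cover]
    if not sets:
        return False
    seen, queue = {0}, deque([0])
    while queue:
        i = queue.popleft()
        for j in range(len(sets)):
            if j not in seen and sets[i] & sets[j]:
                seen.add(j)
                queue.append(j)
    return len(seen) == len(sets)

# The cover {123, 345, 5678} from the example below is connected:
print(is_connected_cover([{1, 2, 3}, {3, 4, 5}, {5, 6, 7, 8}]))  # True
```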
If \(G\) has a connected simply-embedded cover, then without loss of generality we can enumerate the sets \(\tau_{1},\ldots,\tau_{N}\) in such a way that each partial union \(\tau_{1}\cup\tau_{2}\cup\cdots\cup\tau_{\ell}\) is also simply-embedded in \(G\), by ensuring that \(\tau_{\ell}\cap(\tau_{1}\cup\cdots\cup\tau_{\ell-1})\neq\emptyset\) for each \(\ell\) (see Lemma 5.7). This allows us to iterate the pairwise gluing rule, Elem Rule 6(b)iii. In fact, by analyzing the different cases with the \(\sigma_{i}\) empty or nonempty, we can determine that all gluings of compatible fixed point supports \(\{\sigma_{i}\}\) are realized in \(\mathrm{FP}(G)\). This yields our first gluing rule theorem: **Theorem 6.4**.: _If \(G\) is a connected union of subgraphs \(\{G|_{\tau_{i}}\}_{i=1}^{N}\), with \(\mathcal{U}=\{\tau_{i}\}_{i=1}^{N}\), then_ \[\mathrm{FP}(G)=\mathcal{F}_{G}(\mathcal{U}).\] It is easy to check that this theorem exactly predicts \(\mathrm{FP}(G)\) for the graphs in Figure 20E,F and Figure 21A. Example. To see the power of Theorem 6.4, consider the graph \(G\) on \(n=8\) nodes in Figure 22. \(G\) is a rather complicated graph, but it has a connected, simply-embedded cover \(\{\tau_{1}=123,\tau_{2}=345,\tau_{3}=5678\}\) with subgraphs \(G|_{\tau_{i}}\) given in Figure 22A. Note that for this graph, the simply-embedded requirement automatically determines all additional edges in \(G\). For example, since \(2\to 3\) in \(G|_{\tau_{1}}\), and \(3\in\tau_{2}\), we must also have \(2\to 4,5\). In contrast, \(1\not\to 3\) in \(G|_{\tau_{1}}\), and hence we must have \(1\not\to 4\) and \(1\not\to 5\). Using simple graph rules, it is easy to compute \(\mathrm{FP}(G|_{\tau_{1}})=\{123\}\), \(\mathrm{FP}(G|_{\tau_{2}})=\{34,5,345\}\), and \(\mathrm{FP}(G|_{\tau_{3}})=\{567,678,5678\}\), as these are small graphs. It would be much more difficult to compute the full network's \(\mathrm{FP}(G)\) in this way. However, because \(G\) is a connected union, Theorem 6.4 tells us that \(\mathrm{FP}(G)=\mathcal{F}_{G}(\mathcal{U}).\) By simply checking compatibility on overlaps of the possible \(\sigma_{i}=\sigma\cap\tau_{i}\in\mathrm{FP}(G|_{\tau_{i}})\), we can easily compute: \[\mathrm{FP}(G)=\mathcal{F}_{G}(\mathcal{U})=\{1234,1234678,1234567,12345678,567,5678,678\}.\] Note that the minimal fixed point supports, \(1234,567\), and \(678\), are all core motifs: \(G|_{1234}\) is a \(4\)-ufd graph, while the others are \(3\)-cycles. Moreover, they each have corresponding attractors, as predicted from our previous observations about core motifs [13]. The attractors are shown in Figure 22C.

### Disjoint unions, clique unions, cyclic unions, and linear chains

Theorem 6.4 gave us a nice gluing rule in the case where \(G\) has a connected simply-embedded cover. At the other extreme are simply-embedded _partitions_. If \(\mathcal{U}=\{\tau_{i}\}_{i=1}^{N}\) is a simply-embedded partition, then all \(\tau_{i}\)'s are disjoint and the nerve \(\mathcal{N}(\mathcal{U})\) is completely disconnected, consisting of the isolated vertices \(1,\ldots,N\). The following graph constructions all arise from simply-embedded partitions. **Definition 6.5**.: Consider a graph \(G\) with induced subgraphs \(\{G|_{\tau_{i}}\}\) corresponding to a vertex partition \(\mathcal{U}=\{\tau_{1},\ldots,\tau_{N}\}\). Then * \(G\) is a _disjoint union_ if there are no edges between \(\tau_{i}\) and \(\tau_{j}\) for \(i\neq j\). (See Figure 23A.)
* \(G\) is a _clique union_ if it contains all possible edges between \(\tau_{i}\) and \(\tau_{j}\) for \(i\neq j\). (See Figure 23B.) * \(G\) is a _linear chain_ if it contains all possible edges from \(\tau_{i}\) to \(\tau_{i+1}\), for \(i=1,\ldots,N-1\), and no other edges between distinct \(\tau_{i}\) and \(\tau_{j}\). (See Figure 23C.) * \(G\) is a _cyclic union_ if it contains all possible edges from \(\tau_{i}\) to \(\tau_{i+1}\), for \(i=1,\ldots,N-1\), as well as all possible edges from \(\tau_{N}\) to \(\tau_{1}\), but no other edges between distinct components \(\tau_{i}\), \(\tau_{j}\). (See Figure 23D.) Note that in each of these cases, \(\mathcal{U}\) is a simply-embedded partition of \(G\). Since the simply-embedded subgraphs in a partition are all disjoint, Lemma 5.8(i-ii) applies. Consequently, fixed point supports \(\sigma_{i}\in\mathrm{FP}(G|_{\tau_{i}})\) and \(\sigma_{j}\in\mathrm{FP}(G|_{\tau_{j}})\) will glue together if and only if either \(\sigma_{i}\) and \(\sigma_{j}\) both survive to yield fixed points in \(\mathrm{FP}(G)\), or neither survives. For both disjoint unions and clique unions, it is easy to see that all larger unions of the form \(\tau_{1}\cup\tau_{2}\cup\cdots\cup\tau_{\ell}\) are themselves simply-embedded. We can thus iteratively use the pairwise gluing Lemma 5.8. For disjoint unions, Lemma 5.8(i) applies, since every \(\sigma_{i}\in\mathrm{FP}(G|_{\tau_{i}})\) survives in \(G\). This yields our first gluing theorem. Recall that \(\widehat{\mathrm{FP}}(G)=\mathrm{FP}(G)\cup\{\emptyset\}\). Figure 22: **Connected union example.** (A) Component subgraphs and their fixed point supports. (B) The full network \(G\), with \(\mathrm{FP}(G)\) computed using Theorem 6.4. The minimal fixed point supports, \(1234\), \(567\), and \(678\), all correspond to core motifs. Vertices are colored to match the rate curves in C. (C) Several solutions to a CTLN with graph \(G\) and parameters \(\varepsilon=0.51,\delta=1.76\), and \(\theta=1\). The top three panels show that initial conditions near each of the minimal (core) fixed points produce solutions \(x(t)\) that fall into corresponding attractors. The bottom panel shows the solution for an initial condition near the full-support fixed point. Interestingly, even though the initial conditions for \(x_{1},x_{2},x_{3}\) and \(x_{4}\) are lower than those of the other nodes, the solution quickly converges to the attractor corresponding to the core motif \(G|_{1234}\) (same as in the top panel). **Theorem 6.6**.: [13, Theorem 11] _If \(G\) is a disjoint union of subgraphs \(\{G|_{\tau_{i}}\}_{i=1}^{N}\), with \(\mathcal{U}=\{\tau_{i}\}_{i=1}^{N}\), then_ \[\operatorname{FP}(G)=\mathcal{F}_{G}(\mathcal{U})=\{\cup_{i=1}^{N}\sigma_{i}\mid\sigma_{i}\in\widehat{\operatorname{FP}}(G|_{\tau_{i}})\:\forall\:i\in[N]\}\setminus\{\emptyset\}.\] Note that this looks identical to the result for connected unions, Theorem 6.4. One difference is that compatibility of \(\sigma_{i}\)'s need not be checked, since the \(\tau_{i}\)'s are disjoint, so \(\mathcal{F}_{G}(\mathcal{U})\) is particularly easy to compute. In this case the size of \(\operatorname{FP}(G)\) is also the maximum possible for a graph with a simply-embedded cover \(\mathcal{U}\): \[|\operatorname{FP}(G)|=\prod_{i=1}^{N}(|\operatorname{FP}(G|_{\tau_{i}})|+1)-1.\] On the other hand, for clique unions, we must apply Lemma 5.8(ii), which shows that only gluings involving a _nonempty_ \(\sigma_{i}\) from each component are allowed.
Hence \(\operatorname{FP}(G)=\mathcal{F}_{G}^{*}(\mathcal{U})\). Interestingly, the same result holds for cyclic unions, but the proof is different because the simply-embedded structure does _not_ get preserved under unions, and hence Lemma 5.8 cannot be iterated. These results are combined in the next theorem. **Theorem 6.7**.: [13, Theorems 12 and 13] _If \(G\) is a clique union or a cyclic union of subgraphs \(\{G|_{\tau_{i}}\}_{i=1}^{N}\), with \(\mathcal{U}=\{\tau_{i}\}_{i=1}^{N}\), then_ \[\operatorname{FP}(G)=\mathcal{F}_{G}^{*}(\mathcal{U})=\{\cup_{i=1}^{N}\sigma_{i}\mid\sigma_{i}\in\operatorname{FP}(G|_{\tau_{i}})\:\forall\:i\in[N]\}.\] In this case, \(|\operatorname{FP}(G)|=\prod_{i=1}^{N}|\operatorname{FP}(G|_{\tau_{i}})|\). Finally, we consider linear chain architectures.

\begin{table} \begin{tabular}{l|l|l|l} simply-embedded architecture & fixed point supports & \(|\operatorname{FP}(G)|\) & theorem \\ \hline \hline connected union & \(\operatorname{FP}(G)=\mathcal{F}_{G}(\mathcal{U})\) & depends on overlaps & Thm 6.4 \\ \hline disjoint union & \(\operatorname{FP}(G)=\mathcal{F}_{G}(\mathcal{U})\) & \(\prod_{i=1}^{N}(|\operatorname{FP}(G|_{\tau_{i}})|+1)-1\) & Thm 6.6 \\ & \(=\{\cup_{i}\sigma_{i}\mid\sigma_{i}\in\widehat{\operatorname{FP}}(G|_{\tau_{i}})\:\forall i\}\setminus\{\emptyset\}\) & & \\ \hline clique union & \(\operatorname{FP}(G)=\mathcal{F}_{G}^{*}(\mathcal{U})\) & \(\prod_{i=1}^{N}|\operatorname{FP}(G|_{\tau_{i}})|\) & Thm 6.7 \\ & \(=\{\cup_{i}\sigma_{i}\mid\sigma_{i}\in\operatorname{FP}(G|_{\tau_{i}})\:\forall i\in[N]\}\) & & \\ \hline linear chain & \(\operatorname{FP}(G)=\operatorname{FP}(G|_{\tau_{N}})\) & \(|\operatorname{FP}(G|_{\tau_{N}})|\) & Thm 6.8 \\ \hline cyclic union & \(\operatorname{FP}(G)=\mathcal{F}_{G}^{*}(\mathcal{U})\) & \(\prod_{i=1}^{N}|\operatorname{FP}(G|_{\tau_{i}})|\) & Thm 6.7 \\ & \(=\{\cup_{i}\sigma_{i}\mid\sigma_{i}\in\operatorname{FP}(G|_{\tau_{i}})\:\forall i\in[N]\}\) & & \\ \hline \end{tabular} \end{table} Table 2: **Summary of gluing rules.** For each simply-embedded architecture, \(\operatorname{FP}(G)\) is given in terms of the \(\operatorname{FP}(G|_{\tau_{i}})\)'s for component subgraphs. Figure 23: **Disjoint unions, clique unions, cyclic unions, and linear chains.** In each architecture, the \(\{\tau_{i}\}\) form a simply-embedded partition of \(G\). Thick edges between components indicate directed edges between every pair of nodes in the components.

In the case of a linear chain (Figure 23C), the gluing sequence must respect the ordering \(\tau_{1},\ldots,\tau_{N}\) in order to guarantee that the unions \(\tau_{1}\cup\tau_{2}\cup\cdots\cup\tau_{\ell}\) are all simply-embedded. (In the case of disjoint and clique unions, the order didn't matter.) Now consider the first pairwise gluing, with \(\tau_{1}\) and \(\tau_{2}\). Each \(\sigma_{1}\in\operatorname{FP}(G|_{\tau_{1}})\) has a target in \(\tau_{2}\), and hence does not survive to \(\operatorname{FP}(G|_{\tau_{1}\cup\tau_{2}})\) (by Rule 5(ii)). On the other hand, any \(\sigma_{2}\in\operatorname{FP}(G|_{\tau_{2}})\) has no outgoing edges to \(\tau_{1}\), and is thus guaranteed to survive. Elem Rule 6(b) thus tells us that \(\sigma_{1}\cup\sigma_{2}\notin\operatorname{FP}(G|_{\tau_{1}\cup\tau_{2}})\) unless \(\sigma_{1}=\emptyset\). Therefore, \(\operatorname{FP}(G|_{\tau_{1}\cup\tau_{2}})=\operatorname{FP}(G|_{\tau_{2}})\).
Iterating this procedure, adding the next \(\tau_{i}\) at each step, we see that \(\operatorname{FP}(G|_{\tau_{1}\cup\cdots\cup\tau_{\ell}})=\operatorname{FP}(G|_{\tau_{\ell}})\). In the end, we obtain our fourth gluing theorem: **Theorem 6.8**.: [11] _If \(G\) is a linear chain of subgraphs \(\{G|_{\tau_{i}}\}_{i=1}^{N}\), then_ \[\operatorname{FP}(G)=\operatorname{FP}(G|_{\tau_{N}}).\] Clearly, \(|\operatorname{FP}(G)|=|\operatorname{FP}(G|_{\tau_{N}})|\) in this case. Table 2 summarizes the gluing rules for connected unions, disjoint unions, clique unions, cyclic unions, and linear chains.

### Applications of gluing rules to core motifs

Using the above results, it is interesting to revisit the subject of core motifs. Recall that core motifs of CTLNs are subgraphs \(G|_{\sigma}\) that support a unique fixed point, which has full support: \(\operatorname{FP}(G|_{\sigma})=\{\sigma\}\). We denote the set of surviving core motifs by \[\operatorname{FP}_{\operatorname{core}}(G)\stackrel{{\text{def}}}{{=}}\{\sigma\in\operatorname{FP}(G)\mid G|_{\sigma}\text{ is a core motif of }G\}.\] For small CTLNs, we have seen that core motifs are predictive of a network's attractors [13]. We also saw this in Figure 22, with attractors corresponding to the core motifs in a CTLN for a connected union. What can gluing rules tell us about core motifs? Consider the architectures in Table 2. In the case of disjoint unions, we know that we can never obtain a core motif, since \(|\operatorname{FP}(G)|=|\mathcal{F}_{G}(\mathcal{U})|\geq 3\) whenever there is more than one component subgraph. In the case of connected unions, however, we have a nice result in the situation where all components \(\tau_{i}\) are core motifs. In this case, the additional compatibility requirement on overlaps forces \(\operatorname{FP}(G)=\mathcal{F}_{G}(\mathcal{U})=\{[n]\}\). **Corollary 6.9**.: _If \(G\) is a connected union of core motifs, then \(G\) is a core motif._ Proof.: Let \(G|_{\tau_{1}},\ldots,G|_{\tau_{N}}\) be the component core motifs for the connected union \(G\), a graph on \(n\) nodes. Since \(\mathcal{U}=\{\tau_{i}\}\) is a connected cover, and each component has \(\operatorname{FP}(G|_{\tau_{i}})=\{\tau_{i}\}\), the only possible \(\sigma\in\mathcal{F}_{G}(\mathcal{U})\) arises from taking \(\sigma_{i}=\tau_{i}\) in each component, so that \(\sigma=[n]\). (By compatibility, taking an empty set in any component forces choosing an empty set in all components, yielding \(\sigma=\cup\sigma_{i}=\emptyset\), which is not allowed in \(\mathcal{F}_{G}(\mathcal{U})\).) Applying Theorem 6.4, we see that \(\operatorname{FP}(G)=\mathcal{F}_{G}(\mathcal{U})=\{[n]\}\). Hence, \(G\) is a core motif. As of this writing, we have no good reason to believe the converse is true. However, we have yet to find a counterexample. In the case of clique unions and cyclic unions, however, \(\operatorname{FP}(G)=\mathcal{F}_{G}^{*}(\mathcal{U})\), and gluing in empty sets is again not allowed on components. In these cases, we obtain a similar result, and the converse is also true. **Corollary 6.10**.: _Let \(G\) be a clique union or a cyclic union of components \(\tau_{1},\ldots,\tau_{N}\). Then_ \[\operatorname{FP}_{\operatorname{core}}(G)=\{\cup_{i=1}^{N}\sigma_{i}\ |\ \sigma_{i}\in\operatorname{FP}_{\operatorname{core}}(G|_{\tau_{i}})\}.\] _In particular, \(G\) is a core motif if and only if every \(G|_{\tau_{i}}\) is a core motif._ Proof.: We will prove the second statement.
The expression for \(\operatorname{FP}_{\operatorname{core}}(G)\) easily follows from this together with Elem Rule 6(c). Let \(G\) be a clique union or a cyclic union for a simply-embedded partition \(\mathcal{U}=\{\tau_{i}\}\). Theorem 6.7 tells us that \(\operatorname{FP}(G)=\mathcal{F}_{G}^{*}(\mathcal{U})\). Observe that any \(\sigma\in\mathcal{F}_{G}^{*}(\mathcal{U})\) must have nonempty \(\sigma_{i}=\sigma\cap\tau_{i}\in\operatorname{FP}(G|_{\tau_{i}})\) for each \(i\). (\(\Leftarrow\)) If each \(G|_{\tau_{i}}\) is a core motif, it follows that \(\sigma_{i}=\tau_{i}\) for each \(i\), and hence \(\operatorname{FP}(G)=\{[n]\}\). (\(\Rightarrow\)) If the component graphs are not all core, then \(\operatorname{FP}(G)\) will necessarily have more than one fixed point and \(G\) cannot be core. Going back to Figure 12, we can now see that all core motifs up to size \(n=4\) are either clique unions, cyclic unions, or connected unions of smaller core motifs. For example, the 4-cycu graph is the cyclic union of a singleton (node 1), a 2-clique (nodes 2 and 3), and another singleton (node 4). The fusion 3-cycle is a clique union of a 3-cycle and a singleton. Finally, the 4-ufd is the connected union of a 3-cycle and a 2-clique. Infinite families of core motifs can be generated in this way, each having their own particular attractors.

### Modeling with cyclic unions

The power of graph rules is that they enable us to reason mathematically about the graph of a CTLN and make surprisingly accurate predictions about the dynamics. This is particularly true for cyclic unions, where the dynamics consistently appear to traverse the components in cyclic order. Consequently, these architectures are useful for modeling a variety of phenomena that involve sequential attractors. This includes the storage and retrieval of sequential memories, as well as central pattern generators (CPGs) responsible for rhythmic activity, such as locomotion [13, 14]. Recall that the attractors of a network tend to correspond to core motifs in \(\mathrm{FP}_{\mathrm{core}}(G)\). Using Corollary 6.10, we can easily engineer cyclic unions that have multiple sequential attractors. For example, consider the cyclic union in Figure 24A, with \(\mathrm{FP}_{\mathrm{core}}(G)\) consisting of all cycles of length 5 that contain exactly one node per component. For parameters \(\varepsilon=0.75\), \(\delta=4\), the CTLN yields a limit cycle (Figure 24B), corresponding to one such core motif, with sequential firing of a node from each component. By symmetry, there must be an equivalent limit cycle for every choice of 5 nodes, one from each layer, and thus the network is guaranteed to have \(m^{5}\) limit cycles. Note that this network architecture, increased to 7 layers, could serve as a mechanism for storing phone numbers in working memory (\(m=10\) for digits \(0-9\)). Figure 24: **The phone number network.** (A) A cyclic union with \(m\) neurons per layer (component), and all \(m^{2}\) feedforward connections from one layer to the next. (B) A limit cycle for the corresponding CTLN (with parameters \(\varepsilon=0.75\), \(\delta=4\)). As another application of cyclic unions, consider the graph in Figure 25A, which produces the quadruped gait 'bound' (similar to gallop), where we have associated each of the four colored nodes with a leg of the animal. Notice that the clique between pairs of legs ensures that those nodes co-fire, and the cyclic union structure guarantees that the activity flows forward cyclically. A similar network was created for the 'trot' gait, with appropriate pairs of legs joined by cliques. Figure 25B shows a network in which both the 'bound' and 'trot' gaits can coexist, with the network selecting one pattern (limit cycle) over the other based solely on initial conditions. This network was produced by essentially overlaying the two architectures that would produce the desired gaits, identifying the two graphs along the nodes corresponding to each leg. Notice that within this larger network, the induced subgraphs for each gait are no longer perfect cyclic unions (since they include additional edges between pairs of legs), and are no longer core motifs. And yet the combined network still produces limit cycles that are qualitatively similar to those of the isolated cyclic unions for each gait. It is an open question when this type of merging procedure for cyclic unions (or other types of subnetworks) will preserve the original limit cycles within the larger network. Figure 25: **A Central Pattern Generator circuit for quadruped motion.** (A) (Left) A cyclic union architecture on 6 nodes that produces the 'bound' gait. (Right) The limit cycle corresponding to the bound gait. (B) The graph on 8 nodes is formed from merging together architectures for the individual gaits, 'bound' and 'trot'. Note that the positions of the two hind legs (LH, RH) are flipped for ease of drawing the graph.

## 7 Conclusions

Recurrent network models such as TLNs have historically played an important role in theoretical neuroscience; they give mathematical grounding to key ideas about neural dynamics and connectivity, and provide concrete examples of networks that encode multiple attractors. These attractors represent the possible responses, e.g. stored memory patterns, of the network. In the case of CTLNs, we have been able to prove a variety of results, such as graph rules, about the fixed point supports \(\mathrm{FP}(G)\), yielding valuable insights into the attractor dynamics. Many of these results can be extended beyond CTLNs to more general families of TLNs, and potentially to other threshold nonlinearities. The reason lies in the combinatorial geometry of the hyperplane arrangements. In addition to the arrangements discussed in Section 2, there are closely related hyperplane arrangements given by the _nullclines_ of TLNs, defined by \(dx_{i}/dt=0\) for each \(i\). It is easy to see that fixed points correspond to intersections of nullclines, and thus the elements of \(\mathrm{FP}(W,b)\) are completely determined by the combinatorial geometry of the nullcline arrangement. Intuitively, the combinatorial geometry of such an arrangement is preserved under small perturbations of \(W\) and \(b\). This allows us to extend CTLN results and study how \(\mathrm{FP}(W,b)\) changes as we vary the TLN parameters \(W_{ij}\) and \(b_{i}\). These ideas, including connections to oriented matroids, were further developed in [13]. In addition to gluing rules, we have also studied graphs with simply-embedded covers and related structures in order to predict the sequential attractors of a network [13]. This has led us to introduce the notions of _directional graphs_ and _directional covers_, allowing us to generalize cyclic unions and DAGs. In particular, we were able to prove various _nerve theorems_ for CTLNs, wherein the dynamics of a network with a directional cover can be described via the dynamics of a reduced network defined on the nerve [14].
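As a complement to these theoretical directions, the dynamics discussed throughout the paper are easy to explore numerically. Below is a bare-bones forward-Euler sketch of ours for the TLN equations with the \([\cdot]_{+}\) nonlinearity; the step size and time horizon are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def simulate_tln(W, b, x0, T=50.0, dt=0.01):
    # Forward-Euler integration of dx/dt = -x + [W x + b]_+ .
    steps = int(T / dt)
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, x.size))
    for t in range(steps):
        x = x + dt * (-x + np.maximum(W @ x + b, 0.0))
        traj[t] = x
    return traj

# e.g., with the ctln_weights helper from the earlier sketch, the 3-cycle CTLN
# typically settles into its limit cycle attractor:
#   traj = simulate_tln(ctln_weights(A), b=1.0, x0=np.random.rand(3))
```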
Finally, although the theory of TLNs and CTLNs has progressed significantly in recent years, many open questions remain. We end with a partial list.

### Open Questions

We group our open questions into four categories. The first category concerns the bifurcation theory of TLNs, focusing on changes in \(\mathrm{FP}(W,b)\) as one varies \(W\) or \(b\): 1. Recall the definition, in equation (4), of \(\mathrm{FP}(W,b)\) for an arbitrary TLN \((W,b)\). How does the set of fixed point supports change as we vary \(W\) or \(b\)? What are the possible bifurcations? For example, what pairs of supports, \(\{\sigma,\tau\}\), can disappear or co-appear at the same time? This first question is very general. The next two questions focus on special cases where partial progress has already been made. 2. If we look beyond CTLNs, but constrain the \(W\) matrix to respect a given architecture \(G\), how does this constrain the possibilities for \(\mathrm{FP}(W,b)\)? In the case of constant \(b_{i}=\theta\) across neurons, we have identified _robust motifs_, graphs for which \(\mathrm{FP}(W,b)\) is invariant across all compatible choices of \(W\) [13]. What graphs allow only a few possibilities for \(\mathrm{FP}(W,b)\)? What are the most flexible graphs for which \(\text{FP}(W,b)\) can vary the most? 3. What happens if we fix \(W\) and vary \(b\in\mathbb{R}^{n}\)? What features of the connectivity matrix \(W\) control the repertoire of possible fixed point regimes, \(\text{FP}(W,b)\)? What \(W\) matrices allow a _core motif region_, for which \(\text{FP}(W,b)=\{[n]\}\)? And how do the dynamic attractors of a network change as we transition between different regions in \(b\)-space? The second category concerns the relationship between TLNs and the geometry of the associated hyperplane arrangements: 4. To what extent does the hyperplane arrangement of a TLN, as described in Section 2, determine its dynamics? What are all the \((W,b)\) choices that have the same hyperplane arrangement? Same nullcline arrangement? 5. What happens if we change the nonlinearity in equation (1) from \(\varphi(y)=[y]_{+}\) to a sigmoid function, a threshold power-law nonlinearity [14], or something else? Can we adapt the proofs and obtain similar results for \(\text{FP}(W,b)\) and \(\text{FP}(G)\) in these cases? Note that the combinatorial geometry approach in [13] suggests that the results should not depend too heavily on the details of the nonlinearity. Instead, it is the resulting arrangement of nullclines that is essential for determining the fixed points. The third category concerns graph rules, core motifs, and the corresponding attractors: 6. What other graph rules or gluing rules follow from the elementary graph rules? We believe our current list is far from exhaustive. 7. Classify all core motifs for CTLNs. We already have a classification for graphs up to size \(n=5\) [12], but beyond this little is known. Note that gluing rules allow us to construct infinite families of core motifs from gluing together smaller component cores (see Section 6.3). Are there other families of core motifs that cannot be obtained via gluing rules? What can we say about the corresponding attractors? 8. Computational evidence suggests a strong correspondence between core motifs and the attractors of a network, at least in the case of small CTLNs [12]. Can we make this correspondence precise? Under what conditions does the correspondence between surviving core fixed points and attractors hold?
9. How does symmetry affect the attractors of a network? The automorphism group of a graph \(G\) naturally acts on an associated CTLN by permuting the variables, \(\{x_{i}\}\). This translates to symmetries of the defining vector field (2), and a group action on the set of attractors. The automorphism group can either fix attractors or permute them. Moreover, a network may also have "surprise symmetry," as in Figure 9, where the attractors display additional symmetry that was not present in the original graph \(G\). How do we make sense of these various phenomena? Finally, the fourth category collects various conjectures about dynamic behaviors that we have observed in simulations. 10. In [12, 13] we conjectured that all stable fixed points of a CTLN correspond to _target-free_ cliques. While [13] provides proofs of this conjecture in special cases, the general question remains open. 11. The Gaudi attractor from Figure 8 appears to have constant total population activity. In other words, \(\sum_{i=1}^{5}x_{i}(t)\) appears to be constant in numerical experiments, once the trajectory has converged to the attractor. Can we prove this? For what other (non-static) TLN/CTLN attractors is the total population activity conserved? 12. Prove that the "baby chaos" network in Figure 11D-F is chaotic. I.e., prove that the individual attractors are chaotic (or strange), in the same sense as the Lorenz or Rössler attractors. 13. A _proper source_ of a graph \(G\) is a source node \(j\) that has at least one outgoing edge, \(j\to k\) for \(k\neq j\). In numerical experiments, we have observed that proper sources of CTLNs always seem to "die"; that is, their activity \(x_{j}(t)\) tends to zero as \(t\to\infty\), regardless of initial conditions. Can we prove this? Some progress on this question was made in [10], but the general conjecture remains open. Note that although the sources rule, Rule 4(i), guarantees that proper sources do not appear in any fixed point support of \(\text{FP}(G)\), this alone does not imply that the activity at such nodes converges to zero. 14. In our classification of attractors for small CTLNs, we observed that if two CTLNs with distinct graphs have the "same" attractor, as in Figure 14, then this attractor is preserved for the entire family of TLNs whose \(W\) matrices linearly interpolate between the two CTLNs (and have the same constant \(b_{i}=\theta\) for all \(i\)). In other words, the attractor persists for all TLNs \((W_{t},\theta)\) with \(W_{t}=(1-t)W_{0}+tW_{1}\) and \(t\in[0,1]\), where \(W_{0}\) and \(W_{1}\) are the two CTLN connectivity matrices. (Note that the interpolating networks \(W_{t}\) for \(t\in(0,1)\) are _not_ CTLNs.) Can we prove this? More generally, we conjecture that if the same attractor is present for a set of TLNs \((W_{1},b),\ldots,(W_{m},b)\), then it is present for all TLNs \((W,b)\) with \(W\) in the convex hull of the \(W_{i}\) matrices.

## Acknowledgments

We would like to thank Zelong Li, Nicole Sanderson, and Juliana Londono Alvarez for a careful reading of the manuscript. We also thank Caitlyn Parmelee, Caitlin Lienkaemper, Safaan Sadiq, Anda Degeratu, Vladimir Itskov, Christopher Langdon, Jesse Geneson, Daniela Egas Santander, Stefania Ebli, Alice Patania, Joshua Paik, Samantha Moore, Devon Olds, and Joaquin Castaneda for many useful discussions. The first author was supported by NIH R01 EB022862, NIH R01 NS120581, NSF DMS-1951165, and a Simons Fellowship. The second author was supported by NIH R01 EB022862 and NSF DMS-1951599.
2303.06746
DNN-Alias: Deep Neural Network Protection Against Side-Channel Attacks via Layer Balancing
Extracting the architecture of layers of a given deep neural network (DNN) through hardware-based side channels allows adversaries to steal its intellectual property and even launch powerful adversarial attacks on the target system. In this work, we propose DNN-Alias, an obfuscation method for DNNs that forces all the layers in a given network to have similar execution traces, preventing attack models from differentiating between the layers. Towards this, DNN-Alias performs various layer-obfuscation operations, e.g., layer branching, layer deepening, etc, to alter the run-time traces while maintaining the functionality. DNN-Alias deploys an evolutionary algorithm to find the best combination of obfuscation operations in terms of maximizing the security level while maintaining a user-provided latency overhead budget. We demonstrate the effectiveness of our DNN-Alias technique by obfuscating the architecture of 700 randomly generated and obfuscated DNNs running on multiple Nvidia RTX 2080 TI GPU-based machines. Our experiments show that state-of-the-art side-channel architecture stealing attacks cannot extract the original DNN accurately. Moreover, we obfuscate the architecture of various DNNs, such as the VGG-11, VGG-13, ResNet-20, and ResNet-32 networks. Training the DNNs using the standard CIFAR10 dataset, we show that our DNN-Alias maintains the functionality of the original DNNs by preserving the original inference accuracy. Further, the experiments highlight that adversarial attack on obfuscated DNNs is unsuccessful.
Mahya Morid Ahmadi, Lilas Alrahis, Ozgur Sinanoglu, Muhammad Shafique
2023-03-12T20:43:38Z
http://arxiv.org/abs/2303.06746v1
# DNN-Alias: Deep Neural Network Protection Against Side-Channel Attacks via Layer Balancing

###### Abstract

Extracting the architecture of layers of a given deep neural network (DNN) through hardware-based side channels allows adversaries to steal its intellectual property and even launch powerful adversarial attacks on the target system. In this work, we propose _DNN-Alias_, an obfuscation method for DNNs that forces all the layers in a given network to have similar execution traces, preventing attack models from differentiating between the layers. Towards this, DNN-Alias performs various layer-obfuscation operations, e.g., layer branching, layer deepening, etc., to alter the run-time traces while maintaining the functionality. DNN-Alias deploys an evolutionary algorithm to find the best combination of obfuscation operations in terms of maximizing the security level while maintaining a user-provided latency overhead budget. We demonstrate the effectiveness of our DNN-Alias technique by obfuscating the architecture of 700 randomly generated and obfuscated DNNs running on multiple Nvidia RTX 2080 TI GPU-based machines. Our experiments show that state-of-the-art side-channel architecture stealing attacks cannot extract the original DNN accurately. Moreover, we obfuscate the architecture of various DNNs, such as the VGG-11, VGG-13, ResNet-20, and ResNet-32 networks. Training the DNNs using the standard CIFAR10 dataset, we show that our DNN-Alias maintains the functionality of the original DNNs by preserving the original inference accuracy. Further, the experiments highlight that adversarial attacks on obfuscated DNNs are unsuccessful.

## I Introduction

Deep neural networks (DNNs) have experienced rapid advancements in the past decade, leading to their application to many areas of human endeavor and various fields of science [1, 2]. The deployment of DNNs in mission-critical applications, such as healthcare systems [3] and anomaly detection in cyber-physical systems [4], raises concerns about the safety and security of these networks. Moreover, building and training DNNs require expert knowledge and costly resources. Thus, DNNs for a certain application are considered sensitive and expensive intellectual property (IP) that require protection from malicious users and/or market competitors.

To facilitate the application of DNNs, model developers offer machine learning as a service (MLaaS), which includes machine learning (ML) models and services running on the cloud or edge devices [5]. In the MLaaS business model, the end-user receives input/output access to the DNN, i.e., _black-box access_. An untrusted user would want to extract the IP of the underlying DNN, for launching _white-box adversarial attacks_ on the target system and/or for stealing the IP without incurring high research and development costs.1 Note, it is difficult to launch such attacks with only black-box access to the system, e.g., it takes 30 GPU days to launch an adversarial attack on Lenet+ (7 layers) using input/output queries [9].

Footnote 1: An adversarial attack performs subtle perturbations to the input samples of an ML model, causing the model to predict incorrect outputs [6, 7]. A white-box attack model assumes access to the inputs, architecture, and internal details, e.g., weights. Conversely, in the black-box model, the attacker lacks access to these details. A gray-box attack trains a substitute model to generate adversarial samples and attack the target model [8].
Additional information about the target DNN could be extracted through _hardware-based side-channels_, such as the DNN architecture (i.e., the number, type, dimension, and connectivity of layers), enabling advanced adversarial attacks. For example, _DeepSniffer_ [10] is a DNN side-channel-based architecture stealing (SCAS) attack that learns the correlation between the architecture hints (such as volumes of memory writes/reads) and the DNN architecture.2 DeepSniffer showed that including the architecture information increases the success rate of an adversarial attack by \(\approx 3\times\). Fig. 1 shows an example of such a SCAS attack flow.

Footnote 2: SCAS attacks can be physical (edge) or remote (cloud) [11].

Fig. 1: Hardware-based side-channel leakage facilitates DNN architecture stealing, leading to white-/grey-box adversarial attacks.

To demonstrate the potent nature of such SCAS attacks, we plot the run-time traces of the AlexNet DNN trained on the CIFAR10 dataset in terms of computation latency (cycles) and memory access time (read and write) per layer in Fig. 2. It can be observed that each type of layer has a unique execution signature. As a result, SCAS attacks learn the execution pattern of the layer functions and extract the layer sequence. Researchers have studied SCAS attacks based on various side-channels, e.g., power [12]. We focus on timing (execution time and memory access) side-channels, which are more stealthy, as they exploit an operational part of the system (system profilers) for measurement and do not require extra equipment.

Researchers have developed several protection techniques to thwart SCAS attacks. Some _hiding_ techniques decrease the signal-to-noise ratio in side-channel traces, e.g., via dummy memory operations [13]. Other methods necessitate hardware modifications to encrypt memory and other side-channel leakage [14]. More recently, DNN obfuscation has been proposed to alter the run-time traces of a given DNN while preserving its functionality, thwarting SCAS attacks [15, 16]. Nevertheless, these methods suffer from at least one of the drawbacks discussed next and summarized in Table I.

### _State-of-the-Art (SOTA) and their Limitations_

**Overhead:** Oblivious random access memory (ORAM) schemes encrypt and shuffle the memory read/writes, reducing memory leakage [18]. However, ORAM-protected designs suffer from high overhead, e.g., \(\approx 10\times\) latency cost [20].

**Ineffectiveness:** Introducing noise to the execution traces thwarts statistical SCAS attacks, but fails to mitigate ML-based SCAS attacks.3 Such ML-based attacks are trained to be resilient to noisy data. For example, the success rate of the ML-based _DeepSniffer_ [10] attack remains the same even with 30% amplitude noise.

Footnote 3: In the first, the attacker applies statistical methods, i.e., correlation analysis, to distinguish the correct secret value among the hypotheses [12], while in the second, the attacker trains an ML model to classify the traces.

**Limited and Hardware-Specific Security:** Current DNN obfuscation methods offer limited security. It has been shown that advanced attacks, such as _NeuroUnlock_ [17], can learn the obfuscation procedure and automatically revert it, thereby recovering the original DNN architecture. Further, existing obfuscation techniques, such as _NeurObfuscator_ [15], depend on extensive hardware profiling. This profiling step makes the defense mechanism dependent on the underlying hardware.
### _Key Research Challenges Targeted in this Work_

The above discussion shows that there is still a gap in designing secure DNN architectures. _Developing an efficient and cost-effective defense mechanism imposes the following important research questions and challenges._

1. **What makes SCAS attacks successful?** To design secure DNNs, we need to identify the conditions that make the SCAS attacks successful and eliminate them during the development stage of the DNN.

2. **Generic security metric.** Defense mechanisms that focus solely on reducing an attack-specific metric, such as the layer error rate (LER),4 cannot mitigate further attack vectors. Thus, devising a generic security metric is required to evaluate the security of DNNs at design time.

Footnote 4: The edit distance between the extracted and original layer sequences.

3. **Performance overhead.** A generic and adaptive defense mechanism is required, which can be tailored per target DNN, hardware implementation, and overhead.

### _Our Novel Concept and Contributions_

To address the above challenges, we propose _DNN-Alias_, a DNN obfuscation methodology to protect the architecture of DNNs against static and ML-based SCAS attacks. We argue that SCAS attacks are successful because each layer has a unique run-time trace signature (see Fig. 2). Therefore, we demonstrate that if the signatures of the different layers overlap, it will be difficult for any SCAS attack to differentiate between them. Deterministic DNN obfuscation alters the run-time traces of the layers. However, it does not guarantee overlapping signatures. Our proposed DNN-Alias employs, for the first time, a generic security metric, which measures the overlap between layer signatures by computing the standard deviation. DNN-Alias performs various DNN obfuscation operations to minimize the standard deviation, resulting in a more secure DNN architecture. Note that the security objective does not depend on any specific attack output but rather on the features of the DNN itself. Furthermore, DNN-Alias does not profile the underlying hardware. Hence, DNN-Alias is a generic defense methodology that is applicable to any DNN running on any hardware. Our novel contributions are summarized in Fig. 3 and discussed below.

1. **General solution to measure the diversity of DNN layers (Sec. III-C):** DNN-Alias defines a novel security metric based on the distribution of run-time parameters to measure the overlap between layer signatures. Specifically, DNN-Alias calculates the standard deviation of features in the run-time trace and reduces it. By utilizing this metric, DNN-Alias becomes independent of the outcome of any particular attack.

2. **Balancing the execution trace (Sec. III-B):** DNN-Alias presents a novel DNN obfuscation technique that performs layer balancing, limiting information leakage.

3. **Efficient DNN obfuscation (Sec. III-D):** DNN-Alias employs a genetic algorithm to find an effective combination of obfuscation operations. The reward function of this algorithm minimizes the standard deviation in the run-time trace while maintaining the cost budget.

Fig. 2: The run-time trace of AlexNet. The unique pattern of the memory access bytes and computational cycles for each kernel allows SCAS attacks to learn these features and extract the target DNN architecture successfully.

Fig. 3: Our contributions presented in this work are shown in the blue box.
**Key Results:** We have comprehensively evaluated the efficacy of DNN-Alias with a broad set of random and standard DNNs on image classification, running on the Nvidia RTX 2080 TI GPU. We demonstrate that DNN-Alias increases the LER (between the original and obfuscated DNN) by \(2.5\times\) compared to SOTA techniques, resulting in higher obfuscation. We evaluate the security of DNN-Alias by launching (i) an ML-based SCAS attack and (ii) NeuroUnlock on the obfuscated DNNs. We measure the difference between the architecture of the original DNN and the recovered DNN by the attacks. The LER obtained on the DNN-Alias networks is 2 on average (an LER higher than 1 is considered secure). Further, NeuroUnlock fails to de-obfuscate the networks, reporting an LER of 1.1 on DNN-Alias networks. Further, DNN-Alias preserves the training accuracy while protecting the DNN against gray-box adversarial attacks.

## II Background

In this section, we provide the necessary background required to protect DNNs against SCAS attacks.

### _Side-channel-based Architecture Stealing (SCAS) Attacks_

Adversaries can compromise the security of a DNN system by uncovering its confidential model components, such as its architecture and parameters. Extracting an exact copy of the DNN is challenging in a black-box setup, where access to the victim model is limited [21]. However, physical access to the DNN's hardware platform can lead to the exposure of confidential information through side-channel attacks, such as power analysis or timing analysis [22] (see step 1 in Fig. 4). Through a SCAS attack, the adversary builds a substitute model using the extracted information and trains it by querying the victim DNN or using a publicly available labeled dataset (see steps 2 and 3 in Fig. 4). The attacker can then use the substitute model to launch adversarial attacks against the original DNN system (see step 4 in Fig. 4).

In this study, we focus on attacks that exploit memory, cache, and timing-based information leaks to reveal the architecture of DNNs running on GPU devices. For example, the DeepSniffer attack [10], depicted in Fig. 5, employs a long short-term memory (LSTM) model [24] to deduce the layer arrangement of a targeted DNN based on its run-time trace. The run-time trace is a time-series collection of various characteristics, such as execution time and dynamic random access memory (DRAM) access time. The training process for the LSTM model involves profiling randomly generated DNNs on the target GPU. The layer sequence of each generated DNN is encoded as a vector and its run-time trace is extracted. Subsequently, a run-time profile dataset is constructed based on kernel-aware architectural hints and used to train the LSTM. The attacker utilizes this LSTM as a predictor model to detect the layer sequence of the DNN from run-time traces. Once the attacker obtains the layer sequence from the predictions and constructs the model, the dimensions of each layer are determined based on the predicted operation and its position in the time series, and the final model is extracted.

To evaluate the accuracy of the SCAS attack, the LER metric has been adopted in the literature to measure the difference between the original and extracted DNN layer sequence, which we also use in our analysis. LER is calculated as follows:

\[LER=\frac{ED(L,L^{*})}{|L^{*}|} \tag{1}\]

where \(L\) represents the predicted layer sequence, \(L^{*}\) represents the ground truth, \(|.|\) denotes the length of a sequence, and \(ED(p,q)\) denotes the edit distance between the \(p\) and \(q\) sequences, i.e., the minimum number of insertions, substitutions, and deletions required to change \(p\) into \(q\) (also referred to as the _Levenshtein_ distance [25]).
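As a concrete illustration, the following minimal Python sketch computes the LER of Eq. (1) from two layer sequences (function names are illustrative, not from any released code):

```python
def edit_distance(p, q):
    """Levenshtein distance: minimum insertions, deletions, and
    substitutions needed to turn sequence p into sequence q."""
    dp = list(range(len(q) + 1))
    for i, a in enumerate(p, 1):
        prev, dp[0] = dp[0], i
        for j, b in enumerate(q, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (a != b))  # substitution
    return dp[-1]

def layer_error_rate(predicted, ground_truth):
    """LER = ED(L, L*) / |L*|  (Eq. 1)."""
    return edit_distance(predicted, ground_truth) / len(ground_truth)

# Example: two layers missing from a 5-layer ground truth -> LER = 0.4.
print(layer_error_rate(["conv", "relu", "fc"],
                       ["conv", "relu", "pool", "fc", "softmax"]))
```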
Fig. 4: Side-channel-based architecture stealing (SCAS) attack.

Fig. 5: Flow of the DeepSniffer attack [23].

## III Our Proposed Defense Mechanism: DNN-Alias

We propose a DNN obfuscation method, DNN-Alias, to thwart SCAS attacks. In this section, we explain the steps of DNN-Alias in detail and summarize them in Fig. 6. Further, we discuss the attack model and its assumptions in Sec. III-A. DNN-Alias takes an unprotected DNN as input and modifies its architecture by applying layer obfuscation techniques (Sec. III-B). The goal is to reduce the diversity in the run-time behavior of layers in the given DNN. To measure the diversity of the DNN's run-time behavior, DNN-Alias automatically analyzes its profile during one inference execution (Sec. III-C). DNN-Alias employs a genetic algorithm to guide the obfuscation and balance the DNN layers, taking into account the overhead budget (Sec. III-D).

### _Threat Model and Assumptions_

Consistent with most recent related works [26], we assume that the adversary has no prior knowledge of the victim DNN architecture, parameters, training algorithms, or hyper-parameters. We focus on edge security, in which the attacker has (i) system privilege access to the GPU platform encapsulating the victim DNN, (ii) the inputs and outputs (labels) of the DNN, and (iii) a publicly available training dataset. _We show that even with such a powerful threat model, attackers cannot steal the architecture of the DNNs obfuscated via our proposed DNN-Alias._

### _DNN Layer Sequence Obfuscation Operations_

DNN-Alias uses function-preserving obfuscation operations to protect the architecture of DNNs. DNN-Alias carefully applies one (or more) operations to each layer of the original model, making the target DNN more difficult to reverse engineer or attack (Step 1 in Fig. 6). Please note that DNN obfuscation operations have been used before to protect the DNN architecture [27, 15]. DNN-Alias applies the same obfuscation knobs to change the run-time profile of each layer function and hide the architecture of the original DNN. However, existing solutions either randomly obfuscate the layers or focus on a specific SCAS attack, resulting in weak protection. DNN-Alias guides the obfuscation differently, thwarting all SCAS attacks. Next, we explain the obfuscation operations used by DNN-Alias.

Let the matrix \(\mathbf{W}_{k_{1},k_{2},c,j}^{(i)}\) represent the \(i^{th}\) convolutional layer to be modified. \(k_{1}\) and \(k_{2}\) represent the height and width of the convolution kernel, respectively, while \(c\) and \(j\) denote the input and output channel size, respectively. \(\mathbf{X}^{(i)}\) and \(\sigma(\cdot)\) denote the input of the layer and the activation function (e.g., _ReLU_), respectively. Fig. 7 illustrates the original operator.

**Layer Branching.** This operation divides a single layer operator into smaller, partial operators, as demonstrated in Fig. 7. For example, a 2-D convolution layer (_Conv2D_) \(\mathbf{W}_{k_{1},k_{2},c,j}^{(i)}\) can be separated into two partial convolutions, as follows.
\[\begin{split}&\mathbf{U}_{k_{1},k_{2},c,j/2}^{(i)}=\mathbf{W}_{k_{1},k_{2},c,m}^{(i)}\quad m\in\left[0,\lfloor\frac{j}{2}\rfloor\right),\\ &\mathbf{V}_{k_{1},k_{2},c,j/2}^{(i)}=\mathbf{W}_{k_{1},k_{2},c,m}^{(i)}\quad m\in\left[\lfloor\frac{j}{2}\rfloor,j\right)\end{split} \tag{2}\]

The final output is obtained by combining the two partial results.

\[\mathbf{U}_{k_{1},k_{2},c,j/2}^{(i)}*\mathbf{X}^{(i)}\,||\,\mathbf{V}_{k_{1},k_{2},c,j/2}^{(i)}*\mathbf{X}^{(i)} \tag{3}\]

The splitting can also be performed in the input channel dimension. The final output in this case is the addition of the two, as follows, where \(\mathbf{X}^{(i)}\) is sliced into \(\mathbf{A}^{(i)}\) and \(\mathbf{B}^{(i)}\).

\[\mathbf{U}_{k_{1},k_{2},c/2,j}^{(i)}*\mathbf{A}^{(i)}+\mathbf{V}_{k_{1},k_{2},c/2,j}^{(i)}*\mathbf{B}^{(i)} \tag{4}\]

**Layer Skipping.** An additional Conv2D layer \(\mathbf{U}_{k_{1},k_{2},j,j}^{(i+1)}\) with all its parameters set to \(0\) is inserted to retain the original functionality. An illustration of this is shown in Fig. 7. The Conv2D layer can be expressed as follows, with \(\sigma\left(\mathbf{X}^{(i+1)}\right)\) representing the activation output of the \(i^{th}\) original layer.

\[\sigma\left(\mathbf{X}^{(i+1)}\right)+\sigma\left(\mathbf{U}^{(i+1)}*\mathbf{X}^{(i+1)}\right)=\sigma\left(\mathbf{X}^{(i+1)}\right) \tag{5}\]

**Layer Deepening.** This operation adds a new computational layer to the sequence. The new layer is inserted after the activation of the current layer and before the _batch normalization_ (BN) step, as shown in Fig. 7. If the previous layer is linear, the newly added layer \(\mathbf{U}^{(i+1)}\) is initialized as an identity matrix \(\mathbf{I}\) to preserve the function of the model. Otherwise, \(\mathbf{U}_{k_{1},k_{2},j,j}^{(i+1)}\) can be generalized as:

\[\mathbf{U}_{a,b,c,d}^{(i+1)}=\left\{\begin{array}{cc}1&a=\frac{k_{1}+1}{2}\wedge b=\frac{k_{2}+1}{2}\wedge c=d\\ 0&\mathrm{otherwise}\end{array}\right. \tag{6}\]

Layer deepening is effective as long as the activation function satisfies the following condition, like the ReLU function.

\[\forall x:\sigma(x)=\sigma\left(\mathbf{I}*\sigma(x)\right) \tag{7}\]

Post-obfuscation, the computation graph is extracted and fed to the TVM\({}^{\text{\textregistered}}\) compiler [28]. The compiler performs optimizations at the graph and operator levels, generating low-level optimized code for GPU execution.5

Footnote 5: DNN obfuscation can be categorized into sequence and dimension obfuscation. We focus on sequence obfuscation since the sequence identification stage is the most fundamental step in SCAS attacks.

Fig. 7: Visualization of the employed obfuscation operations [27, 15].
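As a concrete check of Eqs. (2) and (3), the following minimal PyTorch sketch (an illustration of ours, not the DNN-Alias implementation) splits a Conv2D along the output-channel dimension into two partial convolutions and verifies that concatenating their outputs reproduces the original operator:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3, padding=1)

# Partial convolutions U and V (Eq. 2): each keeps half of the output
# channels of W, together with the corresponding slice of the bias.
u = nn.Conv2d(8, 8, kernel_size=3, padding=1)
v = nn.Conv2d(8, 8, kernel_size=3, padding=1)
with torch.no_grad():
    u.weight.copy_(conv.weight[:8]); u.bias.copy_(conv.bias[:8])
    v.weight.copy_(conv.weight[8:]); v.bias.copy_(conv.bias[8:])

# Eq. (3): concatenating the two partial results along the channel axis
# yields the same output as the original, unbranched layer.
x = torch.randn(1, 8, 32, 32)
assert torch.allclose(conv(x), torch.cat([u(x), v(x)], dim=1), atol=1e-6)
```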
### _Measuring the difference of the layers_

DNN-Alias guides the obfuscation algorithm to assemble a configuration of obfuscation knobs on a given DNN (Sec. III-D). For each obfuscated DNN, DNN-Alias analyzes its run-time execution trace in inference mode and computes the difference of the layers in the trace using a generic measurement technique (see Fig. 6). In this technique, the run-time traces are collected and analyzed online, and the target hardware does not need to be profiled in advance. DNN-Alias measures the difference of the layers in a run-time trace using the standard deviation (\(St.D\)) of values in a kernel for each feature in the run-time profile of the DNN.6

\[St.D=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(x_{i}-\overline{x})^{2}} \tag{8}\]

where \(N\) is the number of kernels, \(x_{i}\) is the value in the run-time trace, and \(\overline{x}\) is the mean of the values.

Footnote 6: A standard deviation is a measure of how dispersed the data is in relation to the mean. A low standard deviation means data are clustered around the mean, and a high standard deviation indicates that data are more spread out.

First, the original DNN is configured by incorporating the obfuscation operations. Then, the obfuscated DNN is executed on the hardware platform and the \(St.D\) of the above features is calculated. DNN-Alias uses a genetic algorithm to find the best configuration of the obfuscation operations, by decreasing the \(St.D\) value so that the run-time traces for all layers are similar. Decreasing the \(St.D\) (difference) enables layer balancing and hides the function of the kernel from the attacker's predictors.

In Fig. 8, we present an example of DNN-Alias obfuscation by comparing the execution trace of the unprotected (black) and obfuscated (blue) ResNet-20. The execution of each layer in the network involves reading weights and inputs from memory through a kernel process, executing the layer's function, and writing the output back to memory. The combination of these three values for each trace per kernel creates a unique pattern, which reveals information to SCAS attacks and allows them to predict the function of the layer. To counter this, DNN-Alias reduces the difference between the traces and forces all layers to have similar run-time traces.

Fig. 6: Proposed DNN-Alias methodology for DNN architecture protection.

```
1: procedure Obfuscation(DNN, budget, size_p, generations)
2:   population <- generate size_p variations of DNN
3:   for generation_index in 0...generations do
4:     list_rewards <- empty list
5:     for population_element in population do
6:       profile population_element; append its fitness score (Eq. 10) to list_rewards
7:     end for
8:     parents <- top half of population, ranked by list_rewards
9:     offspring <- 1-point crossover of parents, then Gaussian mutation; population <- parents U offspring
10:  end for
11:  return the element of population with the best fitness score
12: end procedure
```

**Random DNN obfuscation.** A random selection process is utilized to determine the insertion and configuration of obfuscation operations within each layer of the specified DNN. For each obfuscation operation, a binary random decision is made with a \(50\%\) likelihood to determine if the operation will be utilized or disregarded. As a result, a layer may incur 1 or 2 obfuscation operations (each with \(37.5\%\) probability), or 3 or none (each with \(12.5\%\) probability).

**Fitness function.** Then, we evaluate each member of the population using the fitness score, which is determined by the \(St.D\) and a scaled value of the cost budget, as defined below.

\[Fitness=\sum_{i=1}^{N}St.D_{i}(S)\cdot\left[\left(\frac{T-(1+B)T^{*}}{T^{*}}\right)^{2}\right] \tag{10}\]

**Crossover and mutation.** For the mating process, we rank the population based on their fitness scores and select the top half of individuals with the best scores as parents, adding them to the next candidate pool. The parents are also brought to the crossover process, where they combine to form an equal number of offspring that are added to the pool of candidates. The offspring are built from the crossover of the parents' obfuscation lists, then mutated by adding Gaussian noise. These methods are known as 1-point crossover and Gaussian mutation in the literature. As shown in Fig. 9, the trend of the fitness score gradually decreases with each pool, and the best candidates are carried forward to the next generation.

**Final solution.** The mutation process continues until the fitness score converges and stabilizes, which we found to occur after 20 generations in our case.
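A minimal NumPy sketch of Eqs. (8) and (10), assuming \(T\) denotes the obfuscated latency, \(T^{*}\) the original latency, and \(B\) the overhead budget (these readings, and the trace layout, are assumptions for illustration only):

```python
import numpy as np

def security_metric(trace):
    """Sum over features of the St.D across kernels (Eq. 8). `trace` has
    shape (num_kernels, num_features), e.g. one column each for memory
    reads, memory writes, and compute cycles."""
    return np.std(trace, axis=0, ddof=1).sum()

def fitness(trace, T, T_star, B):
    """Eq. (10): the score shrinks as the layers become balanced (low
    St.D) and as the latency T approaches the budget (1 + B) * T_star."""
    return security_metric(trace) * ((T - (1 + B) * T_star) / T_star) ** 2

rng = np.random.default_rng(0)
trace = rng.uniform(100, 200, size=(20, 3))   # 20 kernels, 3 features
print(fitness(trace, T=1.3, T_star=1.0, B=0.2))
```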
## IV Experimental Setup

In this section, we present our experimental setup for evaluating the effectiveness and security of DNN-Alias.

**Hardware.** For our experiments, we utilized the Nvidia RTX 2080 Ti GPU as our experimental platform. However, our proposed method is generic and applicable to other GPUs and hardware platforms. We employ Nsight® Compute [29] for profiling the GPU and launching the ML-based SCAS attack, which requires privileged access to the performance counters. A dataset of randomly generated DNNs was created to profile them on the GPU and train the attack model predictors. Next, we explain how we create the dataset of random DNNs.

**Obfuscation Algorithm.** We implement DNN-Alias using the _PyTorch_ deep learning framework [30]. The DNNs (original and obfuscated) are described as Python model files. The genetic algorithm is implemented using _pymoo_ [31].

**Random DNN Creation.** To train the predictors of the SCAS attack, we use the following method for generating 5,000 different DNNs for image classification, using the CIFAR-10 dataset as a reference. The number of Conv2D layers in each DNN is randomly selected from the range [4, 12], and the number of fully connected (FC) layers is randomly selected from the range [1, 4]. The output channel sizes of Conv2D layers and the dimensions of FC layers are also randomly chosen from a range of predefined values. Some Conv2D layers are randomly replaced with blocks from the ResNet and MobileNet networks, and some Conv2D layers are changed to pooling layers. Batch normalization (BN) layers are added after each Conv2D and FC layer. All DNNs have 3 input channels, a width and height of 32, and 10 output classes. Using this technique, we also generate 700 random DNNs to analyze the performance of DNN-Alias in obfuscating random DNNs.

**SCAS-based Adversarial Attack.** In an adversarial attack scenario, the attacker manipulates the output of a DNN by adding subtle, almost imperceptible alterations to the input images. The objective of the attack is to find the smallest possible changes in the input that can cause the DNN to produce incorrect output, either arbitrarily (in the case of an untargeted attack) or as pre-determined (in the case of a targeted attack). To launch an adversarial attack on a DNN that operates as a gray box, the attacker often develops a substitute model by examining the input and output of the victim DNN. With the help of the SCAS attack, the adversary has access to the details of the target DNN with high accuracy to build the substitute model. Then, adversarial samples are created using the white-box substitution technique. Finally, these adversarial samples are utilized to disrupt the workings of the target DNN. In summary, Fig. 10 depicts the transfer-based adversarial attack flow, which includes the following steps:

#### IV-1 Substitute Models

In this step, we train substitute models to closely mimic the target model's behavior. For black-box adversarial attacks, the substitute models are selected from publicly available DNN families. In SCAS-based adversarial attacks, the substitute model is obtained from run-time traces. We compare the success rate of adversarial attacks in both scenarios.

#### IV-2 Adversarial Sample Generation

The most advanced methods utilize an ensemble approach to increase the likelihood of a successful attack, based on the idea that if an adversarial image is able to fool multiple models, it is more likely to have a similar effect on the black-box model.
We follow the same procedure to produce adversarial images for our target DNNs.

Fig. 9: Fitness score trend of DNN-Alias on ResNet-20 with a population of 16. Here, we have 320 elements and 20 mutations.

Fig. 10: Adversarial attack flow on the extracted DNN architecture.

#### IV-3 Deployment of Adversarial Samples

We pass the generated adversarial examples as input data to launch an attack on the gray-box DNN.

## V Security Analysis and Overhead Results

In this section, we present the security evaluation and performance analysis of DNN-Alias compared to SOTA.

### _Effectiveness of DNN-Alias_

In Fig. 12, we show the obfuscation and SCAS attack evaluation procedure. The original DNN is first obfuscated by DNN-Alias (step 1). Next, the runtime traces are collected to extract the DNN architecture via an ML-based SCAS attack (step 2). By comparing the LER between the original and extracted DNNs, we can analyze the effectiveness of the obfuscation method (step 3). The best-case scenario for the SCAS attack is to obtain an LER close to \(0\).

#### V-A1 Effectiveness on Random DNNs

We obfuscate \(700\) randomly generated DNNs (explained in Sec. IV) using DNN-Alias, launch the SCAS attack, and measure the security in terms of the LER (Extracted_Obf, Original). These results are shown in Fig. 11. The cost budget considered in this experiment is set to \(0.2\). The minimum LER value observed in this experiment is \(0.3\) and the maximum is \(3.5\); the FORECAST calculation of the LER (which predicts a trend value using linear regression) yields \(1.2\). The average LER for the SOTA techniques is also shown in Fig. 11, where the LER for NeurObfuscator [15] is \(0.62\) and for ReDLock [17] is \(0.73\). Therefore, _DNN-Alias obfuscation is \(\approx 2\times\) more resilient against SCAS attacks compared to SOTA._

#### V-A2 Effectiveness on Publicly Available DNNs

Further, we analyzed the effectiveness of DNN-Alias on a set of real DNNs as a case study. The results are shown in Fig. 13. First, we launch the ML-based SCAS attack on the original DNNs, i.e., without any obfuscation. The red bar for LER (Extracted_org, Original) shows the ML-based SCAS predictor errors with an average of \(0.05\), indicating that the original DNNs are completely vulnerable to SCAS attacks. Our goal is to increase this LER value above \(1\). Next, each DNN was obfuscated using DNN-Alias for two cost budgets (\(0.2\) and \(0.6\)). We launch the same ML-based SCAS attack on the obfuscated DNNs. The blue bar LER (Extracted_obf, Original) represents the difference between the extracted DNN by the SCAS attack and the original DNN, which averages \(1.8\) for DNN-Alias. The increase of the LER from an average of \(0.05\) to \(1.8\) demonstrates that DNN-Alias is highly effective in protecting the DNN against SCAS.

#### V-A3 Obfuscation Overhead and Cost Budget

In all cases presented in Fig. 13, we can see that increasing the cost budget (from \(0.2\) to \(0.6\)) increases the LER (from an average of \(1.7\) to \(1.96\)), i.e., leads to stronger obfuscation. The gray bar LER (Obfuscated, Original) in Fig. 13 represents the difference between the original DNN and the obfuscated DNN. The LER, in this case, has an average of \(0.1\), demonstrating that _DNN-Alias effectively thwarts SCAS attacks through minor changes in the original DNN_, avoiding high overhead costs.

#### V-A4 NeuroUnlock Attack on DNN-Alias

We evaluated the SOTA NeuroUnlock attack [17] on DNN-Alias and NeurObfuscator [15].
NeuroUnlock attempts to reverse the obfuscation of the extracted DNN from the SCAS attack using sophisticated ML-based models. The green bar labeled "LER (Recovered, Original)" in Fig. 13 shows the difference between the original DNN and the DNN recovered by NeuroUnlock when DNN-Alias is in place. With an average LER of \(0.9\), the results indicate that NeuroUnlock failed to accurately recover the original DNN. In comparison, recovering the obfuscated DNN using NeuroUnlock with NeurObfuscator in place resulted in an average LER of \(0.31\). This suggests that _DNN-Alias is \(\approx 3\times\) more robust against de-obfuscation techniques than the current SOTA methods._

#### V-A5 Comparison with SOTA

We compare the effectiveness of DNN-Alias to NeurObfuscator [15] considering the target real DNNs. We obfuscate the DNNs with \(0.2\) and \(0.6\) latency budgets using DNN-Alias and NeurObfuscator (the code is open-sourced). We apply the SCAS attack on the obfuscated DNNs and compare each extracted DNN to the original DNN. The results in Fig. 14 show that the LER of DNNs obfuscated using DNN-Alias is \(2.5\times\) (on average) higher than with NeurObfuscator. _In summary, DNN-Alias is more effective in hiding the layer sequence of DNNs compared to SOTA_.

### _Performance Analysis_

In this section, we examine the effect of DNN-Alias on DNN training and the overhead involved in its design. Last but not least, we study the success rate of adversarial attacks against DNN-Alias networks.

Fig. 11: LER of random DNNs obfuscated by DNN-Alias and SOTA techniques.

Fig. 12: Measuring the accuracy of the SCAS attack when the DNN is protected by obfuscation methods.

#### V-B1 Training Performance

To assess the effect of DNN-Alias on the functionality of the DNN, we compare the validation accuracy of the original and obfuscated DNNs. Further, we train the DNNs recovered by the SCAS attack and by the NeuroUnlock attack to see if the high LER values map to a loss in DNN performance. Fig. 15 shows the validation accuracy of the VGG-11 DNN over \(30\) epochs of training on the CIFAR-10 dataset [32]. The results demonstrate that DNN-Alias (blue line) maintains the functionality of the DNN and does not impact the training performance. However, the DNN recovered by the SCAS attack (green line) simply does not converge, with a \(75\%\) drop in performance. Further, even when NeuroUnlock is launched after the SCAS attack to revert the obfuscation, the recovered model (black line) still shows a lower validation accuracy, with a drop of \(10\%\), and converges about \(8\) epochs later. Therefore, _DNN-Alias forces the SCAS attack to recover a DNN with worse performance compared to the original DNN_.

#### V-B2 Adversarial Attack

We launch the SCAS-based adversarial attacks (discussed in Sec. IV) on the VGG-11 DNN obfuscated by DNN-Alias as the target model. In this experiment, the adversarial samples are generated from the CIFAR10 dataset and the label output of VGG-11. We study this attack on the original (unprotected) DNN and then compare it with the obfuscated DNN. Also, we show the success rate of this attack on similar DNN families.

**1. Original Model:** To validate the adversarial attack implementation, we tested adversarial samples generated from the unprotected model extracted by SCAS. The results in Fig. 16 show a success rate of \(98\%\) because the unprotected model closely resembles the original model.
**2. Obfuscated and Recovered Models:** Next, we generate the adversarial samples using the DNN obfuscated by DNN-Alias and test the target model. The results in Fig. 16 show that the adversarial attack on DNN-Alias is unsuccessful (success rate \(0.2\%\)). Furthermore, we launch NeuroUnlock [17] after the SCAS attack and show in Fig. 16 that the success rate increases to \(51\%\) on average. Although NeuroUnlock enhances the performance of the adversarial attack, the attack is still ineffective due to the errors in the de-obfuscation process.

**3. Public DNN Families:** We report the success rate of the adversarial attack on the target DNN when the adversarial samples are generated using standard DNN families. The success rate for GoogleNet is \(14\%\), for Inception-V3 is \(48\%\), and for ResNet-34 is \(88\%\). Since the attacker does not know the architecture of the target DNN and thus cannot choose a similar DNN family, the results show that DNN-Alias successfully protects the DNN against SCAS-based adversarial attacks.

### _Overhead Analysis_

The results in Fig. 17 show that while DNN-Alias, on average, increases memory access time for both read (\(13\%\)) and write (\(40\%\)) operations, the computation latency decreases (\(25\%\)). Thus, memory access time presents the primary bottleneck for further increasing the obfuscation level. This observation opens up opportunities for future research on the optimization of obfuscation techniques through the use of efficient memory protocols.

Fig. 13: The LER for the DNN-Alias obfuscated DNNs.

Fig. 14: The LER for the DNN-Alias and NeurObfuscator obfuscated DNNs compared to the original model.

Fig. 15: Validation accuracy after training the models.

## VI Conclusion

In this paper, we present a novel obfuscation method called DNN-Alias to protect deep neural networks (DNNs) against side-channel attacks. Our proposed method forces all the layers in a DNN to have similar execution traces, making it difficult for attackers to differentiate between the layers and extract the architecture. DNN-Alias employs a genetic algorithm to find the best combination of layer obfuscation operations to maximize the security level while maintaining a user-specified latency overhead budget. The effectiveness of DNN-Alias is demonstrated through experiments on various randomly generated and publicly available DNNs. We show that DNN-Alias can successfully prevent state-of-the-art side-channel architecture stealing attacks and adversarial attacks while preserving the original functionality of the DNNs. Our results highlight the potential of DNN-Alias as a generic and hardware-independent defense mechanism for DNNs against side-channel attacks.
2304.12794
Expand-and-Cluster: Parameter Recovery of Neural Networks
Can we identify the weights of a neural network by probing its input-output mapping? At first glance, this problem seems to have many solutions because of permutation, overparameterisation and activation function symmetries. Yet, we show that the incoming weight vector of each neuron is identifiable up to sign or scaling, depending on the activation function. Our novel method 'Expand-and-Cluster' can identify layer sizes and weights of a target network for all commonly used activation functions. Expand-and-Cluster consists of two phases: (i) to relax the non-convex optimisation problem, we train multiple overparameterised student networks to best imitate the target function; (ii) to reverse engineer the target network's weights, we employ an ad-hoc clustering procedure that reveals the learnt weight vectors shared between students -- these correspond to the target weight vectors. We demonstrate successful weights and size recovery of trained shallow and deep networks with less than 10% overhead in the layer size and describe an 'ease-of-identifiability' axis by analysing 150 synthetic problems of variable difficulty.
Flavio Martinelli, Berfin Simsek, Wulfram Gerstner, Johanni Brea
2023-04-25T13:14:20Z
http://arxiv.org/abs/2304.12794v4
# Expand-and-Cluster: Parameter Recovery of Neural Networks

###### Abstract

Can we recover the hidden parameters of an Artificial Neural Network (ANN) by probing its input-output mapping? We propose a systematic method, called 'Expand-and-Cluster', that needs only the number of hidden layers and the activation function of the probed ANN to identify all network parameters. In the expansion phase, we train a series of networks of increasing size using the probed data of the ANN as a teacher. Expansion stops when a minimal loss is consistently reached in networks of a given size. In the clustering phase, weight vectors of the expanded students are clustered, which allows structured pruning of superfluous neurons in a principled way. We find that an overparameterization of a factor four is sufficient to reliably identify the minimal number of neurons and to retrieve the original network parameters in \(80\%\) of tasks across a family of 150 toy problems of variable difficulty. Furthermore, shallow and deep teacher networks trained on MNIST data can be identified with less than \(5\%\) overhead in the neuron number. Thus, while direct training of a student network with a size identical to that of the teacher is practically impossible because of the highly non-convex loss function, training with mild overparameterization followed by clustering and structured pruning correctly identifies the target network.

## 1 Introduction

It is known since the 1980s that finding a solution to the XOR problem with gradient descent is easier with a larger hidden layer, even though a minimal network with two hidden neurons is theoretically sufficient to solve the problem [1]. Indeed, even very small networks have a non-convex loss function [2; 3; 4]. During the last decades, advances in the theory of artificial neural networks indicate that the loss function is rough for networks of a minimal size, but becomes effectively convex in the limit of infinitely large hidden layers [5; 6; 7; 8; 9]. In a teacher-student setup, the complexity of the landscape can roughly be estimated from the ratio between the number of symmetry-induced critical-point manifolds at a positive loss and the number of manifolds at zero loss [10]; at zero loss, teacher and student are functionally equivalent. Importantly, as the degree of overparameterization is increased, the loss landscape undergoes a qualitative change from a ratio larger than one to a ratio smaller than one, suggesting that already for mild overparameterization the landscape is dominated by multi-dimensional zero-loss manifolds [10; 11].

Here we ask whether we can use these theoretical insights to construct a network identification algorithm. Network parameter identification requires the data to be generated by a neural network; therefore we work in the teacher-student framework. It is arduous to train to zero loss if the student has the same number of hidden neurons as the teacher; however, if the size of the student network is increased by a factor of two to four, training becomes reliable enough to find a solution close to zero loss. Therefore, we propose to first **expand** the number of neurons in the hidden layer of multiple (\(N\)) students until we can train to very low, ideally zero, loss. Using insights on the structure of the zero-loss manifold [10], we then **cluster** similar neurons between different students and prune back to the minimal network size.
Thus, the detour via mildly overparameterized networks enables us to reliably find the solution to the original non-convex parameter identification problem (Figure 1). We present a procedure to extract a functionally-equivalent model of minimal size; this is more than merely matching the teacher's accuracy on a test set. The desired result is a minimal list of parameters that one can send to a friend, and the friend can consult the teacher to verify that the parameters are correct and the list is minimal.

To test the Expand-and-Cluster algorithm, we propose a family of problems with artificial data that enable us to generate hundreds of different regression tasks of variable difficulty, dimensionality, and number of parameters, including generalized XOR-like problems. These tasks are harder than random-teacher models, which rarely lead to XOR-like situations. To simplify the procedure, we assume that we know the number of hidden layers and the activation function. For larger-scale applications, we use a regression task to extract parameters of a teacher network optimized on MNIST data. Overall, our examples show that the non-convex network identification problem can be successfully addressed. Our contributions can be summarized as follows:

* We demonstrate that we can achieve exact, functionally-equivalent _and_ minimal size parameter recovery with learning-based methods on toy-sized problems, despite the extreme non-convexity of the problem.

* Mild overparameterization is enough to solve the non-convex problem due to the combinatorial proliferation of global minima predicted by Simsek et al. [10].

* Our method is orthogonal to other parameter recovery works that are not learning-based, but either rely on special properties of ReLU [12; 13] or perform reconstruction in restricted setups such as committee machines with known layer size [14]. We show successful reconstruction for smooth activation functions on experiments of a similar scale and expand with results on deep fully connected networks.

## 2 Background and Methods

### 2.1 Theoretical foundations

Overparameterization consists in increasing the number of parameters of a neural network such that its expressivity is larger than necessary for representing a given dataset [15; 16]. For teacher-student setups, we call a student 'overparameterized' if it has more hidden neurons than the teacher in at least one layer. If an overparameterized student network replicates the teacher mapping with zero loss, the space of all possible solutions is fully described by the geometry of the global minima manifold [10]. The global minima manifold contains only two types of hidden units, namely duplicate and zero-type neurons (see Theorem 4.2 of Simsek et al. [10] and Fig. 2A), under the following assumptions: a one-hidden-layer network \(\sum_{i=1}^{m}a_{i}\sigma(w_{i}x)\), infinite input data support, the population loss limit, zero bias of all teacher neurons, and an analytical activation function \(\sigma\) with infinite non-zero even _and_ odd derivatives evaluated at zero. The last assumption guarantees that the activation function has no obvious or hidden symmetries around zero. We call activation functions that satisfy these assumptions 'symmetry-free'.
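As a quick numerical illustration of these two neuron types (a minimal sketch of ours, for illustration only): a duplicate-type group copies the teacher's input weight vector while its output weights sum to the teacher's output weight, and a zero-type group shares an arbitrary weight vector while its output weights sum to zero.

```python
import numpy as np

softplus = lambda z: np.logaddexp(0, z)          # smooth activation

rng = np.random.default_rng(0)
w_star, a_star = rng.normal(size=3), 1.5         # teacher neuron a* sigma(w* x)
x = rng.normal(size=(1000, 3))                   # probe inputs
teacher = a_star * softplus(x @ w_star)

# Duplicate-type group: w_i = w*, output weights a_i summing to a*.
a1, a2 = 2.3, a_star - 2.3
duplicates = a1 * softplus(x @ w_star) + a2 * softplus(x @ w_star)
assert np.allclose(duplicates, teacher)

# Zero-type group: shared arbitrary weight vector, output weights sum to 0.
w0 = rng.normal(size=3)
zero_group = 0.7 * softplus(x @ w0) - 0.7 * softplus(x @ w0)
assert np.allclose(zero_group, 0.0)
```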
Figure 1: **Expand-and-Cluster**: we overcome the non-convex problem of recovering the \(q\) parameters of an unknown network by: **(i)** expanding the parameter space by a factor \(\rho\) to relax the optimization problem, \(\Theta\rightarrow\hat{\Theta}\); and **(ii)** mapping the solution to the original parameter space through theory-informed clustering, \(\hat{\theta}^{*}\rightarrow\theta^{*}\). The dataset \(\mathcal{D}\) is generated by an unknown network.

The intuition for the result of Simsek et al. [10] is that, for zero-loss solutions, each teacher hidden neuron \(a^{*}\sigma(w^{*}x)\) must be replicated in the student by a _duplicate-type_ group of one or more units, contributing \(\sum_{i}a_{i}\sigma(w_{i}x)\). The duplicates' input weight vectors are all aligned with the teacher neuron, \(w_{i}=w^{*}\), while their summed output weights equal the teacher neuron's output weight, \(\sum_{i}a_{i}=a^{*}\). In the same student there can also exist _zero-type_ neuron groups with a null contribution to the student input-output mapping, characterized by \(w_{1}=\cdots=w_{q}\), \(\sum_{i}a_{i}=0\). Neuron types from Simsek et al. [10], adapted to the presence of biases, are summarized in Figure 2A. The assumption of a symmetry-free activation function \(\sigma\) is critical because, for example, a student network with tanh units could also contain anti-symmetric input vectors with a switch of output signs. In this paper, we drop several of the above assumptions to fully reconstruct a teacher network by identifying duplicate neurons while pruning all other superfluous hidden neurons.

### 2.2 Artificial data in teacher-student networks

To investigate the advantages of overparameterization as a function of layer size, we devised a series of very challenging tasks inspired by the parity-bit problem (or multidimensional XOR), known as a difficult problem for neural networks [1]. To create tasks of variable difficulty, we adapted the problem into a regression format using the following procedure: all tasks with artificial data have \(d\)-dimensional uniformly distributed input data in the range \(x_{i}\in[-\sqrt{3},\sqrt{3}]\). A specific task is defined by the parameters of a teacher network. Each hidden neuron \(i\) of the teacher is randomly sampled from a set of input weights \(w_{i}\in\{-1,0,1\}^{d_{in}}\), output weights \(a_{i}\in\{-1,1\}\) and biases \(b_{i}\in\{-\frac{2}{3}\sqrt{3},-\frac{1}{3}\sqrt{3},0,\frac{1}{3}\sqrt{3},\frac{2}{3}\sqrt{3}\}\). We repeat the sampling if two hidden neurons are identical up to output weight signs, to avoid two hidden neurons canceling each other. The resulting input weight vectors \(w\) are first normalized to unity and then both \(w\) and \(b\) are multiplied by a factor of 3. The above procedure yields hyperplanes in direction \(w\) located at a distance \(|b|/||w||\) from the origin, and a steeply rising (or falling) activation on the positive side of the hyperplane. Finally, analogous to batch normalization, the output weights and biases are scaled such that the output has zero mean and unit variance when averaged over the input distribution: \(a\leftarrow a/\mathrm{std}(y)\) and \(b_{2}=-\langle y\rangle/\mathrm{std}(y)\), where \(y\) is the output vector of the network. We study teachers with input dimensionality \(d_{in}\in\{2,4,8,16,32\}\) and hidden layer size \(r\in\{2,4,8\}\). Figure 4A shows examples of different teachers with input dimension \(d_{in}=2\) and a single hidden layer.
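A minimal NumPy sketch of this construction for a single-hidden-layer teacher (illustrative only: the duplicate-neuron rejection test is simplified to resampling all-zero weight rows, the output is standardized directly rather than via the \(a\) and \(b_{2}\) updates, and \(\sigma\) is the activation defined in the next paragraph):

```python
import numpy as np

def make_teacher(d_in, r, rng):
    W = rng.choice([-1.0, 0.0, 1.0], size=(r, d_in))       # input weights
    while (zero := ~W.any(axis=1)).any():                   # avoid w = 0 rows
        W[zero] = rng.choice([-1.0, 0.0, 1.0], size=(int(zero.sum()), d_in))
    a = rng.choice([-1.0, 1.0], size=r)                     # output weights
    b = rng.choice(np.sqrt(3) * np.arange(-2, 3) / 3, size=r)  # biases
    W = W / np.linalg.norm(W, axis=1, keepdims=True)        # normalize to unity
    return 3 * W, a, 3 * b                                  # multiply w, b by 3

def sigma(z):                                               # sigma_sig(4x) + sigma_soft(x)
    return 1.0 / (1.0 + np.exp(-4.0 * z)) + np.logaddexp(0.0, z)

rng = np.random.default_rng(0)
W, a, b = make_teacher(d_in=2, r=4, rng=rng)
x = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(30_000, 2))  # uniform inputs
y = sigma(x @ W.T + b) @ a
y = (y - y.mean()) / y.std()                                # zero mean, unit variance
```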
We use the symmetry-free activation function \(\sigma(x)=\sigma_{sig}(4x)+\sigma_{soft}(x)\), where \(\sigma_{sig}(x)=\frac{1}{1+e^{-x}}\) and \(\sigma_{soft}(x)=\log(1+e^{x})\), for all our simulations unless specified otherwise. The above construction of hidden neuron parameter vectors can be generalized to multi-layer teachers by stacking the procedure (further details in appendix A.5.2). Our construction yields XOR-like and checkerboard-like structures where hyperplanes are parallel to each other and divide the input space into separate regions. In contrast to our approach, constructing shallow networks with randomly drawn input weight vectors yields easy tasks, since all weights tend to be orthogonal [17; 18; 19], and it is known that randomly initialized deep networks tend to behave as constant random functions [20], yielding uninteresting data-generator models. In contrast to committee machines [14], our artificial tasks are more difficult since the output neuron is not merely averaging the contributions of the hidden neurons.

### 2.3 MNIST regression

To extend the procedure beyond artificial tasks, we also recover parameters of networks trained on the MNIST dataset [21]. We pre-trained teacher networks of either 10, 30, or 60 hidden neurons on MNIST (after removal of the uninformative input pixels, Fig. S5) with standard training procedures. We then used input-output pairs generated by the last layer of the teacher network to define a regression task on which we trained student networks. For our method to work, it is crucial that we have access to the classifier's probabilistic output (e.g., the values after the softmax operation or multiple accumulated decisions of a stochastic classifier) and not only the most probable class (e.g., values after the argmax operation), which would make parameter recovery impossible.

### 2.4 Related Work

**1. Pruning:** Commonly used pruning methods are heuristic and based on pruning weights [22, 23, 24], whereas we propose a theory-based [10] approach that removes entire units (structured pruning), potentially relevant for hardware implementations [25]. Our Expand-and-Cluster method reconstructs a minimal network by positively selecting commonly occurring hidden neurons across a set of overparameterized student networks, as opposed to other structured pruning algorithms [26, 27, 28], which remove putatively redundant units; Srinivas and Babu [29] also prune neurons based on weight similarity. In contrast to Ankner et al. [30], who link prunability to data dimensionality, we show a link to data complexity for fixed dimensionalities.

**2. Loss landscape and overparameterization:** Neural network landscapes exhibit a cusp-like transition from under- to over-parameterization [31, 32], related to double-descent phenomena [33]. Overparameterized solutions found by different random initializations are similar to each other in function space [34] and have been _approximately_ mapped to each other by permutation of hidden neurons [35, 36, 37, 38, 39, 40]. In the teacher-student setting, different zero-loss solutions are _exactly identical_ to each other up to duplicate-type neurons, zero-neuron addition, and permutation symmetry in the population loss limit [10]. However, convergence to a zero-loss solution can be hindered either by the emergence of local minima [41] or by the flatness at the bottom of the landscape [42].
**3. Non-convex optimization:** Many non-convex optimization problems are tackled with the following strategy: (i) expand, or _lift_, to a higher-dimensional space in order to relax the problem and guarantee convergence to global minima; (ii) map, or _project_, the relaxed solution to the original space by exploiting the problem's intrinsic symmetries and geometry [43]. This approach is used in applied mathematics [44, 45], computer vision [46], nonlinear programming [47, 48, 49], control theory [50], machine learning [43, 51], and many others.3 Despite the above-mentioned achievements, for neural networks the picture is far from complete. Unlike infinitely-wide neural networks [5], the loss functions of finite-width neural networks exhibit essential non-convexity, so that the gradient flow converges to many fundamentally different solutions depending on the initialization [52, 41, 53]. Shallow networks with polynomial activation functions can be globally optimized under certain guarantees by lifting the optimization problem to tensor decomposition [51, 54]. Even though the mildly overparameterized regime is non-convex, we show that we can exploit its reduced complexity (compared to non-overparameterized problems [10]) to find zero-loss solutions.

Footnote 3: A curated list of solvable non-convex optimization problems can be found here: [https://sunju.org/research/nonconvex](https://sunju.org/research/nonconvex)

**4. Interpretability:** Explaining in qualitative terms the behavior of single neurons embedded in deep networks is a challenging task [55, 56]. For example, in symbolic regression, small networks with vanishing training loss are desirable for interpretability [57]. We provide precise mathematical explanations of all hidden neurons found in zero-loss overparameterized student networks in relation to a teacher network of minimal size, going beyond the notion of 'superimposed' features described in Elhage et al. [58]. We follow the terminology of [10] and use the terms duplicate or zero-type neurons, which are related to 'Monosemantic' [59] and 'Frivolous' [60] units found in various settings.

**5. Functionally Equivalent Model Extraction:** Our paper focuses on functionally equivalent extractions, that is, retrieving a model \(\mathcal{M}\) such that \(\forall x\in X,\mathcal{M}(x)=\mathcal{M}^{*}(x)\), where \(\mathcal{M}^{*}(x)\) is the target model. This type of extraction is the hardest achievable goal in the field of model stealing attacks [61], using only input-output pairs [13]. In addition, out of all the functionally equivalent models that we describe in Section 3.1, we extract the one of _minimal size_. Conditions for neural network identifiability and their symmetries have been studied theoretically for different activation functions [62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72], although overparameterized solutions were not considered. To the best of our knowledge, existing functionally equivalent extractions of _trained networks_ rely on identifying boundaries between linear regions of shallow ReLU networks [13, 73], with partial success for deep ReLU networks [12]. Janzamin et al. [51] show a theoretical reconstruction based on third-order derivatives. Fornasier et al. [14], building upon [74, 75, 76, 77], propose an identification method for wide committee machines (shallow networks with unitary second-layer weights) and unit-norm teacher weights, where knowledge of the teacher layer size is necessary.
We are the first to propose an exact recovery method for arbitrary activation functions on shallow and deep fully connected networks of unknown layer widths, with successful reconstructions of networks trained on MNIST. Our method is learning-based and fundamentally different from the known approaches. However, none of these methods were shown to work in large-scale applications.

## 3 Results

### 3.1 Neuron types near zero-loss

We extend the categorization of neuron types of Simsek et al. [10] to neurons with _bias_, _finite_ input data support, and analytical activation functions that contain one _symmetry_ due to infinite even or odd non-zero derivatives. This generalization includes commonly used activation functions such as GELU, sigmoid, tanh, and softplus. The catalog of new neuron types is sketched in Figure 2B. In the case of a teacher neuron with bias \(a^{*}\sigma(w^{*}x+b^{*})\), a group of _duplicate-type_ neurons \(\sum_{i}a_{i}\sigma(w_{i}x+b_{i})\) has input weight vectors and biases aligned to the teacher: \(w_{i}=w^{*},\ b_{i}=b^{*},\ \sum_{i}a_{i}=a^{*}\), while the new _zero-type_ group has aligned, but arbitrary, weights and biases: \(w_{1}=\cdots=w_{q},\ b_{1}=\cdots=b_{q},\ \sum_{i}a_{i}=0\). With biases, _constant-type_ student neurons can also arise: they have vanishing input weights \(w=0\) and contribute a constant amount of \(a\sigma(b)\) to the next layer; to keep an exact mapping of the teacher, this constant contribution must be accounted for in the next-layer biases.

**GELU or softplus** teacher neurons can be written as a combination of a linear and an even function: \(a^{*}\sigma(w^{*}x+b^{*})=a^{*}c_{1}\cdot(w^{*}x+b^{*})+a^{*}\sigma_{\text{even}}(w^{*}x+b^{*})\), where \(c_{1}\) is the slope of the linear approximation around \(0\). The even symmetry allows student neurons to combine in groups of aligned, \(w=w^{*}\), and opposite, \(w=-w^{*}\), input weight vectors and biases (see appendix A.1.1 for details). If in such a group the output weights sum to zero, we obtain a _linear-type_ group that contributes a linear function to the next layer. To guarantee an exact mapping of the teacher, there must be another _linear-type_ group in the same layer contributing the exact opposite linear term. Alternatively, if the sum of the output weights matches the teacher output weight, then the neuron group is a _linear duplicate-type_, replicating a teacher neuron up to a misaligned linear contribution; the latter can be accounted for by another linear group in the same layer. A numerical example of different neurons found by a softplus student is shown in Figure S6.

**Tanh and sigmoid** can be treated similarly: student neurons can group with aligned and opposite weight vectors and combine to form constant types or duplicate plus constant. They are described in detail in the appendix, along with a categorization of all commonly used activation functions (appendix A.1.2). Finally, in near zero-loss solutions, students may have _offbound-type_ neurons, characterized by being almost constant, linearly increasing, or decreasing (depending on the asymptotic behavior of the activation function) in regions of the input space with actual data, which is reminiscent of the dead ReLU phenomenon. The output of groups of _offbound-type_ neurons can synergize so as to contribute constant amounts, which can be compensated by bias adjustments in the next layer; see Figure 2.
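For softplus, this decomposition can be made explicit (a short worked example of ours; the text states only the general claim): since \(\sigma_{soft}(-x)=\sigma_{soft}(x)-x\), the function \(\sigma_{soft}(x)-\tfrac{1}{2}x\) is even, giving

\[\sigma_{soft}(x)=\underbrace{\tfrac{1}{2}x}_{\text{linear part},\ c_{1}=\frac{1}{2}}+\underbrace{\sigma_{soft}(x)-\tfrac{1}{2}x}_{\text{even part }\sigma_{\text{even}}(x)},\qquad\sigma_{\text{even}}(-x)=\sigma_{soft}(x)-x+\tfrac{1}{2}x=\sigma_{\text{even}}(x).\]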
Figure 2: **Catalog of neuron types near zero loss:** with \(\sigma\) analytical, near-zero loss overparameterized students can only contain a handful of neuron types. On the left, a teacher neuron is defined along with a sketch of its output; the grey bar indicates the finite input support \(x\) to the neuron; the same color-coded letters indicate equal quantities. **A) Neuron types from Simsek et al. [10] adapted with biases:** _duplicate-type_ neurons combine together to replicate a teacher neuron by copying its weight vector \(w^{*}\) and bias \(b^{*}\); their activations \(a_{i}\) sum up to the teacher activation \(a^{*}\). _Zero-types_ have aligned weight vectors and biases but cancel each other via output weights. **B) Additional neuron types:** _constant-types_ contribute a fixed amount to the next layer by learning a null vector. Only in the presence of even + linear activation functions, such as GELU, _linear-type_ groups combine in a way to contribute a linear function, while _linear duplicate types_ replicate the teacher neuron summed to an extra linear component. _Offbound-types_ appear at non-exact zero loss and are characterized by having the nonlinearity placed away from their input support. ### Expand-and-Cluster algorithm We recall that the only way an overparameterized student can reach zero loss is by representing all the teacher neurons at least once; see Section 3.1. If a student can be trained to exact zero loss, which is possible in small toy setups, it is almost trivial to identify the different neuron types, including the teacher neurons (see a numerical example in Fig. S6). However, in practice, and even in larger tasks with artificial data, overparameterized networks are difficult to train to exact zero loss because of computational and memory budgets. Therefore we need ways to identify the teacher neurons from imperfectly trained students. These students present approximate duplicates of teacher neurons and neurons of other types that point in arbitrary directions. In a group of \(N\) imperfectly trained students, neurons that approximate teacher duplicates form clusters, while other neurons do not, since they are not aligned between students. Therefore we propose the following clustering procedure (Fig. 3): **Step 1: Expansion phase.** Rapidly train a sequence of networks with increasing sizes of hidden layers to a fixed convergence criterion. To do so, use teacher-generated input-output pairs, \(\mathcal{D}=\{\mathbf{X},y\}\), and minimize the mean square error loss with standard gradient descent methods. This expansion phase allows finding a network width \(m\) at which convergence to nearly zero loss is possible (Fig. S3B). **Step 2: Training phase.** Train \(N\) students of \(L\) layers, width \(m\), on \(\mathcal{D}=\{\mathbf{X},y\}\) to minimize the mean square error to the lowest value achievable by the chosen optimizer (Fig. 3A). **Step 3: Clustering phase.** Collect the first hidden layer neurons of the \(N\) students, then cluster the input weight vectors with hierarchical clustering on the L2 distance. With a threshold selection criterion that maximizes the number of large clusters (size \(\geq\gamma N\), \(0<\gamma\leq 1\)), obtain groups of aligned weight vectors; these clusters should include all duplicate teacher neurons. Proceed to filter out clusters whose elements are not aligned in angle (median alignment \(\geq\beta\)), removing any zero- or constant-type neuron clusters (a code sketch of this phase is given below). 
Then, merge each remaining cluster of duplicate neurons into single hidden neurons to reconstruct the layer. We noticed that higher layers align with the teacher weights only at prohibitively low losses [78]. Therefore, if the student networks have more than one hidden layer left to reconstruct, we go back to Step 2 and train again \(N\) overparameterized students of \(L\gets L-1\) layers, using as input \(\mathbf{X}\) the output of the last reconstructed layer. Repeat this procedure until the last hidden layer is reconstructed (Fig. 3, Algorithm 1; more details in appendix A.5.2). **Step 4: Fine-tuning phase.** Adjust the final parameters using the training data.

Figure 3: **Parameter identification with Expand-and-Cluster.** **A) Training scheme:** once an overparameterization factor yields near-zero training losses, train \(N\) overparameterized students on the teacher-generated dataset \(\mathcal{D}(\mathbf{X},y)\); **B) Similarity matrix:** \(\ell_{2}\)-distance between hidden neurons' input weight vectors of layer \(l\) for all \(N\) students. Large-sized clusters (red and green) are good candidate weight vectors. **C) Dendrogram obtained with hierarchical clustering:** the selected linkage threshold is shown in orange. Clusters are eliminated if too small (blue) or unaligned (red); the remaining clusters are shown in green.
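A minimal sketch of the clustering phase (Step 3) follows, assuming SciPy; the quantile scan stands in for the threshold selection criterion and the angle test for the alignment filter, and all names are illustrative rather than the authors' implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_first_layer(weight_sets, gamma=0.5, beta=np.pi / 6):
    """Cluster first-hidden-layer weight vectors pooled from N students.

    weight_sets: list of N arrays of shape (m, d_in + 1), the input
    weights of each student with the bias appended.
    Returns merged candidate teacher weight vectors (cluster centroids).
    """
    N = len(weight_sets)
    W = np.vstack(weight_sets)                      # pool all neurons
    Z = linkage(W, method="average", metric="euclidean")
    # Scan linkage thresholds; keep the one maximizing the number of
    # "large" clusters (size >= gamma * N).
    best_labels, best_count = None, -1
    for t in np.quantile(Z[:, 2], np.linspace(0.05, 0.95, 50)):
        labels = fcluster(Z, t=t, criterion="distance")
        sizes = np.bincount(labels)
        count = np.sum(sizes >= gamma * N)
        if count > best_count:
            best_count, best_labels = count, labels
    merged = []
    for cl in np.unique(best_labels):
        members = W[best_labels == cl]
        if len(members) < gamma * N:
            continue                                # too small: discard
        centroid = members.mean(axis=0)
        cos = members @ centroid / (
            np.linalg.norm(members, axis=1) * np.linalg.norm(centroid) + 1e-12)
        if np.median(np.arccos(np.clip(cos, -1.0, 1.0))) >= beta:
            continue                                # unaligned: zero/constant types
        merged.append(centroid)                     # duplicate group -> one neuron
    return merged
```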
### Artificial data experiments We trained overparameterized students on the family of teachers of Section 2.2 with a single hidden layer of \(r\) neurons (Figs. 4 and S7). For an overparameterization factor \(\rho\), the student hidden layer has \(\rho r\) neurons. To stay close to the theoretical setting of near-zero loss and to obtain perfect parameter recovery, we trained all the networks with the package MLPGradientFlow.jl [79]. This allowed us to find global and local minima with machine precision accuracy for networks without overparameterization (Fig. 4B, \(\rho=1\)). However, even with second-order ODE solvers and slightly larger networks, it becomes challenging to converge fully to global minima within a reasonable amount of time (Fig. 4C, \(\rho\in\{2,4,8\}\)). Hence, methods to deal with imperfectly trained students are needed. Since training is full-batch (30k data points), the only source of randomness is in the initialization; see appendix A.5 for more details.

Figure 4: **Artificial data define tasks of variable difficulty.** **A) For fixed \(\mathbf{d_{in}}\), teacher complexity increases with number \(\mathbf{r}\) of hidden neurons:** contour plot of the teacher network output \(y=b^{2}+\sum_{i}^{r}a_{i}\ g(w_{i}{}^{T}x+b_{i}^{1})\) for \(d_{in}=2\) input dimensions and different \(r\); \(b^{2}\) denotes the bias of the output unit. Each hidden neuron generates a hyperplane, \(w_{i}{}^{T}x+b_{i}^{1}=0\) (dashed lines); the direction of the weight vector \(w_{i}\) is indicated by an arrow and the sign of the output weight \(a_{i}\) by its color. Top left: generalization of the XOR or parity-bit problem to a regression setting. From left to right: as the number of hidden neurons increases, the level lines become more intricate. **B) Non-convexity prevents training to zero loss:** for each combination of \(d_{in}=2,4,8,16,32\) and \(r=2,4,8\) we generated 10 teachers; for each teacher, we trained 20 or 10 students (for \(r=8\)) with different seeds. Each teacher corresponds to one row of dots while each dot corresponds to one seed (see inset bottom right). Dark blue dots indicate loss below \(10^{-14}\). Student networks of the same size as the teacher (\(\rho=1\)) often get stuck in local minima. The effect is stronger for larger ratios \(r/d_{in}\). **C) Effects of overparameterization on convergence:** student networks with overparameterization \(\rho\geq 2\) are more likely to converge to near-zero loss than those without. We report the following general trends: (i) overparameterization avoids high-loss local minima; (ii) the dataset complexity, i.e. the number of hidden neurons per input dimension \(r/d_{in}\), determines the amount of overparameterization needed for reliable convergence to near-zero loss. For difficult teachers, i.e. overcomplete (\(r/d_{in}\geq 1\)), training is very slow and convergence is not guaranteed in a reasonable amount of time (see Fig. S4).

Figure 4C shows a beneficial trend as overparameterization increases, but also highlights a strong dependence on the dataset (or teacher) complexity \(r/d_{in}\): as the number of hyperplanes per input dimension increases, it becomes harder to train overparameterized students to global minima. We find that direct training of 20 student networks without overparameterization (data generated from a teacher network with \(r=4\) hidden neurons and input dimensionality \(d_{in}=4\)) does not yield a single case of convergence to zero loss (Fig. 5A). For the same teacher, the application of the Expand-and-Cluster algorithm yields student networks that achieve zero loss and hidden-layer size equal to that of the teacher if an overparameterization of \(\rho=2,4\) or 8 is used in Step 2. This suggests that successful retrieval of all parameters of the teacher network is possible (Fig. 5A). We tested the quality of parameter identification with Expand-and-Cluster for each teacher network of Figure 4 and illustrate the final loss of the reconstructed networks in Figure 5. For example, of the 30 teacher networks with input dimensionality \(d_{in}=8\), all except 2 networks were correctly identified, as indicated by a zero-loss solution (for \(\rho=4,8\) and RMSE \(\leq 10^{-14}\), dark blue in Fig. 5B). Of 150 different teacher networks, 118 (\(\sim 80\%\)) were correctly identified with \(\rho=4\). For all correctly identified networks, the maximum angle between the weight vectors of the teacher network and the corresponding vector of the reconstructed student network was less than \(3\cdot 10^{-8}\) radians. In all but 7 out of 118 successful recoveries, the number of neurons found matches that of the teacher; the other cases have at most 4 neurons in excess (these can be easily categorized into zero-type and constant-type neurons, see e.g. Fig. S6). ### MNIST shallow and deep experiments Now we turn to applications using bigger networks where zero-loss solutions are not attainable because of computational time and memory budget, and noise in the optimizer (SGD). Expand-and-Cluster can discern duplicates from zero- or constant-type neurons that would not be otherwise distinguishable with a simple hard threshold on the weight values and therefore outperforms 'magnitude pruning' [23]. We explore the sensitivity of Expand-and-Cluster to the number of students \(N\) on the MNIST regression task described in Section 2.3 (Figure 6A). We found that with two students Expand-and-Cluster works reasonably well but does not reliably identify the minimal network. 
However, with \(N\geq 4\) student networks, the Expand-and-Cluster algorithm enabled us to identify the teacher network with at most 5 percent additional neurons.

Figure 5: **A) Expand-and-Cluster applied to mildly overparameterized students reaches zero loss:** a total of 80 student networks with 4, 8, 16 or 32 hidden neurons have been trained using data generated by a teacher with \(r=4\) hidden neurons and \(d_{in}=4\) input dimensions. None of the 20 students with 4 hidden neurons reached zero loss (orange dots, \(\rho=1\)), while all overparameterized student networks have zero loss with 4 hidden neurons after reconstruction (large colored stars). **B) Loss after Expand-and-Cluster for all teacher networks and student sizes from Figure 4:** the color of each small horizontal bar represents the final loss. Only a small fraction of teacher networks (i.e., those in yellow) were not identified correctly.

Unlike other methods [12; 13; 14], Expand-and-Cluster identifies deep fully connected networks trained on artificial data (see appendix A.3) and larger networks trained on MNIST (Figure 6B). We show a successful reconstruction of three hidden layers of \(30\) neurons each by applying Expand-and-Cluster to students overparameterized by a factor 3. The reconstruction identifies layer sizes with up to \(4\) neurons in excess for the last hidden layer and achieves a loss \(3\) orders of magnitude lower than that of similar-sized networks trained from random initialization (Fig. 6B). The reconstructed network has a fidelity (fraction of labels matching the teacher) of \(100\%\) and \(96.75\%\) on the train and test set, respectively. ## 4 Conclusions Even if the data is generated by a teacher of known size and architecture, recovering the parameters of the teacher network by training a student network of the same size is difficult to achieve with gradient-descent methods [13]. Indeed, if a student network of the same size as the teacher is trained, it is rare to find the global minimum because of the extreme non-convexity of the loss function: across 1024 training runs of students that contain the same number of hidden neurons as the teacher, the global minimum was found only in 2 cases (Fig. S3A). However, using the same computation budget to train 128 students containing \(\rho=8\) times more neurons than the teacher, we reached near zero-loss solutions in 72 cases (\(56\%\)). For a successful application of the Expand-and-Cluster algorithm, it is desirable to have at least \(N=3\) students near a global minimum. Thus, with the above probability of convergence with an overparameterization of \(\rho=8\), ten students are sufficient to find at least three low-loss solutions with \(\sim 95\%\) probability - yet, using the same computation time for 80 runs of the student with the minimal number of hidden units is unlikely to yield even one zero-loss solution. In summary, the detour of expansion-convergence-clustering is a computationally efficient way to find the unique (up to permutations) global minima of a non-convex loss function where standard solution methods rarely work. Four generalizations of our approach should be straightforward. First, Expand-and-Cluster should work on convolutional layers just as well as it does on dense layers. Second, we obtained our results in a regression setting with a mean square error loss, but the method should also work with the cross-entropy loss as is commonly used in classification settings. 
Third, our simulation results are obtained with a symmetry-free activation function, but, as discussed in Section 3.1, our results can be generalized to common activation functions if we correctly account for symmetries. Fourth, the choice of dataset should not affect the difficulty, since we are dealing with a regression task on data generated by a smaller teacher network. At this stage, our results are limited to simple setups, primarily due to the high computational budget required to reach low losses in more complicated setups. For shallow ReLU networks (appendix A.4), our method does not reach the same exact extraction accuracy as other methods [12; 13], probably due to the slow fine-tuning required. In contrast, our method works for arbitrary smooth activation functions and is extendable to deeper networks.

Figure 6: **Parameter recovery for larger networks.** **A) MNIST shallow teachers:** fraction of excess neurons (with respect to the teacher size) clustered as a function of the number \(N\) of students used for Expand-and-Cluster(\(N,\gamma=0.5,\beta=\pi/6\)). Combined statistics across three shallow teachers of sizes \(r\in\{10,30,60\}\) pre-trained on MNIST data. **B) MNIST deep teacher:** a fully connected teacher of layer sizes 784-30-30-30-10 trained on MNIST is reconstructed with Expand-and-Cluster(\(N=50,\gamma=0.5,\beta=\pi/5\)) applied to students of factor \(\rho=3\) overparameterization (magenta dots), with 4 excess neurons in the last hidden layer (magenta star). Direct training with the teacher network architecture never reached losses below \(0.1\) (orange dots).

## Acknowledgments and Disclosure of Funding This work was supported by Sinergia Project CRSII5_198612 and SNF Project 200020_207426.
2306.03786
Residual-based error bound for physics-informed neural networks
Neural networks are universal approximators and are studied for their use in solving differential equations. However, a major criticism is the lack of error bounds for obtained solutions. This paper proposes a technique to rigorously evaluate the error bound of Physics-Informed Neural Networks (PINNs) on most linear ordinary differential equations (ODEs), certain nonlinear ODEs, and first-order linear partial differential equations (PDEs). The error bound is based purely on equation structure and residual information and does not depend on assumptions of how well the networks are trained. We propose algorithms that bound the error efficiently. Some proposed algorithms provide tighter bounds than others at the cost of longer run time.
Shuheng Liu, Xiyue Huang, Pavlos Protopapas
2023-06-06T15:37:03Z
http://arxiv.org/abs/2306.03786v1
# Residual-Based Error Bound for Physics-Informed Neural Networks ###### Abstract Neural networks are universal approximators and are studied for their use in solving differential equations. However, a major criticism is the lack of error bounds for obtained solutions. This paper proposes a technique to rigorously evaluate the error bound of Physics-Informed Neural Networks (PINNs) on most linear ordinary differential equations (ODEs), certain nonlinear ODEs, and first-order linear partial differential equations (PDEs). The error bound is based purely on equation structure and residual information and does not depend on assumptions of how well the networks are trained. We propose algorithms that bound the error efficiently. Some proposed algorithms provide tighter bounds than others at the cost of longer run time. ## 1 Introduction Differential equations (DEs) are a useful mathematical tool for describing various phenomena in natural sciences, engineering, and the humanities. As universal approximators, neural networks are powerful in approximating unknown functions. With back-propagation and modern computing devices, neural networks are convenient to differentiate, making them an ideal choice for solving differential equations. However, a major criticism of neural network solutions to DEs is the lack of error bounds. Traditional numerical methods, such as the finite difference method (FDM) and the finite element method (FEM), compute numerical solutions with known error bounds. Unlike traditional methods, the error bounds of neural network solutions are not well-studied. Therefore, solving DEs with neural networks requires ad hoc customization and empirical hyperparameter finetuning. If the error of _any_ given network can be bounded, we can train neural networks until the error falls below a specified tolerance threshold. Our contribution is that we propose rigorous error-bounding algorithms for any neural network solution to certain classes of equations, including linear ODEs, certain nonlinear ODEs, and first-order linear PDEs. These algorithms can also be extended to bound the error of other classes of equations as well. The proposed algorithms only use residual information and equation structure as inputs and do not rely on assumptions of finetuning. Section 2 introduces the symbols and notations adopted in this paper. Section 3 reviews the literature on emerging areas of research that are relevant to solving DEs with neural networks. Section 4 explains the existing efforts to bound the error of neural network DE solutions. Sections 5 and 6 propose various algorithms for the error bound of ODEs and PDEs, respectively. Section 7 uses the method of manufactured solutions to verify the validity of each error-bounding algorithm and provides visualization of the tightness of the bounds. ## 2 Symbols and Notations DEs in this paper are posed w.r.t. unknown function \(v\), \[\mathcal{D}v=f,\] where \(\mathcal{D}\) is a possibly nonlinear differential operator and \(f\) is some forcing function. Unlike the exact solution \(v(\cdot)\), a neural network solution \(u(\cdot)\) does not strictly satisfy the equation. Instead, it adds to the equation a residual term \(r\), which the network aims to minimize, \[\mathcal{D}u=f+r.\] The input to \(v\), \(u\), \(f\), and \(r\) is time \(t\) for ODEs and spatial coordinates \((x,y)\) for PDEs. We limit our reasoning to 2-dimensional PDEs in this work. 
In cases with multiple unknown functions, we use vector notations \(\mathbf{v}\), \(\mathbf{u}\), and \(\mathbf{r}\) instead of the scalar notations \(v\), \(u\), and \(r\). The loss function of the network solution is defined as the \(L^{2}\) norm of residual \(r\) over the domain of interest, \[\mathrm{Loss}(u):=\frac{1}{|I|}\int_{I}\|r\|^{2}\mathrm{d}I=\frac{1}{|I|}\int_{I }\|\mathcal{D}u-f\|^{2}\mathrm{d}I, \tag{1}\] where a spatial domain \(\Omega\) is substituted for the temporal domain \(I\) in the case of a PDE. ### Initial and Boundary Conditions For a neural network to satisfy initial or boundary conditions, we apply a technique called _parametrization_. As an intuitive example, the parametrization \(u(t)=(1-e^{-t})\mathrm{Net}(t)+v(0)\) guarantees that \(u(t)\) satisfies the initial condition \(u(0)=v(0)\) regardless of the network \(\mathrm{Net}(\cdot)\). This does not affect the capability of \(\mathrm{Net}(\cdot)\) to learn any solution. The parametrization is more complicated for higher-order ODEs and most PDEs and has been extensively studied by Lagaris et al. (1998); Lagaris et al. (2000); McFall and Mahan (2009); Lagari et al. (2020), and Sukumar and Srivastava (2021). In this work, we assume all initial and boundary conditions are exactly satisfied. ### Error and Error Bound The error of a network solution \(u\) is defined as \[\eta:=u-v. \tag{2}\] We are interested in _bounding_ the error with a scalar function \(\mathcal{B}\) such that \[\|\eta(t)\|\leq\mathcal{B}(t)\quad\text{or}\quad\|\eta(x,y)\|\leq\mathcal{B}( x,y) \tag{3}\] where \(\|\eta\|=\|u-v\|\) is the _absolute error_. If \(\mathcal{B}\) takes on the same value \(B\in\mathbb{R}^{+}\) over the domain, it can be replaced with a constant \(B\). Notice that multiple bounds \(\mathcal{B}\) exist for the same network solution \(u\). For example, \(|\eta(t)|\leq\mathcal{B}^{(1)}(t)\leq\mathcal{B}^{(2)}(t)\leq\cdots\leq B\) are bounds in decreasing order of tightness. Tighter bounds incur a higher computational cost, and looser bounds (such as constant \(B\)) are faster to compute. A summary of the applicability, restraints, run-time complexity, and relative tightness of all proposed algorithms is listed in Table 1. ## 3 Literature Review Hornik et al. (1989) showed that neural networks are universal function approximators. Lagaris et al. (1998) first studied the application of neural networks in solving DEs. The term _physics-informed neural networks_, or PINNs, was first introduced by Raissi et al. (2019) to name neural networks that satisfy DEs while fitting observed data points. Although we train PINNs only to solve DEs without any observed data in this work, the error-bounding algorithms we propose work for any given neural network, regardless of the training process. Flamant et al. (2020) and Desai et al. (2021) showed that one main advantage of neural networks over traditional numerical methods, such as FDM and FEM, is that neural networks can potentially learn the structure of the solution space and give a bundle of solutions \(u(\mathbf{x};\Theta)\) for different equation setup and initial/boundary conditions parameterized by \(\Theta\). For traditional methods, a new solution must be recomputed for any slight changes in equation setup or initial/boundary conditions. Some effort has been made to redefine the objective loss function. Yu et al. (2017) applied the Ritz method to a particular class of variational problems. Mattheakis et al. 
(2020) incorporated an additional constraint to force the network to learn solutions with energy conservation. Parwani and Protopapas (2021) used an adversarial network for sampling in particular areas of the domain where the residual is large. In recent years, there have also been works that study the failure modes of PINNs and quantify the error of PINN solutions. Graf et al. (2021) worked on quantifying the uncertainty of PINNs using the Bayesian framework. Krishnapriyan et al. (2021) characterized possible failure modes of PINNs by studying the performance of PINNs on simple problems and analyzing their loss landscape. Krishnapriyan et al. (2021) also concluded that optimization difficulty is the essential cause of failure. Our work uncovers the mathematical relationship between residual information and the error of PINNs on several classes of ODEs and PDEs. We propose different algorithms for various classes of equations and experimentally validate these algorithms. ## 4 Existing Work Sirignano and Spiliopoulos (2018) showed that for a class of quasi-linear parabolic PDEs, a neural network with a single hidden layer and sufficiently many hidden units could arbitrarily approximate the exact solutions. Guo and Haghighat (2022) proposed an energy-based _constitutive relation error_ bound for elasticity problems. De Ryck and Mishra (2022) derived an error bound for ReLU networks on parametric hyperbolic conservation laws. De Ryck and Mishra (2022) showed that there exists some PINN with arbitrarily small residual for Kolmogorov PDEs. De Ryck and Mishra (2022) derived an error bound for operator learning with PINNs. The works of De Ryck and Mishra mentioned above did not bound the error of every given network. Instead, they mathematically proved the existence of a network with errors below a specified bound, under certain assumptions of network architecture, including width, depth, and activation functions. The question remaining to be answered is how to overcome optimization difficulties and find such a neural network. Our work differs from the above in that we bound the error of _any_ neural network regardless of finetuning, even networks with randomly initialized weights. Our algorithms only depend on inputs of residual information \(r\), often used as training loss, and equation structure \(\mathcal{D}v=f\). The output is a (possibly constant) function that is guaranteed to bound the error at any point in the domain. ## 5 Error Bound for ODE This section considers both linear and nonlinear ODEs over the temporal domain \(I=[0,T]\). Initial conditions are imposed on \(\frac{\mathrm{d}^{k}}{\mathrm{d}t^{k}}v(t=0)\) for \(k=0,\ldots,(n-1)\), where \(n\) is the highest order of derivative terms in the ODE. ### Error Bound for Linear ODE Consider the linear ODE \(\mathcal{L}v(t)=f(t)\), where \(\mathcal{L}\) is a linear differential operator. Its neural network solution \(u\) satisfies \(\mathcal{L}u(t)=f(t)+r(t)\). Since the error is \(\eta:=u-v\), there is \[\mathcal{L}\eta(t)=r(t). \tag{4}\] With the assumption in Section 2.1 that \(u\) satisfies the initial conditions at \(t=0\), there is \[\eta(0)=0,\quad\frac{\mathrm{d}}{\mathrm{d}t}\eta(0)=0,\quad\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\eta(0)=0,\quad\ldots \tag{5}\] With initial conditions 5 known, a unique inverse transform \(\mathcal{L}^{-1}\) to \(\mathcal{L}\) exists. Applying \(\mathcal{L}^{-1}\) to Eq. 4, there is \[\eta(t)=\mathcal{L}^{-1}r(t). 
\tag{6}\] Hence, bounding the absolute error \(|\eta|\) is equivalent to bounding \(\big{|}\mathcal{L}^{-1}r\big{|}\). Notice that only a) the equation structure \(\mathcal{L}\) and b) the residual information \(r\) are relevant to estimating the error bound. All other factors, including parameters of the neural network \(u\), forcing function \(f\), and initial conditions, do not affect the error bound at all. #### 5.1.1 Single Linear ODE with Constant Coefficients Consider the case where \(\mathcal{L}=\frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}+\sum_{j=0}^{n-1}a_{j}\frac {\mathrm{d}^{j}}{\mathrm{d}t^{j}}\) consists of only constant coefficients \(a_{0},a_{1},\ldots,\in\mathbb{R}\). Its characteristic polynomial (defined below) can be factorized into \[\lambda^{n}+a_{n-1}\lambda^{n-1}+\cdots+a_{0}=\prod_{j=1}^{n}(\lambda-\lambda _{j}), \tag{7}\] where \(\lambda_{1},\ldots,\lambda_{n}\in\mathbb{C}\) are the characteristic roots. It can be shown that, for a semi-stable system (\(\mathcal{R}e\left(\lambda_{j}\right)\leq 0\) for all \(\lambda_{j}\)), an error bound can be formulated as \[|\eta(t)|\leq\mathcal{B}_{loose}(t):=C_{\lambda_{1:n}}\;R_{\max}\,t^{Z}, \tag{8}\] where \(0\leq Z\leq n\) is the number of \(\lambda_{j}\) whose real part is \(0\), \(C_{\lambda_{1:n}}:=\frac{1}{Z!}\prod_{j=1;\lambda_{j}\neq 0}^{n}\frac{1}{ \mathcal{R}e(-\lambda_{j})}\) is a constant coefficient, and \(R_{\max}:=\max_{t\in I}|r(t)|\) is the maximum absolute residual. Knowing bound 8 is sufficient to qualitatively estimate the error for applications where only the order of error is concerned. See Alg. 1 for reference. An issue with Eq. 8 and Alg. 1 is that they assume \(\mathcal{R}e\left(\lambda_{j}\right)\leq 0\) for all characteristic roots \(\lambda_{j}\). To address this issue, we propose an alternative error-bounding Alg. 2, which requires more computation but does not require the system to be semi-stable and provides a tighter bound. Notice that the bounds of \(\eta\) in Eq. 6 can be estimated if the inverse operator \(\mathcal{L}^{-1}\) is known. Let Eq. 7 be the factorization of characteristic polynomial of \(\mathcal{L}\). Define operator \(\mathcal{I}_{\lambda}\) as 1 Footnote 1: This paper assumes the network solution exactly satisfies the initial conditions as discussed in Section 2.1. However, all our algorithms can be extended to cases where the network solution differs from the exact solution by some value. This is achieved by replacing \(\mathcal{I}_{\lambda}\psi(t)\) with \(\mathcal{I}_{\lambda,\delta}\psi(t)=\mathcal{I}_{\lambda}\psi(t)+\delta e^{ \lambda t}\) in Eq. 9. \[\mathcal{I}_{\lambda}\psi(t):=e^{\lambda t}\int_{0}^{t}e^{-\lambda\tau}\psi(\tau )\mathrm{d}\tau,\quad\forall\psi:I\to\mathbb{C}. \tag{9}\] We show in supplementary material that \(\mathcal{L}^{-1}=\mathcal{I}_{\lambda_{n}}\circ\mathcal{I}_{\lambda_{n-1}}\circ \cdots\circ\mathcal{I}_{\lambda_{1}}\) and that \(|\mathcal{I}_{\lambda}\psi|\;\leq\mathcal{I}_{\mathcal{R}e(\lambda)}|\psi|\) for any \(\lambda\in\mathbb{C}\) and function \(\psi\). Hence, another error bound can be formulated as \[\mathcal{B}_{tight}(t):=\left(\mathcal{I}_{\mathcal{R}e(\lambda_{n})}\circ \cdots\circ\mathcal{I}_{\mathcal{R}e(\lambda_{1})}\right)|r(t)|. 
\tag{10}\] \begin{table} \begin{tabular}{l c c c c} \hline \hline **Algorithm** & **Applicable to** & **Restraint** & **Run-Time** & **Comment** \\ \hline Algorithm 1 & Linear ODE & Semi-stable & \(O(L)\) & Looser than Alg. 2 \\ Algorithm 2 & Linear ODE & & \(O(nL)\) & Tighter than Alg. 1 \\ Algorithm 3 & Linear ODE System & & \(O(n^{3}L)\) & Norm and elementwise bounds \\ Algorithm 4 & Nonlinear ODE & Nonlinear term is \(\varepsilon v^{k}\) & \(O(JnL)\) & Bounded solution for family of DEs \\ Algorithm 5 & Linear 1st-Order PDE & Coeff. \(c\neq 0\) over domain & \(O(\text{mesh})\) & Constant bound; looser than Alg. 6 \\ Algorithm 6 & Linear 1st-Order PDE & Solvable characteristics & \(O(KL)\) & Tighter than Alg. 5 if computable \\ \hline \hline \end{tabular} \end{table} Table 1: **Overview of Proposed Algorithms.** The symbols in the run-time analysis are defined and explained in detail in Sections 5 and 6, with the exception of \(K\), which is the number of steps used in each numerical integration. It is also proven in the supplementary material that \(\mathcal{B}_{tight}\) is tighter than \(\mathcal{B}_{loose}\) when \(\mathcal{B}_{loose}\) is applicable, \[|\eta(t)|\leq\mathcal{B}_{tight}(t)\leq\mathcal{B}_{loose}(t)\quad\forall t\in I. \tag{11}\] Based on Eq. 10, we propose Alg. 2, which computes \(\mathcal{B}_{tight}\) by repeatedly evaluating the integrals in Eq. 9 using the cumulative trapezoidal rule. ``` Input: Coefficients \(\{a_{j}\}_{j=0}^{n-1}\) for operator \(\mathcal{L}\), residual information \(r(\cdot)\), domain of interest \(I=[0,T]\), and a sequence of time points \(\{t_{\ell}\}_{\ell=1}^{L}\) where error bound is to be evaluated Output: Error bound at given time points \(\{\mathcal{B}(t_{\ell})\}_{\ell=1}^{L}\) ```
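For illustration, \(\mathcal{B}_{tight}\) of Eq. 10 can be computed by iterating the operator \(\mathcal{I}_{\mathcal{R}e(\lambda)}\) of Eq. 9 with the cumulative trapezoidal rule, as Alg. 2 does. The following is a minimal sketch assuming NumPy/SciPy, with an illustrative residual (not the paper's reference implementation):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def tight_bound(residual, t, lams):
    """B_tight of Eq. 10: apply I_{Re(lambda_n)} o ... o I_{Re(lambda_1)} to |r|.

    residual: callable r(t); t: increasing grid over [0, T];
    lams: characteristic roots of L, e.g. numpy.roots([1, a_{n-1}, ..., a_0]).
    """
    psi = np.abs(residual(t))
    for lam in lams:
        re = np.real(lam)
        # I_re psi (t) = e^{re t} * int_0^t e^{-re tau} psi(tau) dtau  (Eq. 9)
        integrand = np.exp(-re * t) * psi
        psi = np.exp(re * t) * cumulative_trapezoid(integrand, t, initial=0.0)
    return psi  # pointwise bound on |eta(t)|

# Example: L = d^2/dt^2 + 3 d/dt + 2 (characteristic roots -1, -2),
# with an illustrative residual known on a grid
t = np.linspace(0.0, 1.0, 1001)
lams = np.roots([1.0, 3.0, 2.0])
B = tight_bound(lambda s: 0.01 * np.sin(20 * s), t, lams)
```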
In the meantime, a _norm bound_ (scalar) \(\mathcal{B}(t)\) also exists, \[\|\boldsymbol{\eta}(t)\|\leq\mathcal{B}(t):=\mathrm{cond}(P)\left\|\boldsymbol{\mathcal{I}}\big{[}\|\mathbf{r}\|\mathbf{1}\big{]}(t)\right\| \tag{16}\] where \(\mathrm{cond}(P)\) is the condition number of \(P\) w.r.t. the induced matrix norm and \(\mathbf{1}\) is an \(n\times 1\) column vector of \(1\)s. Proof of Eq. 15 and Eq. 16 can be found in the supplementary material. See Alg. 3 for implementation. **Input:** Coefficient matrix \(A\in\mathbb{R}^{n\times n}\), residual vector \(\mathbf{r}(t)\), and a sequence of points \(\{t_{\ell}\}_{\ell=1}^{L}\) where error is to be bounded **Output:** Norm bound (scalar) \(\{\mathcal{B}(t_{\ell})\}_{\ell=1}^{L}\) and component-wise bound (vector) \(\{\boldsymbol{\mathcal{B}}(t_{\ell})\}_{\ell=1}^{L}\) at given time points **Ensure:** \(\|\eta(t_{\ell})\|\leq\mathcal{B}(t_{\ell})\) and \(\eta(t_{\ell})\preceq\boldsymbol{\mathcal{B}}(t_{\ell})\) for all \(\ell\) \(J,P\leftarrow\) Jordan canonicalization of \(A=PJP^{-1}\) **for** each Jordan block \(J_{k}\) of shape \(n_{k}\times n_{k}\) **do** \(\mathbf{I}_{k}\leftarrow\) construct operator block using Eq. 14 **end for** \(\boldsymbol{\mathcal{I}}\leftarrow\mathrm{diag}(\mathbf{I}_{1},\mathbf{I}_{2},\ldots)\) \(\{\boldsymbol{\mathcal{B}}(t_{\ell})\}_{\ell=1}^{L}\leftarrow\{P^{|\cdot|}\boldsymbol{\mathcal{I}}\big{[}(P^{-1})^{|\cdot|}\mathbf{r}^{|\cdot|}\big{]}(t_{\ell})\}_{\ell=1}^{L}\) \(\{\mathcal{B}(t_{\ell})\}_{\ell=1}^{L}\leftarrow\{\mathrm{cond}(P)\left\|\boldsymbol{\mathcal{I}}\big{[}\|\mathbf{r}\|\mathbf{1}\big{]}(t_{\ell})\right\|\}_{\ell=1}^{L}\) **return** \(\{\mathcal{B}(t_{\ell})\}_{\ell=1}^{L}\), \(\{\boldsymbol{\mathcal{B}}(t_{\ell})\}_{\ell=1}^{L}\) **Algorithm 3** ODE System Bound (norm and elementwise) ### Nonlinear ODE Nonlinear ODEs are hard to solve in general. In this work, we only deal with nonlinear ODEs with a single nonlinear term of the form \(\varepsilon v^{k}(t)\), where \(\varepsilon\in\mathbb{R}\) is a small number. Ideally, \(|\varepsilon|\ll 1\). The exact requirement for \(\varepsilon\) is given in Section 5.2.2. The value of \(\varepsilon\) can vary within a certain range or be fixed. With the perturbation technique, we obtain a family of solutions \(v(t;\varepsilon)\) parameterized by \(\varepsilon\) at the cost of solving a (countable) collection of equations. As explained below in Section 5.2.1, we train finitely many networks, each approximately solving an equation in the collection. #### 5.2.1 Perturbation Theory Consider the nonlinear ODE with nonlinear term \(\varepsilon v^{k}(t)\), \[\mathcal{L}v(t)+\varepsilon v^{k}(t)=f(t), \tag{17}\] where \(\mathcal{L}\) is a linear differential operator discussed in 5.1 and initial conditions are specified for the system at time \(t=0\). Notice that each \(\varepsilon\in\mathbb{R}\) corresponds to a solution \(v(t;\varepsilon)\). We expand the solution \(v(t;\varepsilon)\) in terms of \(\varepsilon\), \[v(t;\varepsilon)=\sum_{j=0}^{\infty}\varepsilon^{j}v_{j}(t)=v_{0}(t)+\varepsilon v_{1}(t)+\ldots \tag{18}\] Only \(v_{0}(t)\) is subject to the original initial conditions at \(t=0\), while the other components, \(v_{1}\), \(v_{2}\), ..., have initial conditions of \(0\) at \(t=0\). Substituting Eq. 18 into Eq. 17, \[\mathcal{L}\sum_{j=0}^{\infty}\varepsilon^{j}v_{j}+\varepsilon\left(\sum_{j=0}^{\infty}\varepsilon^{j}v_{j}\right)^{k}=f \tag{19}\] \[\sum_{j=0}^{\infty}\varepsilon^{j}\mathcal{L}v_{j}+\sum_{j=0}^{\infty}\varepsilon^{j+1}\sum_{\begin{subarray}{c}j_{1}+\cdots+j_{k}=j\\ j_{1},\ldots,j_{k}\geq 0\end{subarray}}v_{j_{1}}\ldots v_{j_{k}}=f\] (20) \[\mathcal{L}v_{0}+\sum_{j=1}^{\infty}\varepsilon^{j}\bigg{(}\mathcal{L}v_{j}+\sum_{\begin{subarray}{c}j_{1}+\cdots+j_{k}=j-1\\ j_{1},\ldots,j_{k}\geq 0\end{subarray}}v_{j_{1}}\ldots v_{j_{k}}\bigg{)}=f. \tag{21}\] In order for Eq. 21 to hold true for all \(\varepsilon\), the coefficients of each \(\varepsilon^{j}\) must match on both sides of Eq. 21. Hence, \[\mathcal{L}v_{0} =f \tag{22}\] \[\mathcal{L}v_{1}+v_{0}^{k} =0\] (23) \[\mathcal{L}v_{2}+kv_{0}^{k-1}v_{1} =0\] (24) \[\mathcal{L}v_{3}+\frac{k(k-1)}{2}v_{0}^{k-2}v_{1}^{2}+kv_{0}^{k-1}v_{2} =0\] (25) \[\vdots\] For \(\varepsilon=0\), Eq. 18 is reduced to \(v_{0}(t)\), which solves the linear problem \(\mathcal{L}v=f\). The above system can be solved in a _sequential_ manner, either analytically or using neural networks: 1. Eq. 22 is linear in \(v_{0}\) and can be solved first. 2. With \(v_{0}\) known, Eq. 23 is linear in \(v_{1}\) and can be solved for \(v_{1}\). 3. Similarly, with \(v_{0}\) and \(v_{1}\) known, Eq. 24 is linear in \(v_{2}\) and can be solved for \(v_{2}\). 4. 
The process can be repeated for Eq. 25 and beyond. Only a linear ODE is solved each time. To solve the system with PINNs, we approximate the exact solutions \(\{v_{j}(t)\}_{j=0}^{\infty}\) with neural network solutions \(\{u_{j}(t)\}_{j=0}^{J}\) trained sequentially on Eq. 22, Eq. 23, and beyond. In practice, we only consider components up to order \(J\) to avoid the infinity in expansion 18. Ideally, \(J\) should be large enough so that higher order residuals in expansion 18 can be neglected. After obtaining \(\{u_{j}(t)\}_{j=0}^{J}\), we can reconstruct the solution \(u(t;\varepsilon)=\sum_{j=0}^{J}\varepsilon^{j}u_{j}(t)\) to the original nonlinear equation 17 for varying \(\varepsilon\). See Alg. 4 for details. #### 5.2.2 Expansion of Bounds The absolute error \(|\eta(t;\varepsilon)|=|u(t;\varepsilon)-v(t;\varepsilon)|\) is given by \[|\eta(t;\varepsilon)|=\left|\sum_{j=0}^{J}\varepsilon^{j}\Big{(}u_{j}(t)-v_{j}(t)\Big{)}-\sum_{j=J+1}^{\infty}\varepsilon^{j}v_{j}(t)\right|\leq\sum_{j=0}^{J}\Big{|}\eta_{j}(t)\Big{|}|\varepsilon|^{j}+\left|\sum_{j=J+1}^{\infty}\varepsilon^{j}v_{j}(t)\right| \tag{26}\] where \(\eta_{j}(t):=u_{j}(t)-v_{j}(t)\) is the _component error_ between \(u_{j}(t)\) and \(v_{j}(t)\). Let \(\mathcal{B}_{j}\) denote the _bound component_ such that \(|\eta_{j}(t)|\leq\mathcal{B}_{j}(t)\). Assuming \(J\) is large and \(\varepsilon\) is small such that the higher order terms \(\left|\sum_{j=J+1}^{\infty}\varepsilon^{j}v_{j}(t)\right|\) are negligible, there is \[\Big{|}\eta(t;\varepsilon)\Big{|}\leq\mathcal{B}(t;\varepsilon):=\sum_{j=0}^{J}\mathcal{B}_{j}(t)\,|\varepsilon|^{j} \tag{27}\] where each bound component \(\mathcal{B}_{j}\) can be evaluated using the technique in Section 5.1. See Alg. 4 for details. **Input:** Linear operator \(\mathcal{L}\), nonlinear degree \(k\), domain \(I=[0,T]\), highest order \(J\) for expansion, and a sequence \(\{(t_{\ell},\varepsilon_{\ell})\}_{\ell=1}^{L}\) where solution \(u(t;\varepsilon)\) and error bound \(\mathcal{B}(t;\varepsilon)\) are to be evaluated **Output:** Solution \(\{u(t_{\ell};\varepsilon_{\ell})\}_{\ell=1}^{L}\) and error bound \(\{\mathcal{B}(t_{\ell};\varepsilon_{\ell})\}_{\ell=1}^{L}\) **Require:** \(t_{\ell}\in I\), and \(|\varepsilon_{\ell}|\) to be small (ideally \(|\varepsilon_{\ell}|\ll 1\)) **Ensure:** \(\eta(t_{\ell};\varepsilon_{\ell})\leq\mathcal{B}(t_{\ell};\varepsilon_{\ell})\) \(u_{0},r_{0}\leftarrow\) net solution, residual of \(\mathcal{L}u_{0}=f\) \(\{\mathcal{B}_{0}(t_{\ell})\}_{\ell=1}^{L}\leftarrow\) bound of \(\left|\mathcal{L}^{-1}r_{0}\right|\) at \(\{t_{\ell}\}_{\ell=1}^{L}\) **for** \(j\gets 1\ldots J\) **do** Macro \(\text{NL}_{j}[\phi]\leftarrow\sum_{\begin{subarray}{c}j_{1}+\cdots+j_{k}=j-1\\ j_{1},\ldots,j_{k}\geq 0\end{subarray}}\phi_{j_{1}}\ldots\phi_{j_{k}}\) \(u_{j},r_{j}\leftarrow\) net solution, residual of \(\mathcal{L}u_{j}+\text{NL}_{j}[u]=0\) \(\mathcal{B}_{\text{NL}}\leftarrow\) upper bound of \(\left|\text{NL}_{j}[u]-\text{NL}_{j}[v]\right|\) \(\{\mathcal{B}_{j}(t_{\ell})\}_{\ell=1}^{L}\leftarrow\) bound of \(\left|\mathcal{L}^{-1}r_{j}\right|+\left|\mathcal{L}^{-1}\mathcal{B}_{\text{NL}}\right|\) **end for** \(\{u(t_{\ell};\varepsilon_{\ell})\}_{\ell=1}^{L}\leftarrow\big{\{}\sum_{j=0}^{J}\varepsilon_{\ell}^{j}u_{j}(t_{\ell})\big{\}}_{\ell=1}^{L}\) \(\{\mathcal{B}(t_{\ell};\varepsilon_{\ell})\}_{\ell=1}^{L}\leftarrow\big{\{}\sum_{j=0}^{J}|\varepsilon_{\ell}|^{j}\mathcal{B}_{j}(t_{\ell})\big{\}}_{\ell=1}^{L}\) **return** \(\{u(t_{\ell};\varepsilon_{\ell})\}_{\ell=1}^{L},\{\mathcal{B}(t_{\ell};\varepsilon_{\ell})\}_{\ell=1}^{L}\) **Note 1**: \(\mathcal{B}_{0}\) and \(\mathcal{B}_{1:J}\) can be evaluated using Alg. 1 or 2. **Note 2**: \(\mathcal{B}_{\text{NL}}\) can be estimated even though the exact solutions \(v_{0:j-1}\) are unknown. This is because \(v_{i}\in[u_{i}-\mathcal{B}_{i},u_{i}+\mathcal{B}_{i}]\) for all \(i\), and \(u_{0:j-1}\), \(\mathcal{B}_{0:j-1}\) are known from previous iterations.
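The final reconstruction lines of Alg. 4 are a plain polynomial combination in \(\varepsilon\) (Eq. 27). A minimal sketch, assuming the components \(u_{j}\) and \(\mathcal{B}_{j}\) have already been evaluated on a shared time grid (illustrative array layout, not the paper's code):

```python
import numpy as np

def reconstruct(u_components, B_components, eps):
    """u(t; eps) = sum_j eps^j u_j(t) and B(t; eps) = sum_j |eps|^j B_j(t).

    u_components, B_components: arrays of shape (J + 1, L), the values
    of u_j and B_j on a shared grid of L time points; eps: scalar.
    """
    J = u_components.shape[0] - 1
    powers = eps ** np.arange(J + 1)            # eps^0, ..., eps^J
    abs_powers = np.abs(eps) ** np.arange(J + 1)
    u = powers @ u_components                   # solution for this eps
    B = abs_powers @ B_components               # error bound for this eps
    return u, B
```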
## 6 Error Bound for PDE This section considers first-order linear PDEs defined on a 2-dimensional spatial domain \(\Omega\),2 Footnote 2: Similar techniques can be used for other classes of PDEs and higher dimensions where the method of characteristics applies. \[a(x,y)\frac{\partial v}{\partial x}+b(x,y)\frac{\partial v}{\partial y}+c(x,y)v=f(x,y) \tag{28}\] with Dirichlet boundary constraints defined on \(\Gamma\subset\partial\Omega\), \[v\big{|}_{(x,y)\in\Gamma}=g(x,y). \tag{29}\] We partition the domain into infinitely many characteristic curves \(\mathcal{C}\), each passing through a point \((x_{0},y_{0})\in\Gamma\). The resulting curve is a parameterized integral curve \[\mathcal{C}:\begin{cases}x^{\prime}(s)=a(x,y)\\ y^{\prime}(s)=b(x,y)\end{cases}\quad\text{where }(\cdot)^{\prime}=\frac{\text{d}}{\text{d}s}\text{ and }\begin{array}{c}x(0)=x_{0}\\ y(0)=y_{0}.\end{array}\] For any \((x(s),y(s))\) on \(\mathcal{C}\), the functions \((v,a,b,c,f)\) can be viewed as univariate functions of \(s\). By the chain rule, there is \[a(x,y)\frac{\partial v}{\partial x}+b(x,y)\frac{\partial v}{\partial y}=x^{\prime}(s)\frac{\partial v}{\partial x}+y^{\prime}(s)\frac{\partial v}{\partial y}=v^{\prime}(s).\] Hence, Eq. 28 is reformulated as an ODE along curve \(\mathcal{C}\), \[v^{\prime}(s)+c(s)v(s)=f(s)\quad\text{s.t. }v(0)=g(x_{0},y_{0}), \tag{30}\] where \(v(s)\), \(c(s)\), and \(f(s)\) are shorthand notations for \(v(x(s),y(s))\), \(c(x(s),y(s))\), and \(f(x(s),y(s))\), respectively. In particular, if \(c(x,y)\neq 0\) for all \((x,y)\in\Omega\), both sides of Eq. 28 can be divided by \(c(x,y)\), resulting in a residual of \(r(x,y)/c(x,y)\) where \(r(x,y)\) is the residual of the original problem. By Eq. 8, a constant error bound on \(\mathcal{C}\) is \(|\eta(s)|\leq\max_{s}|r(s)/c(s)|\). Hence, a (loose) constant error bound \(B\) (see Alg. 5) over the entire domain \(\Omega\) is \[|\eta(x,y)|\leq B:=\max_{(x,y)\in\Omega}\left|\frac{r(x,y)}{c(x,y)}\right|. \tag{31}\] **Algorithm 5** Constant Err Bound for Linear 1st-Order PDE **Input:** Coefficient \(c(x,y)\) in Eq. 28, residual information \(r(x,y)\) and domain of interest \(\Omega\) **Output:** A constant error bound \(B\in\mathbb{R}^{+}\) **Require:** \(c(x,y)\neq 0\) for all \((x,y)\in\Omega\) **Ensure:** \(|\eta(x,y)|\leq B\) for all \((x,y)\in\Omega\) \(\{(x_{k},y_{k})\}_{k}\leftarrow\) sufficiently dense mesh grid over \(\Omega\) \(B\leftarrow\max_{k}\left|\frac{r(x_{k},y_{k})}{c(x_{k},y_{k})}\right|\) **return \(B\)** **Algorithm 6** Error Bound for Linear 1st-Order PDE along Characteristic Curves
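Along a characteristic, the error of the network solution satisfies the one-dimensional ODE \(\eta^{\prime}(s)+c(s)\eta(s)=r(s)\) with \(\eta(0)=0\) (by Eq. 30 and the exactly satisfied boundary conditions), so \(|\eta(s)|\leq\int_{0}^{s}e^{-\int_{\tau}^{s}c(\sigma)\mathrm{d}\sigma}\,|r(\tau)|\,\mathrm{d}\tau\). A minimal sketch of evaluating such a bound along one curve, assuming SciPy and illustrative field names and tolerances (not the paper's reference code):

```python
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

def bound_along_characteristic(x0, y0, a, b, c, r, s_max=3.0, n=400):
    # Integrate the characteristic ODE of Eq. 28 through (x0, y0) in Gamma
    sol = solve_ivp(lambda s, p: [a(p[0], p[1]), b(p[0], p[1])],
                    (0.0, s_max), [x0, y0], dense_output=True,
                    rtol=1e-8, atol=1e-10)
    s = np.linspace(0.0, s_max, n)
    xs, ys = sol.sol(s)
    c_s, r_abs = c(xs, ys), np.abs(r(xs, ys))
    C = cumulative_trapezoid(c_s, s, initial=0.0)          # C(s) = int_0^s c
    # |eta(s)| <= exp(-C(s)) * int_0^s exp(C(tau)) |r(tau)| dtau
    inner = cumulative_trapezoid(np.exp(C) * r_abs, s, initial=0.0)
    return xs, ys, np.exp(-C) * inner
```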
## 7 Relevant Experiments In this section, we perform experiments on equations with manufactured solutions using the _NeuroDiffEq_ library [Chen et al., 2020], which provides convenient tools for training PINNs. First, we train networks to solve equations and collect their residual information \(r\). Then, we apply Alg. 1-6 (where applicable) to derive error bounds using only the residual information \(r\) and the equation structure, characterized by its differential operator \(\mathcal{D}\). Lastly, we show that the absolute error strictly falls within the bounds, regardless of how well the networks are trained. Throughout this section, we always use networks with two hidden layers, each consisting of 32 hidden units. Depending on whether the problem is an ODE or PDE, a network can have a single input \(t\) or two inputs \((x,y)\), but it always has a single output. The activation function is \(\tanh\). Unless otherwise noted, the training domain is \(I=[0,1]\) for ODEs and \(\Omega=[0,1]^{2}\) for PDEs. We use a _PyTorch_ Adam optimizer with default hyperparameters to train networks for 1000 epochs. Notice that we list these configurations only for the reproducibility of visualizations. Our error-bounding algorithms work under any other configuration. ### Single Linear ODE with Constant Coefficients Here, we study three equations \(v^{\prime\prime}+3v^{\prime}+2v=f(t)\), \(v^{\prime\prime}+v=g(t)\), and \(v^{\prime\prime}-v^{\prime}=h(t)\), whose characteristic roots are \(\{-1,-2\}\), \(\{\pm i\}\), and \(\{0,1\}\) respectively. By Section 5.1.1, the first two equations can be bounded with either Alg. 1 or Alg. 2, while the last must be bounded with Alg. 2. They all satisfy initial conditions \(v(0)=v^{\prime}(0)=1\). We pick \(f(t)=2t^{2}+8t+7\), \(g(t)=t^{2}+t+3\), and \(h(t)=1-2t\), so that the manufactured solution is \(v(t)=t^{2}+t+1\) for all three equations. Fig. 1 shows that both \(\mathcal{B}_{loose}\) (Alg. 1) and \(\mathcal{B}_{tight}\) (Alg. 2) strictly bound the absolute error. ### Linear ODE System with Constant Coefficients In this subsection, we train \(6\) networks to solve a \(6\)-dimensional linear system of ODEs with constant coefficients, namely, \(\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{v}+A\mathbf{v}=\mathbf{f}\). We pick \(A=PJP^{-1}\), where \(J=\begin{pmatrix}J_{1}&&\\ &J_{2}&\\ &&J_{3}\end{pmatrix}\) with \(J_{1}=\begin{pmatrix}4&1&\\ &4&1\\ &&4\end{pmatrix}\), \(J_{2}=\begin{pmatrix}3&1\\ &3\end{pmatrix}\), \(J_{3}=2\), and \(P\) is a random orthogonal matrix. We pick the initial conditions to be \(\mathbf{v}(0)=P(0\,0\,1\,0\,1\,1)^{T}\) and the forcing function to be \(\mathbf{f}(t)=P(\cos t+4\sin t+\ln(1+t),\,\frac{1}{1+t}+4\ln(1+t)+(t+1),\,4t+5,\,2t+3t^{2}+e^{t},\,4e^{t},\,2\cos t-\sin t)^{T}\), so that the manufactured exact solution is \(\mathbf{v}(t)=P(\sin t,\,\ln(t+1),\,t+1,\,t^{2},\,e^{t},\,\cos t)^{T}\). After obtaining the residual information \(\mathbf{r}(t)=\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{u}(t)+A\mathbf{u}(t)-\mathbf{f}(t)\), we apply Alg. 3 to obtain the componentwise bound and the norm bound of \(\boldsymbol{\eta}=\mathbf{u}-\mathbf{v}\). It is shown in Fig. 2 that the bounds hold over the domain.

Figure 1: Loose bound (Alg. 1) and tight bound (Alg. 2) for 3 second-order linear ODEs with constant coefficients. Notice that the loose bound cannot be applied to the third equation since it has a characteristic root with positive real part.

Figure 2: _Componentwise_ bound (upper) and _norm_ bound (lower) for the linear ODE system with constant coefficients.

### Nonlinear ODE - Duffing Equation In this subsection, we consider a Duffing oscillator, which is characterized by the following \(2\)nd order nonlinear ODE: \[\frac{\mathrm{d}^{2}v}{\mathrm{d}t^{2}}+3\frac{\mathrm{d}v}{\mathrm{d}t}+2v+\varepsilon v^{3}=\cos t, \tag{32}\] under initial conditions \(v(0)=1\) and \(v^{\prime}(0)=1\), where \(\varepsilon\) controls the nonlinearity of the equation.
Using Alg. 4, we solve the equation on \(I=[0,2]\) for linspaced \(\varepsilon\in(-0.9,0.9)\) using neural networks and bound the errors. The input \(J\) to Alg. 4 is chosen to be \(6\); namely, we expand the solution and bound components from degree \(0\) to \(6\). The analytical solution to Eq. 32 is complicated. Hence, we use the RKF4(5) method to compute numerical solutions that are close enough to the exact solutions for visualization purposes. See Fig. 3 for network solutions against RKF4(5) solutions and Fig. 4 for error bounds against absolute error.

Figure 3: RKF4(5) and network solutions (max-degree 0\(\sim\)6) to Duffing equation 32 for \(\varepsilon\in(-0.9,0.9)\).

Figure 4: True error vs. error bound (max-degree 0\(\sim\)6) of the neural network solution to Duffing equation 32 for \(\varepsilon\in(-0.9,0.9)\).

### Linear PDE System with Nonconstant Coefficients #### 7.4.1 PDE Error Bound Evaluation Using Alg. 6 We try to solve the following first-order linear PDE, \[(-x-y)\frac{\partial v}{\partial x}+(x-y)\frac{\partial v}{\partial y}+v=3x-2y \tag{33}\] in the spatial domain \(\Omega=[-1,1]^{2}\). The boundary constraints are \(v(x,\pm 1)=2x\pm 3\) and \(v(\pm 1,y)=3y\pm 2\). The manufactured solution is given by \(v(x,y)=2x+3y\). The characteristic curves are integral curves \(\mathcal{C}:\begin{cases}x^{\prime}(s)=-x-y\\ y^{\prime}(s)=x-y\end{cases}\), or \[\mathcal{C}:\begin{cases}x(s)=R_{0}e^{-s}\cos(s+\theta_{0})\\ y(s)=R_{0}e^{-s}\sin(s+\theta_{0})\end{cases}\text{, where }R_{0}=\sqrt{x_{0}^{2}+y_{0}^{2}}\] and \(\theta_{0}=\mathrm{atan2}(y_{0},x_{0})\) are constants determined by the starting point \((x_{0},y_{0})\in\Gamma=\partial\Omega\). See Figure 5 for visualization. Since the analytical expression of the characteristic curves is known, Alg. 6 can be applied to evaluate the bound on each curve. We choose \(16\) characteristic curves with starting points \(A\), \(B\), ..., \(P\), equidistantly placed on the boundary (Fig. 5a). We plot the absolute error and the computed error bound along these characteristic curves in Fig. 6. It can be seen that the absolute error lies strictly within the bounds.

Figure 5: Characteristic curves of Eq. 33 (left) and Eq. 34 (right). The red curves, with starting points \(A\) to \(P\), are selected for visualization of absolute error and error bound in Fig. 6.

#### 7.4.2 PDE Error Bound Evaluation Using Alg. 5 Consider the following PDE \[(x^{2}+y^{2}+1)\frac{\partial v}{\partial x}+(x^{2}-y^{2}+2)\frac{\partial v}{\partial y}+(3-2x)v=f \tag{34}\] over the domain \(\Omega=[-1,1]^{2}\), where \(f(x,y)=6-4x\). The boundary constraints are \(v(-1,y)=2\) and \(v(x,1)=2\), and the manufactured solution is \(v(x,y)=2\). The characteristic curves \(\mathcal{C}:\begin{cases}x^{\prime}(s)=x^{2}+y^{2}+1\\ y^{\prime}(s)=x^{2}-y^{2}+2\end{cases}\) are given by a nonlinear ODE, which is hard to solve analytically (see Fig. 5b for visualization). Therefore, Alg. 6 cannot be applied to evaluate the error bound. However, the coefficient \((3-2x)\) is nonzero over the domain \(\Omega\). Hence, we can use Alg. 5 to compute a constant error bound \(|\eta(x,y)|\leq\mathcal{B}(x,y)\equiv B\) for all \((x,y)\in\Omega\). We visualize the bound and the maximum absolute error \(\max_{(x,y)\in\Omega}|\eta|\) after each training epoch in Fig. 7. As expected, the bound is loose: it is about an order of magnitude larger than the max absolute error. Yet, it consistently holds true for every epoch, even during the early stages of training, when the network performs poorly. 
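For reference, the constant bound of Alg. 5 used in this experiment can be computed directly from the trained network with automatic differentiation. A minimal sketch, assuming PyTorch and illustrative callables for the coefficients (not the _NeuroDiffEq_ API):

```python
import torch

def constant_bound(net, a, b, c, f, n=200):
    # Alg. 5: B = max over a dense mesh of |r(x, y) / c(x, y)|
    xs = torch.linspace(-1.0, 1.0, n)
    X, Y = torch.meshgrid(xs, xs, indexing="ij")
    x = X.reshape(-1, 1).requires_grad_(True)
    y = Y.reshape(-1, 1).requires_grad_(True)
    u = net(torch.cat([x, y], dim=1))
    ones = torch.ones_like(u)
    u_x = torch.autograd.grad(u, x, grad_outputs=ones, retain_graph=True)[0]
    u_y = torch.autograd.grad(u, y, grad_outputs=ones)[0]
    # Residual of Eq. 34: r = a u_x + b u_y + c u - f
    r = a(x, y) * u_x + b(x, y) * u_y + c(x, y) * u - f(x, y)
    return (r / c(x, y)).abs().max().item()
```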
## 8 Conclusion and Future Work This paper proposes various error-bounding algorithms for any PINN solution to certain classes of ODEs and PDEs. These algorithms only require the residual information \(r(\cdot)\) and the equation structure \(\mathcal{D}v=f\) as input. There are many real-world applications for which the exact solution \(v(\cdot)\) is unknown or hard to compute. However, the residual information \(r(\cdot)\) is usually, if not always, readily available. With our proposed algorithms, PINNs can be trained until the error is guaranteed to fall below a specified tolerance threshold. The mathematical relationship between residual and error bound also sheds light on optimizing PINN solutions for future studies. The error-bounding algorithms proposed in this paper only apply to certain classes of ODEs and PDEs. However, the insights of this paper can be beneficial to future work that extends to more general classes of ODEs and PDEs, especially nonlinear ones. We also plan to apply these algorithms to stochastic differential equations, where the error bound is a probabilistic tail bound.
2310.10791
Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs
Neural tangent kernels (NTKs) provide a theoretical regime to analyze the learning and generalization behavior of over-parametrized neural networks. For a supervised learning task, the association between the eigenvectors of the NTK kernel and given data (a concept referred to as alignment in this paper) can govern the rate of convergence of gradient descent, as well as generalization to unseen data. Building upon this concept, we investigate NTKs and alignment in the context of graph neural networks (GNNs), where our analysis reveals that optimizing alignment translates to optimizing the graph representation or the graph shift operator in a GNN. Our results further establish the theoretical guarantees on the optimality of the alignment for a two-layer GNN and these guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data. The theoretical insights drawn from the analysis of NTKs are validated by our experiments focused on a multi-variate time series prediction task for a publicly available dataset. Specifically, they demonstrate that GNNs with cross-covariance as the graph shift operator indeed outperform those that operate on the covariance matrix from only the input data.
Shervin Khalafi, Saurabh Sihag, Alejandro Ribeiro
2023-10-16T19:54:21Z
http://arxiv.org/abs/2310.10791v1
# Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs ###### Abstract Neural tangent kernels (NTKs) provide a theoretical regime to analyze the learning and generalization behavior of over-parametrized neural networks. For a supervised learning task, the association between the eigenvectors of the NTK kernel and given data (a concept referred to as alignment in this paper) can govern the rate of convergence of gradient descent, as well as generalization to unseen data. Building upon this concept, we investigate NTKs and alignment in the context of graph neural networks (GNNs), where our analysis reveals that optimizing alignment translates to optimizing the graph representation or the graph shift operator in a GNN. Our results further establish the theoretical guarantees on the optimality of the alignment for a two-layer GNN and these guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data. The theoretical insights drawn from the analysis of NTKs are validated by our experiments focused on a multi-variate time series prediction task for a publicly available dataset. Specifically, they demonstrate that GNNs with cross-covariance as the graph shift operator indeed outperform those that operate on the covariance matrix from only the input data. ## 1 Introduction The remarkable success of deep learning frameworks for numerous inference tasks is well established LeCun et al. (2015). Motivated by the practical implications of the gaps between the empirical observations and theoretical foundations of deep learning, many recent works have explored various approaches to rigorously understand the theory of deep learning models. Multi-layer neural networks have been analyzed extensively in the mean-field regime Mei et al. (2018, 2019); Sirignano & Spiliopoulos (2020). The random features model has also been studied to capture the effects of the regime of parameterization and to study phenomena such as generalization and "double descent" (see e.g., Mei & Montanari (2022); Lin & Dobriban (2021); Adlam & Pennington (2020)). Among such approaches, the NTKs, first introduced in Jacot et al. (2018), have commonly been used to study the behavior of over-parameterized neural networks Cao et al. (2021); Bietti & Mairal (2019), and are informally defined next. **Neural Tangent Kernel.** For any given predictor \(f(\mathbf{x};\mathbf{w}):\mathbb{R}^{n\times 1}\times\mathbb{R}^{p}\to\mathbb{R}\), the NTK is the kernel matrix \(\Theta\) defined by the gradient of the predictor output, \(f(\mathbf{x};\mathbf{w})\), with respect to its learnable parameters, \(\mathbf{w}\), as \[\Theta_{(\mathbf{x}_{i},\mathbf{x}_{j})}(\mathbf{w}):=\langle\nabla_{\mathbf{w}}f(\mathbf{x}_{i};\mathbf{w}),\nabla_{\mathbf{w}}f(\mathbf{x}_{j};\mathbf{w})\rangle\, \tag{1}\] where \(f(\mathbf{x};\mathbf{w})\) represents the predictor output for input data point \(\mathbf{x}\in\mathbb{R}^{n\times 1}\) with the learnable parameters represented by \(\mathbf{w}\in\mathbb{R}^{p}\). The typical setting to study NTKs is that of neural networks in the asymptote of infinite width, where the NTK is constant with respect to the learnable parameters during training, in contrast to the finite-width scenario Jacot et al. (2018). This constancy of the NTK is a result of certain neural networks transitioning to linearity as their width goes to infinity Liu et al. (2020). 
NTKs have been leveraged to gain theoretical insights on the behavior of neural networks, such as over-parameterized neural networks achieving zero training loss over a non-convex loss landscape Du et al. (2018), the spectral bias of neural networks Cao et al. (2021) and the inductive biases of different neural network architectures Bietti & Mairal (2019). In particular, the eigenspectrum of the NTK kernel has been linked with the convergence rate of gradient descent for an over-parameterized deep learning model Liu et al. (2022); Arora et al. (2019); Wang et al. (2022a). For instance, gradient descent can achieve faster convergence for a supervised learning problem if the vector of output labels, \(\mathbf{y}\), aligns well with the dominant eigenvectors of the NTK matrix \(\Theta\) Arora et al. (2019); Wang et al. (2022a). For the regression problem pertaining to predicting \(\mathbf{y}\) from \(\mathbf{x}\), our analysis in Section 2 demonstrates that \[\text{Convergence of gradient descent}\propto\mathbf{y}^{\mathsf{T}}\Theta\mathbf{y} \tag{2}\] By leveraging the observation above as a motivation, we define \(\mathbf{y}^{\mathsf{T}}\Theta\mathbf{y}\) as Alignment \(\mathcal{A}\). **GNNs and NTKs.** Given that the NTK \(\Theta\) depends on the input data \(\mathbf{x}\) (see equation 1), the alignment \(\mathcal{A}\) inherently captures some version of covariance between the output \(\mathbf{y}\) and the input \(\mathbf{x}\). Thus, if the NTK \(\Theta\) is 'structured' for a given predictor \(f\), the alignment \(\mathcal{A}\) could be leveraged to provide further insights into the design of the predictor \(f\). GNNs are an example of a class of predictors for which the NTK is a function of the graph representation or the graph shift operator (GSO) and the input data \(\mathbf{x}\) Krishnagopal & Ruiz (2023). GNNs operating on covariance matrices derived from the input data have been studied previously in Sihag et al. (2022), albeit without any consideration of the insights that could be drawn from the NTKs regarding the choice of the graph derived from the data for a supervised learning problem. Many of the existing works that analyze NTKs for GNNs focus on explaining empirically observed trends for GNNs (see Appendix B for an expanded literature review). In this paper, we leverage the structure of NTKs for GNNs to theoretically motivate the choice of a particular GSO. Specifically, if the NTK \(\Theta\) for a GNN is considered to be a function of the form \(\Theta(S,\mathbf{x})\) for a GSO \(S\), the alignment can be represented as \(\mathcal{A}(S,\mathbf{x},\mathbf{y})\), i.e., as a function of the input data \(\mathbf{x}\), the output data \(\mathbf{y}\) and \(S\). It is then apparent that optimizing the alignment \(\mathcal{A}\) for a GNN can inform the choice of the GSO \(S\) for a given dataset. A key observation made in this paper is that the alignment \(\mathcal{A}\) is characterized by the cross-covariance between the input and the output and, as a result, the optimal GSO for statistical inference is closely related to the cross-covariance. **Contributions.** In this paper, we consider the setting where the predictor \(f\) is a GNN with a graph filter as the convolution operator Gama et al. (2020). Our theoretical contributions in this context are summarized next. * Our analysis of alignment \(\mathcal{A}\) with a graph filter as the predictor motivates the cross-covariance between the input and output data as the graph. 
More precisely, we pose an optimization problem with alignment \(\mathcal{A}\) as the objective function and demonstrate that using the cross-covariance as the GSO maximizes a lower bound on this objective. * We further extend the results from the graph filter to the scenario of a two-layer GNN as the predictor. Our results show that under certain assumptions, the cross-covariance between the input and the output optimizes a lower bound on the alignment for the GNN with the \(\tanh\) activation function. Thus, our analysis motivates using cross-covariance based graphs for a GNN as well. We validated the insights drawn from our theoretical results via experiments on the publicly available resting state functional magnetic resonance imaging (rfMRI) data from the Human Connectome Project-Young Adult (HCP-YA) dataset Van Essen et al. (2012). In particular, we considered the task of time series prediction and observed that the GNNs that operated on the cross-covariance between the input and output data achieved better convergence and generalization than those that used the covariance matrix only from the input data. ## 2 Alignment and Convergence of Gradient Descent In this section, we formalize the concept of alignment \(\mathcal{A}\) and demonstrate its relationship with the convergence of gradient descent for a regression problem. Consider a dataset \(\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{M}\), where \(\mathbf{x}_{i}\in\mathbb{R}^{n\times 1}\), \(\mathbf{y}_{i}\in\mathbb{R}^{n\times 1}\). We aim to leverage the inputs \(\mathbf{x}_{i}\) to estimate the outputs \(\mathbf{y}_{i}\) using a predictor denoted by \(\mathbf{f}:\mathbb{R}^{n\times 1}\times\mathbb{R}^{p}\rightarrow\mathbb{R}^{n}\). We use the notation \(\mathbf{h}\in\mathbb{R}^{p}\) to denote the vector of all learnable parameters of the predictor. To emphasize the dependence of the predictor \(\mathbf{f}\) on the parameters \(\mathbf{h}\), we use the notation \(\mathbf{f}_{\mathbf{x}_{i}}(\mathbf{h})\) for \(\mathbf{f}(\mathbf{x}_{i},\mathbf{h})\) subsequently. Also, the parameters are initialized randomly from a Gaussian distribution \(\mathbf{h}^{(0)}\sim\mathcal{N}(0,\kappa^{2}I)\), where the constant \(\kappa\) controls the magnitude of the initialized parameters. The objective is to minimize the mean squared error (MSE) loss function, defined as \[\Phi(\mathbf{h})\triangleq\min_{\mathbf{h}\in\mathbb{R}^{p}}\frac{1}{2}\sum_{i=1}^{M} ||\mathbf{y}_{i}-\mathbf{f}_{\mathbf{x}_{i}}(\mathbf{h})||_{2}^{2}. \tag{3}\] For this purpose, we consider a gradient descent based optimization framework with a learning rate \(\eta>0\). The evolution of the predictor output for a single input \(\mathbf{x}_{i}\) is given by \[\mathbf{f}_{\mathbf{x}_{i}}(\mathbf{h}^{(t+1)})=\mathbf{f}_{\mathbf{x}_{i}}(\mathbf{h}^{(t)}- \eta\cdot\nabla\Phi(\mathbf{h}^{(t)})) \tag{4}\] where \(t\) denotes the \(t\)-th step or epoch of gradient descent. 
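A minimal sketch of the training loop in equations 3 and 4 for the simplest case of a linear predictor \(\mathbf{f}_{\mathbf{x}_{i}}(\mathbf{h})=A_{i}\mathbf{h}\); the feature matrices \(A_{i}\), the dimensions, and the learning rate below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, M, kappa, eta = 4, 12, 30, 0.1, 1e-4

feats = [rng.normal(size=(n, p)) for _ in range(M)]   # A_i, so f_{x_i}(h) = A_i h
ys = [rng.normal(size=n) for _ in range(M)]
h = kappa * rng.normal(size=p)                        # h^(0) ~ N(0, kappa^2 I)

def loss(h):
    # MSE objective Phi(h) of equation 3.
    return 0.5 * sum(np.sum((y - A @ h) ** 2) for A, y in zip(feats, ys))

for t in range(2000):                                 # equation 4: h <- h - eta * grad Phi
    grad = sum(A.T @ (A @ h - y) for A, y in zip(feats, ys))
    h -= eta * grad

print(loss(h))
```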
To characterize the evolution of the predictor output over the entire dataset, we provide the following definitions \[\tilde{\mathbf{f}}_{\mathbf{x}}(\mathbf{h})\triangleq\left[\left[\mathbf{f}_{\mathbf{x}_ {1}}(\mathbf{h})\right]^{\mathsf{T}},\,\left[\mathbf{f}_{\mathbf{x}_{2}}(\mathbf{h}) \right]^{\mathsf{T}},\cdots,\,\left[\mathbf{f}_{\mathbf{x}_{M}}(\mathbf{h})\right]^{ \mathsf{T}}\right]^{\mathsf{T}}, \tag{5}\] \[\tilde{\mathbf{x}}\triangleq\left[\mathbf{x}_{1}^{\mathsf{T}},\,\,\mathbf{x}_ {2}^{\mathsf{T}},\,\cdots,\,\mathbf{x}_{M}^{\mathsf{T}}\right]^{\mathsf{T}},\, \,\,\tilde{\mathbf{y}}\triangleq\left[\mathbf{y}_{1}^{\mathsf{T}},\,\,\mathbf{ y}_{2}^{\mathsf{T}},\,\cdots,\,\mathbf{y}_{M}^{\mathsf{T}}\right]^{\mathsf{T}} \tag{6}\] where \(\tilde{\mathbf{f}}_{\mathbf{x}}(\mathbf{h}),\tilde{\mathbf{x}}\), and \(\tilde{\mathbf{y}}\) are vectors of length \(nM\). We also define the NTK matrix \(\tilde{\mathbf{\Theta}}(\mathbf{h})\in\mathbb{R}^{nM\times nM}\), which consists of \(M^{2}\) number of \(n\times n\) blocks, such that, the \((i,j)\)-th block is the matrix \(\Theta(\mathbf{x}_{i},\mathbf{x}_{j})\in\mathbb{R}^{n\times n}\) and is given by \[\Theta(\mathbf{x}_{i},\mathbf{x}_{j})\triangleq\mathrm{J}_{\mathbf{f}_{\mathbf{x} _{i}}}(\mathbf{h}^{(t)})\big{(}\mathrm{J}_{\mathbf{f}_{\mathbf{x}_{j}}}(\mathbf{h}^{(t)}) \big{)}^{\mathsf{T}}. \tag{7}\] In equation 7, \(\mathrm{J}_{\mathbf{f}_{\mathbf{x}_{i}}}(\mathbf{h})\) denotes the Jacobian matrix with its \((a,b)\)-th entry being \((\mathrm{J}_{\mathbf{f}_{\mathbf{x}_{i}}}(\mathbf{h}))_{ab}=\frac{\partial(\mathbf{f}_{ \mathbf{x}_{i}}(\mathbf{h}))_{a}}{\partial h_{b}}\). If the step size \(\eta\) from equation 4 is sufficiently small, the function \(\mathbf{f}_{\mathbf{x}_{i}}(\mathbf{h}^{(t)})\) can be linearized at each step. In this scenario, the linearized version of the evolution of the predictor output in equation 4 is \[\tilde{\mathbf{f}}_{\mathbf{x}}(\mathbf{h}^{(t+1)})=\tilde{\mathbf{f}}_{\mathbf{x}}(\mathbf{h} ^{(t)})-\eta\cdot\tilde{\mathbf{\Theta}}(\mathbf{h}^{(t)})\cdot(\tilde{\mathbf{f}}_{ \mathbf{x}}(\mathbf{h}^{(t)})-\tilde{\mathbf{y}}). \tag{8}\] A typical setting of interest in the existing literature is that of the NTK \(\tilde{\mathbf{\Theta}}(\mathbf{h}^{(t)})\) being a constant with respect to \(\mathbf{h}^{(t)}\). This is because the NTK converges to a constant for many neural networks in the infinite width limit Liu et al. (2020). Theorem 1 characterizes the convergence of gradient descent for the considered multivariate regression problem in this setting (also see Arora et al. (2019), Wang et al. (2022a)). The NTK that is constant with respect to \(\mathbf{h}^{(t)}\) is denoted by \(\tilde{\mathbf{\Theta}}\). **Theorem 1**.: _In the multivariate regression setting, as described in the beginning of section 2, if the NTK \(\tilde{\mathbf{\Theta}}(\mathbf{h}^{(t)})\) is constant during training and \(\kappa=\mathcal{O}(\varepsilon\sqrt{\frac{\delta}{nM}})\), then with probability at least \(1-\delta\), the training error after \(t\) steps of gradient descent is bounded as_ \[\tilde{\mathbf{y}}^{\mathsf{T}}\left(I-2t\eta\cdot\tilde{\mathbf{\Theta}}\right) \tilde{\mathbf{y}}\pm\mathcal{O}(\varepsilon)\leq||\tilde{\mathbf{f}}_{\mathbf{x} }(\mathbf{h}^{(t)})-\tilde{\mathbf{y}}||_{2}^{2}\leq\tilde{\mathbf{y}}^{\mathsf{T} }\left(I-\eta\cdot\tilde{\mathbf{\Theta}}\right)\tilde{\mathbf{y}}\pm\mathcal{O}(\varepsilon)\] **Remark 1**.: _In this paper, we primarily consider two classes of predictors. 
The first class is that of a linear predictor, for which the NTK is a constant given the definition in equation 7. The second class of predictors is that of infinitely wide neural networks (GNNs in particular). We refer the reader to Appendix F and Liu et al. (2020) for a detailed discussion of when and why the NTK can be a constant for neural networks._ Since the term \(\tilde{\mathbf{y}}^{\mathsf{T}}\tilde{\mathbf{\Theta}}\tilde{\mathbf{y}}\) characterizes the upper and lower bounds, the loss \(||\tilde{\mathbf{f}}_{\mathbf{x}}(\mathbf{h}^{(t)})-\tilde{\mathbf{y}}||_{2}^{2}\) is governed by this term. Based on Theorem 1, we formalize the alignment in Definition 1. A similar definition can be found in Wang et al. (2022a) in the context of active learning. **Definition 1** (Alignment).: _The alignment between the output \(\tilde{\mathbf{y}}\) and NTK \(\tilde{\mathbf{\Theta}}\) is defined as_ \[\mathcal{A}\triangleq\tilde{\mathbf{y}}^{\mathsf{T}}\tilde{\mathbf{\Theta}}\tilde{ \mathbf{y}}\] The alignment \(\mathcal{A}\) can be perceived as a metric of correlation between the output data and the NTK, and is a characteristic of learning with gradient descent. Using Definition 1, Theorem 1 can be restated as \[\tilde{\mathbf{y}}^{\mathsf{T}}\tilde{\mathbf{y}}-2t\eta\cdot\mathcal{A}\pm \mathcal{O}(\varepsilon)\leq||\tilde{\mathbf{f}}_{\mathbf{x}}(\mathbf{h}^{(t)})- \tilde{\mathbf{y}}||_{2}^{2}\leq\tilde{\mathbf{y}}^{\mathsf{T}}\tilde{\mathbf{ y}}-\eta\cdot\mathcal{A}\pm\mathcal{O}(\varepsilon) \tag{9}\] Equation (9) shows that the convergence of gradient descent is positively correlated with \(\mathcal{A}\). Recall that the NTK \(\tilde{\mathbf{\Theta}}\) is a function of the input data \(\tilde{\mathbf{x}}\) and the learning model \(\mathbf{f}\), even when constant with respect to \(\mathbf{h}^{(t)}\). Therefore, maximizing \(\mathcal{A}\) is contingent on maximizing some kind of cross-covariance between the output data \(\tilde{\mathbf{y}}\) and a function of the input data \(\tilde{\mathbf{x}}\), where the function depends on the learning model \(\mathbf{f}\). This observation motivates us to study the setting where the predictor \(\mathbf{f}\) is a GNN, as a GNN architecture can provide appropriate structure to analyze the connection between alignment, cross-covariance, and the structure of the network. ## 3 Alignment motivates Cross-Covariance in GNNs In this paper, we consider the GNNs for which the convolution operation is a graph filter. A graph filter is characterized by a linear-shift-and-sum operation on the input data and is representative of a large family of convolution operations in GNNs (see the section 'implementation of GCNNs' from Gama et al. (2020)). We begin with the setting where \(\mathbf{f}_{\mathbf{x}}(\mathbf{h})\) is a graph filter. ### NTK and alignment for Graph Filter Model The formal definition for a graph filter is provided in Definition 2, following Gama et al. (2020). **Definition 2** (Graph Filter).: _Consider a symmetric GSO \(S\in\mathbb{R}^{n\times n}\). 
A graph filter processes an input \(\mathbf{x}\in\mathbb{R}^{n}\) via a linear-shift-and-sum operation characterized by \(S\), such that its output is_ \[\mathbf{f}_{\mathbf{x}}(\mathbf{h})=\sum_{k=0}^{K-1}h_{k}S^{k}\mathbf{x}=H(S)\mathbf{ x}\,\quad\text{where}\quad H(S)\triangleq\sum_{k=0}^{K-1}h_{k}S^{k}\, \tag{10}\] _and \(\mathbf{h}=\{h_{0},h_{1},\cdots,h_{K-1}\}\) is the set of scalars, also referred to as the filter taps or coefficients._ Recall from equation 7 that \(\tilde{\mathbf{\Theta}}(\mathbf{h}^{(t)})\) is a function of the Jacobian matrix \(\mathbf{J}_{\mathbf{f}_{\mathbf{x}_{i}}}(\mathbf{h}^{(t)})\), given by \[\mathbf{J}_{\mathbf{f}_{\mathbf{x}_{i}}}(\mathbf{h}^{(t)})=\left[\mathbf{x}_{i}|S \mathbf{x}_{i}|S^{2}\mathbf{x}_{i}|\cdots|S^{K-1}\mathbf{x}_{i}\right]. \tag{11}\] Using equation 11, for any pair of input vectors \((\mathbf{x}_{i},\mathbf{x}_{j})\), the \((i,j)\)-th block of the NTK \(\tilde{\mathbf{\Theta}}(\mathbf{h}^{(t)})\) for a graph filter is given by \(\Theta_{(\mathbf{x}_{i},\mathbf{x}_{j})}(\mathbf{h}^{(t)})=\sum_{k=0}^{K-1}S^{k} \mathbf{x}_{i}(S^{k}\mathbf{x}_{j})^{\mathsf{T}}\). Since the graph filter is a linear model, \(\Theta_{(\mathbf{x}_{i},\mathbf{x}_{j})}(\mathbf{h}^{(t)})\) is independent of \(\mathbf{h}^{(t)}\). Consequently, the NTK \(\Theta_{(\mathbf{x}_{i},\mathbf{x}_{j})}(\mathbf{h}^{(t)})\) for a graph filter is a constant with respect to \(\mathbf{h}^{(t)}\). Next, we provide the NTK for a graph filter. **Proposition 1** (NTK for a graph filter).: _The NTK for a graph filter is given by_ \[\tilde{\mathbf{\Theta}}_{\text{filt}}(\mathbf{h}^{(t)})=\sum_{k=0}^{K-1}\tilde{S}^{k} \tilde{\mathbf{x}}\tilde{\mathbf{x}}^{\mathsf{T}}\tilde{S}^{k}\, \tag{12}\] _where \(\tilde{S}\in\mathbb{R}^{nM\times nM}\) is a block diagonal matrix consisting of \(M\) blocks of matrix \(S\) on the diagonal and zeros everywhere else._ Given equation 12, we further investigate the impact of the shift operator \(S\) on the alignment \(\mathcal{A}\). We also define the data matrices \(X\) and \(Y\), where the \(i\)-th column of \(X\) is \(\mathbf{x}_{i}\) and the \(i\)-th column of \(Y\) is \(\mathbf{y}_{i}\). From equation 12, note that the NTK is independent of the filter coefficients \(\mathbf{h}\). As a consequence, \(\mathcal{A}\) for a graph filter (denoted by \(\mathcal{A}_{\text{filt}}\)) depends on the shift operator \(S\) and dataset \((X,Y)\) as follows \[\mathcal{A}_{\text{filt}}(S,X,Y)=\tilde{\mathbf{y}}^{\mathsf{T}}\left(\sum_{k=0} ^{K-1}\tilde{S}^{k}\tilde{\mathbf{x}}\tilde{\mathbf{x}}^{\mathsf{T}}\tilde{S}^ {k}\right)\tilde{\mathbf{y}}=\sum_{k=0}^{K-1}\left(\tilde{\mathbf{y}}^{\mathsf{ T}}\tilde{S}^{k}\tilde{\mathbf{x}}\right)^{2}=\sum_{k=0}^{K-1}\left(\mathbf{ tr}(Y^{\mathsf{T}}S^{k}X)\right)^{2} \tag{13}\] The equivalence between different terms in equation 13 follows from the symmetry of \(\tilde{S}\) and the fact that \(\tilde{\mathbf{y}}^{\mathsf{T}}\tilde{S}^{k}\tilde{\mathbf{x}}\) is a scalar. Since a larger \(\mathcal{A}_{\text{filt}}\) is correlated with faster convergence of gradient descent (see equation 9), we further investigate whether the alignment \(\mathcal{A}_{\text{filt}}\) can be optimized by appropriate selection of the shift operator matrix \(S\). The objective to optimize \(\mathcal{A}_{\text{filt}}\) can be stated as follows. \[S^{*}=\operatorname*{arg\,max}_{S}\;\sum_{k=0}^{K-1}\left(\tilde{\mathbf{y}}^{ \mathsf{T}}\tilde{S}^{k}\tilde{\mathbf{x}}\right)^{2}\;\;\text{s.t.}\;\;\eta\cdot||\tilde{\mathbf{\Theta}}_{\text{filt}}||_{\text{op}}<\alpha\;. 
\tag{14}\] The constraint \(\eta\cdot||\tilde{\mathbf{\Theta}}_{\text{filt}}||_{\text{op}}<\alpha\), for some \(\alpha>0\), in equation 14 is necessary to ensure the convergence of gradient descent. This constraint also eliminates trivial solutions (such as multiplying a given \(S\) with an arbitrarily large positive constant to inflate \(\mathcal{A}\) in isolation). The optimization problem in equation 14, while meaningful, can be analytically intractable due to the polynomial functions of \(\tilde{S}\) and the non-convexity of both the objective function and the constraint. In order to provide an analytically tractable characterization of \(S\), we consider the lower bound on \(\mathcal{A}\) next. **Lemma 1**.: _[Lower bound on \(\mathcal{A}_{\text{filt}}\).] The alignment \(\mathcal{A}_{\text{filt}}\) satisfies \(\mathcal{A}_{\text{filt}}(S,X,Y)\geq\mathcal{A}_{\text{L}}(S,X,Y)\), where_ \[\mathcal{A}_{\text{L}}(S,X,Y)\triangleq\left(\frac{1}{\sqrt{K}}\boldsymbol{tr }\Big{(}\Big{(}\sum_{k=0}^{K-1}S^{k}\Big{)}C_{XY}\Big{)}\right)^{2},\quad\text {and}\quad C_{XY}\triangleq\frac{1}{2}(XY^{\mathsf{T}}+YX^{\mathsf{T}})\;. \tag{15}\] Henceforth, we focus on characterizing \(S\) that maximizes \(\mathcal{A}_{\text{L}}(S,X,Y)\). Our experiments in Section 4 also demonstrate that the insights drawn from optimizing \(\mathcal{A}_{\text{L}}\) are practically meaningful. Next, we provide a constraint that depends on the choice of GSO and not on the input data. **Lemma 2**.: _If the degree-\((K-1)\) polynomial in the shift operator \(S\) has a bounded Frobenius norm, the operator norm of the NTK matrix is also bounded as follows:_ \[\left\|\sum_{k=0}^{K-1}S^{k}\right\|_{F}\leq\sqrt{\alpha/(\eta M)}\;\;\Rightarrow \;\;\eta\cdot\left\|\sum_{k=0}^{K-1}\tilde{S}^{k}\tilde{\mathbf{x}}\tilde{ \mathbf{x}}^{\mathsf{T}}\tilde{S}^{k}\right\|_{\text{op}}\leq\alpha \tag{16}\] The constraint on the left in equation 16 is more straightforward to work with in the analysis since it only depends on \(S\), while also ensuring that the constraint in equation 14 is satisfied. Putting together \(\mathcal{A}_{\text{L}}(S,X,Y)\) and the revised constraint, we get the following _optimization problem_. \[S^{*}=\operatorname*{arg\,max}_{S}\;\mathcal{A}_{\text{L}}(S,X,Y)\;\;\text{s. t.}\;\left\|\sum_{k=0}^{K-1}S^{k}\right\|_{F}\leq\sqrt{\alpha/(\eta M)}\;. \tag{17}\] **Theorem 2** (GSO in graph filter.).: _A GSO \(S^{*}\) that satisfies_ \[\sum_{k=0}^{K-1}(S^{*})^{k}=\mu\cdot C_{XY}\;,\quad\text{where}\quad\mu=\frac {\sqrt{\alpha/(\eta M)}}{||C_{XY}||_{F}}\;. \tag{18}\] _is the solution to the optimization problem in equation 17._ Theorem 2 clearly demonstrates the association between the optimal GSO that optimizes \(\mathcal{A}_{\text{L}}(S,X,Y)\) and \(C_{XY}\), which is a measure of cross-covariance. For instance, if \(K=2\), then it can be concluded from equation 18 that \[I+S^{*}=\mu\cdot C_{XY}\;\Rightarrow\;S^{*}=\mu\cdot C_{XY}-I \tag{19}\] The observation in equation 19 motivates the potential choice of a normalized cross-covariance matrix as a GSO when a graph filter is deployed as the predictor \(\boldsymbol{f}_{\mathbf{x}}(\boldsymbol{h})\). 
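The following sketch evaluates \(\mathcal{A}_{\text{filt}}\) from equation 13 and forms the \(K=2\) construction of equation 19, comparing it against a covariance-based GSO scaled to the same Frobenius budget of Lemma 2; the sizes, the random data, and the covariance baseline are illustrative choices made here, not part of the analysis above.

```python
import numpy as np

rng = np.random.default_rng(2)
n, M, K, alpha, eta = 6, 40, 2, 1.0, 1e-3
X, Y = rng.normal(size=(n, M)), rng.normal(size=(n, M))   # columns are x_i, y_i

def alignment_filt(S):
    # A_filt(S, X, Y) = sum_k tr(Y^T S^k X)^2, equation 13.
    return sum(np.trace(Y.T @ np.linalg.matrix_power(S, k) @ X) ** 2
               for k in range(K))

budget = np.sqrt(alpha / (eta * M))                       # Frobenius bound of Lemma 2
C_xy = 0.5 * (X @ Y.T + Y @ X.T)                          # cross-covariance, equation 15
S_star = (budget / np.linalg.norm(C_xy, "fro")) * C_xy - np.eye(n)   # equation 19

C_xx = X @ X.T / M                                        # input-only covariance baseline
S_cov = (budget / np.linalg.norm(C_xx, "fro")) * C_xx - np.eye(n)    # same norm budget

print(alignment_filt(S_star), alignment_filt(S_cov))
```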
Next, we discuss how this observation extends to the setting where \(\boldsymbol{f}_{\mathbf{x}}(\boldsymbol{h})\) is a GNN. ### NTK and Alignment for GNN To start with, we formalize the GNN architecture that is the focus of our analysis. The ability to learn non-linear mappings by GNNs is fundamentally based on concatenating an element-wise non-linearity with a graph filter to form a graph perceptron, which is realized via a point-wise non-linearity \(\sigma(\cdot)\) as \(\sigma(H(S)\mathbf{x})\). In the remainder of this paper, we will focus on a two-layer GNN that admits a single input feature \(\mathbf{x}\in\mathbb{R}^{n}\) and whose output is a vector of length \(n\), as dictated by the problem definition in Section 2. The general definition for a GNN, along with additional experimental results for GNNs with depth larger than two layers, have been provided in Appendix H. **Two-Layer GNN Architecture** (see Fig. 3). In the first layer, the input vector \(\mathbf{x}\in\mathbb{R}^{n}\) is processed by \(F\) graph perceptrons to output \(F\) \(n\)-dimensional outputs given by \(\mathbf{q}_{(1)}^{f},\forall f\in\{1,\cdots,F\}\), as follows \[\mathbf{u}_{(1)}^{f}=H_{(1)}^{f}(S)\mathbf{x}=\sum_{k=0}^{K-1}h_{(1),k}^{f}S^{ k}\mathbf{x},\forall f\in\{1,\ldots,F\};\quad\text{and}\quad\mathbf{q}_{(1)}^{f}= \sigma\Big{(}\mathbf{u}_{(1)}^{f}\Big{)}. \tag{20}\] In the second layer, each of the outputs of the previous layer, \(\mathbf{q}_{(1)}^{f}\), is processed by a graph filter as \[\mathbf{u}_{(2)}^{f}=H_{(2)}^{f}(S)\mathbf{q}_{(1)}^{f}=\sum_{k=0}^{K-1}h_{(2),k}^{f}S^{k}\mathbf{q}_{(1)}^{f},\forall f\in\{1,\ldots,F\}. \tag{21}\] Finally, the terms \(\mathbf{u}_{(2)}^{f}\) are aggregated to get the output at the second layer (also the GNN output) as \[\mathbf{f}_{\mathbf{x}}(\mathbf{h})=\frac{1}{\sqrt{F}}\sum_{f=1}^{F}\mathbf{u}_{(2)} ^{f} \tag{22}\] The absence of a non-linearity in the final layer (equation 22) is necessary for having a constant NTK in the infinite width limit (see Liu et al. (2020)). **Proposition 2** (NTK for a two-layer GNN).: _The NTK for the two-layer GNN is given by_ \[\tilde{\mathbf{\Theta}}_{GNN}(\mathbf{h})=\frac{1}{F}\sum_{f=1}^{F}\sum_{k=0}^{K-1} \left(\mathbf{c}_{f,k}^{(1)}\right)\left(\mathbf{c}_{f,k}^{(1)}\right)^{ \mathsf{T}}+\frac{1}{F}\sum_{f=1}^{F}\sum_{k=0}^{K-1}\left(\mathbf{c}_{f,k}^{ (2)}\right)\left(\mathbf{c}_{f,k}^{(2)}\right)^{\mathsf{T}} \tag{23}\] \[\text{where}\quad\mathbf{c}_{f,k}^{(1)}\triangleq H_{f}(\tilde{S})\,\text{diag }\big{(}\sigma^{\prime}(G_{f}(\tilde{S})\tilde{\mathbf{x}})\big{)}\tilde{S}^{k}\tilde{ \mathbf{x}},\quad\text{and}\quad\mathbf{c}_{f,k}^{(2)}\triangleq\tilde{S}^{k }\sigma\left(G_{f}(\tilde{S})\tilde{\mathbf{x}}\right). \tag{24}\] _In equation 24, \(\mathbf{c}_{f,k}^{(\ell)}\in\mathbb{R}^{nM\times 1}\) is the vector determined by picking out the column that pertains to the derivative of the network output with respect to the parameter indexed by \((f,k,\ell)\), namely, the \(k\)-th coefficient of the \(f\)-th filter in layer \(\ell\), from each of the Jacobian matrices \(J_{\mathbf{f}_{\mathbf{x}_{i}}},\forall i\in\{1,\cdots,M\}\), and stacking all these vectors together._ The NTK in equation 23 is an aggregation of two terms, where the first term is associated with the first layer and the second term with the second layer. It follows from Definition 1 and equation 23 that the alignment for a two-layer GNN is also composed of two terms that represent the two layers. Henceforth, we focus on the results pertaining to the second term in equation 23. This implies that our results correspond to a two-layer GNN where only the parameters of the second layer are trained and the parameters of the first layer are fixed. The analysis (and results) when the first layer is also trained is similar and has been relegated to Appendix G. 
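A minimal forward pass of the two-layer GNN in equations 20-22, with \(\tanh\) as \(\sigma(\cdot)\); the graph, filter taps, and dimensions below are random placeholders used only to make the computation concrete.

```python
import numpy as np

def gnn_forward(x, S, H1, H2):
    # Two-layer GNN of equations 20-22: F graph perceptrons followed by a
    # linear (filter-only) output layer. H1 and H2 hold the taps h_{(l),k}^f,
    # one row of K coefficients per feature f.
    F, K = H1.shape
    shifts = [x]
    for _ in range(K - 1):
        shifts.append(S @ shifts[-1])
    shifts = np.stack(shifts)                                # S^k x, k = 0..K-1
    out = np.zeros_like(x)
    for f in range(F):
        q = np.tanh(np.tensordot(H1[f], shifts, 1))          # perceptron, equation 20
        qs = [q]
        for _ in range(K - 1):
            qs.append(S @ qs[-1])
        out = out + np.tensordot(H2[f], np.stack(qs), 1)     # second layer, equation 21
    return out / np.sqrt(F)                                  # aggregation, equation 22

rng = np.random.default_rng(3)
n, K, F = 6, 3, 64
S = rng.normal(size=(n, n)); S = (S + S.T) / 2               # symmetric GSO
x = rng.normal(size=n)
print(gnn_forward(x, S, rng.normal(size=(F, K)), rng.normal(size=(F, K))))
```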
In the subsequent discussions, the notation \(\tilde{\mathbf{\Theta}}_{GNN}\) denotes the second term in equation 23 when the width of the hidden layer approaches infinity, i.e., \(F\rightarrow\infty\). Therefore, \(\tilde{\mathbf{\Theta}}_{GNN}\) is given by \[\tilde{\mathbf{\Theta}}_{GNN}(\mathbf{h}) =\lim_{F\rightarrow\infty}\frac{1}{F}\sum_{f=1}^{F}\sum_{k=0}^{K-1 }\left(\tilde{S}^{k}\sigma\left(G_{f}(\tilde{S})\tilde{\mathbf{x}}\right) \right)\left(\tilde{S}^{k}\sigma\left(G_{f}(\tilde{S})\tilde{\mathbf{x}} \right)\right)^{\mathsf{T}}=\sum_{k=0}^{K-1}\tilde{S}^{k}\mathop{\mathbb{E}}_{\mathbf{g}\sim \mathcal{N}(0,I)}\left[\sigma\left(G(\tilde{S})\tilde{\mathbf{x}}\right)\left( \sigma\left(G(\tilde{S})\tilde{\mathbf{x}}\right)\right)^{\mathsf{T}}\right] \tilde{S}^{k}=\sum_{k=0}^{K-1}\tilde{S}^{k}E\tilde{S}^{k} \tag{25}\] The expectation matrix \(E\triangleq\mathop{\mathbb{E}}_{\mathbf{g}\sim\mathcal{N}(0,I)}\left[\sigma\left(G (\tilde{S})\tilde{\mathbf{x}}\right)\left(\sigma\left(G(\tilde{S})\tilde{ \mathbf{x}}\right)\right)^{\mathsf{T}}\right]\) is instrumental for the analysis of the alignment. Before proceeding, we provide the following remark pertinent to the analysis. **Remark 2**.: _As a byproduct of the output layer being linear, the NTK \(\tilde{\mathbf{\Theta}}_{GNN}\) in equation 25 does not depend on the parameters of the second layer, i.e., \(\mathbf{h}_{f},\forall f\in\{1,\cdots,F\}\). Hence, the NTK in equation 25 could be considered a constant if only the second layer of the GNN is trained. For completeness, our discussion in Appendix F demonstrates further that as \(F\to\infty\), the NTK in equation 23 also approaches a constant behavior._ From equation 25, the alignment can be written in terms of \(E\) as follows \[\mathcal{A}=\tilde{\mathbf{y}}^{\mathsf{T}}\tilde{\mathbf{\Theta}}_{GNN} \tilde{\mathbf{y}}=\tilde{\mathbf{y}}^{\mathsf{T}}\Big{(}\sum_{k=0}^{K-1} \tilde{S}^{k}E\tilde{S}^{k}\Big{)}\tilde{\mathbf{y}}=\mathbf{tr}(QE)\, \tag{26}\] where we have defined the matrix \(Q\) as \(Q\triangleq\sum_{k=0}^{K-1}\tilde{S}^{k}\tilde{\mathbf{y}}\tilde{\mathbf{y}}^ {\mathsf{T}}\tilde{S}^{k}\). Above, we used the cyclic property of the trace and the fact that \(\tilde{\mathbf{y}}^{\mathsf{T}}\tilde{\mathbf{\Theta}}_{GNN}\tilde{\mathbf{y}}\) is a scalar. In order to evaluate \(E\), we define the vectors \(\mathbf{z}^{(\ell)}\in\mathbb{R}^{K\times 1}\) as \(\mathbf{z}^{(\ell)}\triangleq\Big{[}\tilde{\mathbf{x}}_{\ell},\ (\tilde{S}\tilde{ \mathbf{x}})_{\ell},\ \cdots,\ (\tilde{S}^{K-1}\tilde{\mathbf{x}})_{\ell}\Big{]}^{ \mathsf{T}}\), where \(\ell\in\{1,\cdots,nM\}\) and \((\tilde{S}^{k}\tilde{\mathbf{x}})_{\ell}\) denotes the \(\ell\)-th entry of the vector \(\tilde{S}^{k}\tilde{\mathbf{x}}\). Thus, the \((a,b)\)-th entry of \(E\), i.e., \(E_{ab}\), from equation 25 is \[E_{ab}=\underset{\mathbf{g}\sim\mathcal{N}(0,I)}{\mathbb{E}}\Big{[}\sigma\Big{(} \langle\mathbf{g},\mathbf{z}^{(a)}\rangle\Big{)}\cdot\sigma\Big{(}\langle\mathbf{g},\mathbf{z }^{(b)}\rangle\Big{)}\Big{]}. \tag{27}\] **Linear GNNs.** We next discuss the scenario when the function \(\sigma(\cdot)\) is an identity function, i.e., \(\sigma(z)=z\). The results drawn from this setting will be leveraged later in the setting where \(\sigma(\cdot)\) is not an identity function. 
When \(\sigma(z)=z\), equation 27 reduces to \[E_{ab}=\underset{\mathbf{g}\sim\mathcal{N}(0,I)}{\mathbb{E}}\Big{[}\langle\mathbf{g}, \mathbf{z}^{(a)}\rangle\cdot\langle\mathbf{g},\mathbf{z}^{(b)}\rangle\Big{]}=\langle\mathbf{z }^{(a)},\mathbf{z}^{(b)}\rangle \tag{28}\] We denote the matrix \(E\) in this linear setting by \(B_{\text{lin}}\), which is given by \[(B_{\text{lin}})_{ab}\triangleq\langle\mathbf{z}^{(a)},\mathbf{z}^{(b)}\rangle\Rightarrow B _{\text{lin}}=\sum_{k=0}^{K-1}\tilde{S}^{k}\tilde{\mathbf{x}}\tilde{\mathbf{ x}}^{\mathsf{T}}\tilde{S}^{k} \tag{29}\] Thus, the alignment in the linear setting is given by \[\mathcal{A}_{\text{lin}}\triangleq\mathbf{tr}(QB_{\text{lin}})=\sum_{k=0}^{K -1}\tilde{\mathbf{y}}^{\mathsf{T}}\tilde{S}^{k}B_{\text{lin}}\tilde{S}^{k}\tilde{\mathbf{y}}=\sum_{k =0}^{K-1}\sum_{k^{\prime}=0}^{K-1}\tilde{\mathbf{y}}^{\mathsf{T}}\tilde{S}^{k+k^{ \prime}}\tilde{\mathbf{x}}\tilde{\mathbf{x}}^{\mathsf{T}}\tilde{S}^{k+k^{\prime}}\tilde{ \mathbf{y}} \tag{30}\] The analysis of the alignment \(\mathcal{A}_{\text{lin}}\) in equation 30, using similar arguments as those for a graph filter in Section 3.1, yields a similar condition on the GSO \(S\) as in Theorem 2. The corollaries provided next formalize this observation. First, the following corollary provides a lower bound on \(\mathcal{A}_{\text{lin}}\). **Corollary 1**.: _[Lower bound on \(\mathcal{A}_{\text{lin}}\).] The term \(\mathcal{A}_{\text{lin}}=\mathbf{tr}(QB_{\text{lin}})\) satisfies \(\mathcal{A}_{\text{lin}}\geq\mathcal{A}_{\mathsf{L}^{\prime}}(S,X,Y)\),_ \[\text{where}\quad\mathcal{A}_{\mathsf{L}^{\prime}}(S,X,Y)\triangleq\Big{(} \frac{1}{K}\mathbf{tr}\Big{(}\Big{(}\sum_{k=0}^{K-1}\sum_{k^{\prime}= 0}^{K-1}S^{k+k^{\prime}}\Big{)}C_{XY}\Big{)}\Big{)}^{2}\, \tag{31}\] Next, we present an _optimization problem_ similar to the one for the graph filter in equation 17. \[S^{*}=\underset{S}{\arg\max}\ \mathcal{A}_{\mathsf{L}^{\prime}}(S,X,Y)\ \textbf{ s.t. }\ \ \left\|\sum_{k=0}^{K-1}\sum_{k^{\prime}=0}^{K-1}S^{k+k^{\prime}}\right\|_{F}\leq \sqrt{\alpha/(\eta M)}. \tag{32}\] The solution to the optimization problem in equation 32 is presented next. **Corollary 2**.: _[Extension of Theorem 2 to the linear GNN.] The GSO \(S^{*}\) that solves the optimization problem in equation 32 must satisfy_ \[\sum_{k=0}^{K-1}\sum_{k^{\prime}=0}^{K-1}(S^{*})^{k+k^{\prime}}=\mu\cdot C_{XY}\,\quad \text{where}\quad\mu=\frac{\sqrt{\alpha/(\eta M)}}{||C_{XY}||_{F}}. \tag{33}\] Corollary 2 establishes that the cross-covariance \(C_{XY}\) is instrumental to optimizing \(\mathcal{A}_{\mathsf{L}^{\prime}}\) for the considered two-layer GNN architecture when \(\sigma(\cdot)\) is an identity function. In general, this observation holds for linear GNNs of any arbitrary depth. **GNNs with non-linear activation function.** Next, we investigate the conditions under which the observation in Corollary 2 extends to a more general setting, in which \(\sigma(\cdot)\) is not an identity function. We will focus our theoretical analysis on the case where \(\sigma(z)=\text{tanh}(z)\), and from here on \(\mathcal{A}\) will denote the alignment for this case. The experimental results (see Appendix H) show that, in practice, similar results hold for some other activation functions like ReLU. First, we evaluate the expectation in equation 25. 
By leveraging the theory of Hermite polynomials1, the Hermite expansions of \(\sigma\left(\langle\mathbf{g},\mathbf{z}^{(a)}\rangle\right)\) and \(\sigma\left(\langle\mathbf{g},\mathbf{z}^{(b)}\rangle\right)\) enable the expansion of \(E\) and, subsequently, of \(\mathcal{A}\). These expansions are formalized next. Footnote 1: See the proof of Lemma 3 for an overview of the Hermite polynomials and how we utilized the Hermite expansion. **Lemma 3** (Expansion of \(E\) and \(\mathcal{A}\)).: _The Hermite expansion of \(E\) is given by \(E=B+\Delta B\), where \(B\in\mathbb{R}^{nM\times nM}\) represents the first non-zero term in the expansion and \(\Delta B\in\mathbb{R}^{nM\times nM}\) includes all the subsequent terms. For the \((a,b)\)-th element of \(B\) and \(\Delta B\), we have_ \[B_{ab}=\alpha_{1}\beta_{1}\cdot\frac{\langle\mathbf{z}^{(a)},\mathbf{z}^{(b)}\rangle} {||\mathbf{z}^{(a)}||_{2}\cdot||\mathbf{z}^{(b)}||_{2}},\ \text{ and }\ (\Delta B)_{ab}=\sum_{i=3,5,\cdots}^{\infty}\alpha_{i}\beta_{i}\cdot\left( \frac{\langle\mathbf{z}^{(a)},\mathbf{z}^{(b)}\rangle}{||\mathbf{z}^{(a)}||_{2}\cdot||\bm {z}^{(b)}||_{2}}\right)^{i}. \tag{34}\] _Hence, the alignment \(\mathcal{A}\) in equation 26 admits the expansion_ \[\mathcal{A}=\mathbf{tr}(QE)=\mathbf{tr}(QB)+\mathbf{tr}(Q\Delta B). \tag{35}\] The scalar coefficients \(\alpha_{i},\beta_{i}\) in equation 34 depend on \(||\mathbf{z}^{(a)}||_{2}\) and \(||\mathbf{z}^{(b)}||_{2}\), respectively, as well as on the choice of \(\sigma(\cdot)\). The decomposition of the alignment into two terms in equation 35 is useful because the second term can be shown to be relatively small and, using the observation that in the linear case \(E_{ab}=\langle\mathbf{z}^{(a)},\mathbf{z}^{(b)}\rangle\), \(\mathbf{tr}(QB)\) can be related to \(\mathcal{A}_{\text{lin}}\). We next provide two lemmas relevant to this. **Lemma 4**.: _Given a family of matrices \(S\in\mathbb{S}^{n\times n}\) that have a bounded norm, \(||S||_{\text{op}}\leq\nu\), we have_ \[\mathbf{tr}(QB)\geq\rho\mathcal{A}_{\text{lin}} \tag{36}\] _where \(\rho\) is a constant that depends on the choice of the non-linearity \(\sigma(\cdot)\)._ The next lemma shows that the elements of the matrix \(\Delta B\) are small compared to the corresponding elements in \(B\), which implies that the second term in equation 35 cannot decrease \(\mathcal{A}\) by too much. **Lemma 5**.: _Each element of \(\Delta B\) has the same sign as the corresponding element in \(B\). Also, the following element-wise inequality holds between the two matrices:_ \[|\Delta B|\leq\beta\cdot|B| \tag{37}\] _where \(\beta\) is a constant that depends on our choice of non-linearity and is specified in the proof._ Putting the above two lemmas together, we reach the following conclusion about the alignment of the two-layer GNN with \(\sigma(\cdot)\) as \(\tanh\). 
**Theorem 3**.: _Given a family of matrices \(S\in\mathbb{S}^{n\times n}\) that have a bounded norm, \(||S||_{\text{op}}\leq\nu\), and that satisfy \(\mathcal{A}_{\text{lin}}=\mathbf{tr}\left(QB_{\text{lin}}\right)\geq\xi\cdot||Q|| _{F}||B_{\text{lin}}||_{F}\) for some constant \(0<\xi\leq 1\), \(\mathcal{A}_{\text{lin}}\) lower bounds the alignment for the two-layer GNN with \(\tanh\) non-linearity, \(\mathcal{A}\), up to a constant as follows_ \[\mathcal{A}\geq\Big{(}c-\frac{d}{\xi}\Big{)}\mathcal{A}_{\text{lin}}\, \tag{38}\] _for some positive constants \(c\) and \(d\)._ **Remark 3** (GSO in GNNs).: _Our key takeaway from Theorem 3 is that when maximizing \(\mathcal{A}_{\text{lin}}\) over the family of shift operators that satisfy the assumption with a sufficiently large \(\xi\), such that \(c-d/\xi\) is a positive constant, we are essentially maximizing a lower bound on the alignment \(\mathcal{A}\) of the two-layer GNN. This observation, together with Corollary 2, motivates using the cross-covariance matrix \(C_{XY}\) as a GSO for the two-layer GNN._ **Alignment, the NTK, and Generalization.** Thus far, we have provided the theoretical results motivated by the fact that larger alignment can imply faster convergence of gradient descent during training. However, the alignment and NTK are also closely related to generalization. Specifically, the analyses pertaining to generalization from Arora et al. (2019) and Wang et al. (2022a) can be extended to the case of graph filters, which leads to the conclusion that larger alignment could also lead to smaller generalization error. Hence, the results on improved training and generalization together motivate models with larger alignment in practice. The analysis regarding generalization has been provided in Appendix E. ## 4 Experiments In this section, we provide the experiments that validate the theoretical insights pertaining to the cross-covariance matrix being an optimal GSO for GNN training and generalization, as compared to GSOs derived only from the input data, for a regression task. The dataset and inference task for this purpose are described below. **Data.** The HCP-YA dataset is a publicly available brain imaging dataset collected over a population of 1003 healthy adults in the age range of 22-35 years Van Essen et al. (2012, 2013). In our experiments, we leveraged the rfMRI data for each subject made available by HCP. This data consisted of a multi-variate time series of \(100\) features, with each time series consisting of 4500 time points. **Inference task.** Noting that the \(100\) features could be considered as \(100\) nodes of a graph, our objective was to use the data at all nodes at the current time step for an individual to predict the data at all nodes at a future time step. Specifically, given the signal value at time step \(t\), i.e., \(\mathbf{z}^{(t)}\in\mathbb{R}^{100}\), for an individual, we aimed to predict the signal value after \(\Delta t\) time steps, i.e., \(\mathbf{z}^{(t+\Delta t)}\in\mathbb{R}^{100}\). For every \(\Delta t\in\{1,2,3,4,5\}\), a separate training/test set of size \(N_{train}=1000,\ N_{test}=100\) was created, such that for the signal at a time point \(t\), i.e., \(\mathbf{z}^{(t)}\), as the input, the signal after \(\Delta t\) time steps, \(\mathbf{z}^{(t+\Delta t)}\), was the output to be predicted. For additional implementation details, see Appendix H. Figure 1: Experimental results. 
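As an illustration of this pipeline, the hedged sketch below forms the training pairs, builds a normalized cross-covariance GSO, and fits a graph filter by gradient descent; a synthetic random-walk series stands in for the rfMRI data, and all sizes and hyperparameters are placeholders rather than the values used in Appendix H.

```python
import numpy as np

rng = np.random.default_rng(4)
n, T, K, dt = 100, 1200, 3, 1
Z = 0.01 * rng.normal(size=(n, T)).cumsum(axis=1)     # stand-in for an rfMRI series
X, Y = Z[:, :-dt], Z[:, dt:]                          # inputs z^(t), targets z^(t+dt)

C_xy = 0.5 * (X @ Y.T + Y @ X.T)
S = C_xy / np.linalg.norm(C_xy, 2)                    # normalized cross-covariance GSO

h = 0.01 * rng.normal(size=K)                         # graph filter taps
shifts = np.stack([np.linalg.matrix_power(S, k) @ X for k in range(K)])

for _ in range(500):                                  # gradient descent on the MSE
    resid = np.tensordot(h, shifts, 1) - Y            # H(S) x_i - y_i for all pairs
    grad = np.array([2 * np.mean(shifts[k] * resid) for k in range(K)])
    h -= 0.5 * grad

print(np.mean(resid ** 2))
```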
**Performance evaluation.** We trained two sets of GNNs and two sets of graph filters using the time series data of each individual for a given \(\Delta t\), where one set comprised predictors with \(C_{XY}\) as the GSO and the other with \(C_{XX}\) as the GSO. The GNNs with \(C_{XX}\) as the GSO have been studied before as VNNs in Sihag et al. (2022) and provide an appropriate baseline for comparison, as they are representative of GNNs with GSOs extracted only from the input data. Figures 1a and 1b illustrate faster convergence of both training loss and test loss during gradient descent for predictors with \(C_{XY}\) as compared to those with \(C_{XX}\) for one representative individual when \(\Delta t=1\). This observation was consistent for graph filters and GNNs. For each individual, the training process for every architecture was repeated 10 times. The average of these runs is shown in Figures 1a and 1b. Further, we checked whether these observations were consistent across the dataset and for different \(\Delta t\). Figure 1c illustrates the gap between the test errors of predictors with \(C_{XY}\) and those with \(C_{XX}\) for different values of \(\Delta t\), averaged across all individuals. Even as the accuracy of prediction diminished with increasing \(\Delta t\), we observed a consistent gain in test performance when using \(C_{XY}\) as compared to \(C_{XX}\). Similarly, Fig. 1d shows that predictors with \(C_{XY}\) achieved smaller training error relative to those with \(C_{XX}\) at each epoch of gradient descent, averaged across the dataset. Thus, the observations in Fig. 1 validate the theoretical insights drawn from the analysis that argued for \(C_{XY}\) as an appropriate GSO for GNNs that can achieve smaller training error and better generalization. We refer the reader to Appendix C for the conclusions, limitations, and potential future directions of our work. ## 5 Reproducibility Statement **Theoretical Proofs.** The proofs for all theorems and lemmas in the main body of the paper can be found in Appendix D. **Experiments.** Additional experimental details and results have been provided in Appendix H. In addition, the code and data for training the relevant models and producing results similar to what is shown in Figures 1a and 1b for one individual from the HCP-YA dataset can be found in [https://github.com/shervinkh2000/Cross_Covariance_NTK](https://github.com/shervinkh2000/Cross_Covariance_NTK). The complete HCP-YA dataset is publicly available and accessible from [https://db.humanconnectome.org/](https://db.humanconnectome.org/) as per the data use terms. **Additional Details.** Comments regarding the relation between the NTK and generalization made in the main body of the paper have been thoroughly explained in Appendix E. Additional necessary theoretical considerations that did not directly contribute to the main message of the paper, namely, that the NTK of the GNN architecture that we analyzed is constant in the infinite-width limit, and that training the first layer of the GNN leads to similar results to those we saw for the second layer in Section 3.2, have been discussed further in Appendices F and G, respectively.
2307.05035
Number Systems for Deep Neural Network Architectures: A Survey
Deep neural networks (DNNs) have become an enabling component for a myriad of artificial intelligence applications. DNNs have shown sometimes superior performance, even compared to humans, in cases such as self-driving, health applications, etc. Because of their computational complexity, deploying DNNs in resource-constrained devices still faces many challenges related to computing complexity, energy efficiency, latency, and cost. To this end, several research directions are being pursued by both academia and industry to accelerate and efficiently implement DNNs. One important direction is determining the appropriate data representation for the massive amount of data involved in DNN processing. Using conventional number systems has been found to be sub-optimal for DNNs. Alternatively, a great body of research focuses on exploring suitable number systems. This article aims to provide a comprehensive survey and discussion about alternative number systems for more efficient representations of DNN data. Various number systems (conventional/unconventional) exploited for DNNs are discussed. The impact of these number systems on the performance and hardware design of DNNs is considered. In addition, this paper highlights the challenges associated with each number system and various solutions that are proposed for addressing them. The reader will be able to understand the importance of an efficient number system for DNN, learn about the widely used number systems for DNN, understand the trade-offs between various number systems, and consider various design aspects that affect the impact of number systems on DNN performance. In addition, the recent trends and related research opportunities will be highlighted
Ghada Alsuhli, Vasileios Sakellariou, Hani Saleh, Mahmoud Al-Qutayri, Baker Mohammad, Thanos Stouraitis
2023-07-11T06:19:25Z
http://arxiv.org/abs/2307.05035v1
# Number Systems for Deep Neural Network Architectures: A Survey ###### Abstract Deep neural networks (DNNs) have become an enabling component for a myriad of artificial intelligence applications. DNNs have sometimes shown superior performance, even compared to humans, in cases such as self-driving, health applications, etc. Because of their computational complexity, deploying DNNs in resource-constrained devices still faces many challenges related to computing complexity, energy efficiency, latency, and cost. To this end, several research directions are being pursued by both academia and industry to accelerate and efficiently implement DNNs. One important direction is determining the appropriate data representation for the massive amount of data involved in DNN processing. Using conventional number systems has been found to be sub-optimal for DNNs. Alternatively, a great body of research focuses on exploring suitable number systems. This article aims to provide a comprehensive survey and discussion about alternative number systems for more efficient representations of DNN data. Various number systems (conventional/unconventional) exploited for DNNs are discussed. The impact of these number systems on the performance and hardware design of DNNs is considered. In addition, this paper highlights the challenges associated with each number system and various solutions that are proposed for addressing them. The reader will be able to understand the importance of an efficient number system for DNN, learn about the widely used number systems for DNN, understand the trade-offs between various number systems, and consider various design aspects that affect the impact of number systems on DNN performance. In addition, the recent trends and related research opportunities will be highlighted. Number Systems, Artificial Intelligence Accelerators, Deep neural networks, floating point, fixed point, logarithmic number system, residue number system, block floating point number system, dynamic fixed point number system, Posit number system. ## I Introduction During the past decade, Deep Neural Networks (DNNs) have shown outstanding performance in a myriad of Artificial Intelligence (AI) applications. Since their success in speech recognition [1] and image recognition [2], great attention has been drawn to DNNs from academia and industry [3]. Although DNNs are inspired by the deep hierarchical structures of the human brain, they have exceeded human accuracy in a number of domains [4]. Nowadays, the contribution of DNNs is notable in many fields, including self-driving cars [5], speech recognition [6], computer vision [7], natural language processing [8], and medical applications [9]. This DNN revolution is helped by the massive accumulation of data and the rapid growth in computing power [10]. Because of their high computational complexity and memory space requirements, general-purpose compute engines (like powerful central processing units (CPUs) and Graphics Processing Units (GPUs)) or customized hardware (e.g., using FPGAs or ASICs) have been used to accelerate DNN processing [11]. While general-purpose compute engines remain dominant for processing DNNs within academia, the industrial applications of DNNs often require implementation on resource-constrained edge devices (e.g., smartphones or wearable devices) [3]. Whether DNNs are run on GPUs or dedicated accelerators, speeding up and/or increasing DNN hardware efficiency without sacrificing their accuracy continues to be a demanding task. 
The literature includes a large number of survey papers that have been dedicated to highlighting the directions that can be followed to reach these goals [3, 4, 12, 13, 14]. Some examples of these directions are DNN model compression [15], quantization [12], and DNN efficient processing [4]. One of the directions that has a great impact on the performance of DNNs, but has not yet been comprehensively surveyed, is the DNN number representation. As compute engines use a limited number of bits to represent values, real numbers cannot be represented with unlimited precision. The mapping between a real number and the bits that represent it is called number representation [16]. Generally speaking, number representation has a great impact on the performance of both general-purpose and customized compute engines. Recalling the huge amount of data that needs to be processed in the context of DNNs, the choice of the format used to represent these data is a key factor in determining the precision of DNN data, storage requirements, memory communication, and arithmetic hardware implementation [17]. This in turn shapes different metrics of the DNN architecture performance, mainly the accuracy, power consumption, throughput, latency, and cost [18]. To this end, there is a significant body of literature that has focused on assessing the suitability of specific number systems for DNNs, modifying conventional number systems to fit DNN workloads, or proposing new number systems tailored for DNNs. Some of the leading companies, such as Google [19], NVIDIA [20], Microsoft [17], IBM [21], and Intel [22, 23, 24], have contributed to advancing the research in this field. A comprehensive survey of these works will be helpful in furthering the research in this field. While conventional number systems like Floating Point (FLP) and Fixed Point (FXP) representations are frequently used for DNN engines, several unconventional number systems have been found to be more efficient for DNN implementation. Such alternative number systems are presented in this survey; they are the Logarithmic Number System (LNS), Residue Number System (RNS), Block Floating Point Number System (BFP), Dynamic Fixed Point Number System (DFXP), and Posit Number System (PNS). Figure 1 shows the bit visualization of conventional and unconventional number systems used in DNN implementation. The structure of the survey is summarized as follows. * Section II gives an overview of conventional number systems and their utilization for DNNs. * Section III classifies the DNNs that adopt the logarithmic number system. * Section IV describes the concepts behind the residue number system and its employment for DNNs. * Section V describes the block floating point representation and the efforts done to make it suitable for DNN implementation. * Section VI discusses the dynamic fixed point format and the work done to calibrate the parameters associated with this format. * Section VII explains various DNN architectures that utilize Posits and the advantages and disadvantages associated with these architectures. * Section VIII provides an insight into recent trends and research opportunities in the field of DNN number systems. ## II Conventional Number Systems for DNN Architectures The two conventional number systems, mainly the floating point and the fixed point, are the common choice for almost all general-purpose DNN engines. 
While the FLP representation is usually used for modern computation platforms (e.g., CPUs and GPUs), where high precision is a must, FXP is more common in low-cost computation platforms that are used in applications that demand high speed, low power consumption, and small chip area. In this section, these two representations are introduced and their utilization for implementing DNN hardware is briefly discussed, in order to facilitate the comparison between conventional and unconventional number systems. ### _FLP for DNN Architectures_ In the FLP number system, a number \(n\) is represented using a sign (1 bit), an exponent \(e\) (unsigned integer of length \(es\)) and a mantissa \(m\) (unsigned integer of length \(ms\)) (Figure 1a) and its value is given by \[n=(-1)^{s}\times 2^{e-e_{max}}\times(1+\frac{m}{2^{ms}}), \tag{1}\] where \(e_{max}=2^{es-1}-1\) is a bias used to ease the representation of both negative and positive exponents. Although there are several FLP formats [25], the IEEE 754 FLP format [26] is the most common representation used by modern computing platforms [18, 27]. According to IEEE 754, the FLP can be of single, double, or quad precision depending on the bit-widths used (e.g., for the single-precision FLP, the bit-width is 32 bits and \(es=8\)). The single-precision FLP, also called FLP32, is commonly used as a baseline to evaluate the efficiency of other number representations. Unless otherwise stated, the performance degradation or enhancement is presented in comparison to the FLP32 format in this survey as well. Multiplication of two FLP numbers is implemented in hardware by adding their exponents, multiplying the mantissas, normalizing the resultant mantissa, and adjusting the exponent of the product [28]. FLP addition involves comparing the operand exponents, shifting their mantissas (if the exponents are different), adding the mantissas, normalizing the sum mantissa, and adjusting the sum exponent [25]. Usually, the increased complexity of the FLP32 arithmetic requires using a separate unit, called a Floating Point Unit (FPU), to perform the FLP calculations [29]. The high power consumption and cost of this unit limit its usage within embedded processing units such as FPGAs [30]. Consequently, the standard FLP32 is rarely used for building efficient DNN architectures [28]. To increase the efficiency of the FLP in DNN architectures, several custom FLP formats [19, 20, 31, 32] have been proposed. New designs of the FLP arithmetic hardware (mainly the multiplier) have also been investigated [28, 33]. The 32-bit FLP representation has a wide dynamic range, beyond what is usually required for DNNs [28], resulting in a low information-per-bit metric, which means an unnecessary increase in power consumption, area, and delay. For this reason, the proposed custom FLP representations mainly have reduced bit-widths and a different allocation of bits to the mantissa and exponent than IEEE 754. The bit-width is reduced to 19 bits in Nvidia's TensorFloat32 [20] and 16 bits in Google's Brain FLP (bfloat16) [19] formats used in DNN training engines. 8-bit FLP has been proposed to target DNN inference in [31, 32]. These reduced FLP formats proved their efficiency in replacing FLP32 with comparable accuracy, higher throughput, and smaller hardware footprint. 
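For concreteness, a small sketch of the value mapping in equation (1); the helper below decodes already-split bit fields, handles normal numbers only (no subnormals, infinities, or NaNs), and the bfloat16-style field widths are just one example of a custom format.

```python
def flp_decode(s, e, m, es, ms):
    # Value of an FLP number per equation (1): sign bit s, exponent field e,
    # mantissa field m, with es exponent bits and ms mantissa bits.
    e_max = 2 ** (es - 1) - 1                     # exponent bias
    return (-1) ** s * 2.0 ** (e - e_max) * (1 + m / 2 ** ms)

# bfloat16-style layout: 1 sign bit, es = 8 exponent bits, ms = 7 mantissa bits.
print(flp_decode(s=0, e=127, m=64, es=8, ms=7))   # 1.5
print(flp_decode(s=1, e=130, m=0,  es=8, ms=7))   # -8.0
```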
It is worth noting that most of these custom FLP formats are used to represent data stored in memory (i.e., weights and activations), whereas, for internal calculations (e.g., accumulation and weight updates), FLP32 is used instead to avoid accuracy degradation [19, 32, 34]. In summary, the standard FLP representation has a massive dynamic range, which makes it a good choice for computationally intensive algorithms that include a wide range of values and require high precision. At the same time, the complex and power-hungry FLP calculations make FLP less attractive for DNN accelerators. This leads to using narrower custom FLP formats which require less hardware and memory footprint while preserving the performance of the standard FLP32. However, the utilization of the FLP format for DNN accelerators is relatively limited and it loses ground to fixed point and other alternative representations. Fig. 1: Conventional [(a) FLP, (b) FXP] and unconventional [(c) BFP, (d) DFXP, (e) Posit, (f) LNS, (g) RNS] number systems for DNNs ### _FXP for DNN Architectures_ The power inefficiency of the FLP arithmetic is the main motivation to replace it with the FXP format for designing energy-constrained DNN accelerators. A real number \(n\) is represented in FXP with the sign, the integer, and the fraction parts. The fixed point format is usually indicated by \(<I,F>\), where \(I\) and \(F\) correspond to the number of bits allocated to the integer and the fractional parts, respectively. In this paper, we use notations such as FXP8 to denote the FXP representation with bit-width equal to 8, i.e., \(I+F+1=8\). In FXP format, the separation between the integer and the fractional parts is implicit and usually done by specifying a scaling factor that is common for all data. Thus, the FXP number can be treated as an integer and, hence, integer arithmetic is used. Integer arithmetic requires substantially fewer logic gates to be implemented and consumes much less chip area and power, compared to FLP arithmetic. This makes FXP attractive for DNN accelerators on the edge. Moreover, the FXP allows for further reduction in the number of bits, resulting in a significant reduction in power consumption, storage requirements, and memory bandwidth [4]. On the other hand, the dynamic range\({}^{1}\) of data represented by low-precision FXP is limited. This makes FXP suitable to represent data with only a narrow range of values. Since this is not the case for most DNNs, using low-precision FXP for DNNs is challenging. To enable this, various approaches have been adopted, such as quantization [12]. For instance, uniform quantization includes scaling the weights and activations of a DNN and mapping them to a restricted range of values. These values can be represented by low-bit-width FXP. This allows lowering the number of bits to be less than 8 bits [35, 36, 37], and even as low as 2 bits (i.e., ternary DNNs [38, 39, 40]) or 1 bit (i.e., binary DNNs [41, 42, 43, 44, 45]). For more information about FXP quantization, precision reduction, and binary DNNs, the interested reader is referred to [4, 12, 46]. Footnote 1: The dynamic range of a number system is the ratio of the largest value that can be represented with this system to the smallest one. In short, the FXP for DNN implementation offers great hardware efficiency at the expense of some accuracy degradation. 
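A minimal sketch of uniform quantization to the \(<I,F>\) fixed point format described above; the round-to-nearest and saturate-on-overflow policy shown is one common choice, not a statement about any particular accelerator.

```python
def fxp_quantize(x, I, F):
    # Quantize a real x to signed fixed point <I, F> (1 sign, I integer,
    # F fraction bits). Returns the stored integer and the value it encodes.
    scale = 2.0 ** -F                              # implicit scaling factor
    q = round(x / scale)
    lo, hi = -(2 ** (I + F)), 2 ** (I + F) - 1     # two's-complement range
    q = max(lo, min(hi, q))                        # saturate on overflow
    return q, q * scale

print(fxp_quantize(0.7431, I=2, F=5))              # FXP8 <2,5>: (24, 0.75)
print(fxp_quantize(9.99,   I=2, F=5))              # saturates: (127, 3.96875)
```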
Between the two extreme representations (FLP and FXP), there are several number systems that offer different trade-offs (Pareto optimal points) between the hardware efficiency and the acquired accuracy. These number systems and their usage for DNN implementation are presented in subsequent sections of this paper. ## III LNS for DNN Architectures Proposals for LNS first emerged in the 1970s to implement the arithmetic operations of digital signal processing. The utilization of LNS for neural computing was first proposed in the late 1990s [47]. Since then, using LNS to implement efficient hardware for DNN has become more popular. The main benefit of using LNS is in simplifying the implementation of the costly arithmetic operations required for DNN inference and/or training [48]. In addition, representing the data in LNS enables a reduction of the number of bits required to obtain the same DNN accuracy as with conventional number systems [49, 50]. In LNS, a real number \(n\) is represented with a logarithm of radix \(a\) of its absolute value (\(\tilde{n}=\log_{a}(|n|)\)) and a sign bit \({s_{n}}^{2}\) [48]. The number \(\tilde{n}\) is represented using two's complement fixed point format [51], as shown in Figure 1f. The radix \(a\) of the logarithm is usually selected to be 2 for simpler hardware implementation. Throughout this survey, we will use \(a=2\) as well. The main DNN operation that can be dramatically simplified using LNS is multiplication, by transforming it into a linear (i.e., fixed point) addition. The LNS product \(\tilde{p}\) of two real numbers \(n_{1}\) and \(n_{2}\) is calculated as follows \[\begin{split}\tilde{p}&=\tilde{n_{1}}\odot\tilde{n_{2}},\\ &=\log_{2}(|n_{1}|\times|n_{2}|),\\ &=\tilde{n_{1}}+\tilde{n_{2}},\end{split} \tag{2}\] \[s_{\tilde{p}}=s_{n_{1}}\;XOR\;s_{n_{2}}, \tag{3}\] where \(\odot\) is the multiplication operation in the LNS domain, which can be implemented with a simple integer adder, and \(s_{\tilde{p}}\) is the product sign, which is calculated by XORing the signs (\(s_{n_{1}}\) and \(s_{n_{2}}\)) of the two operands. Existing proposals for LNS-based DNNs either use LNS for the whole DNN architecture from end to end, use only LNS-based multipliers, or apply logarithmic quantization to DNN weights and/or layer inputs. Based on this classification, LNS-based DNN architectures are discussed next by highlighting the challenges associated with each architecture and the solutions presented in the related work. ### _End-to-end LNS-based DNN Architectures_ An end-to-end LNS implementation utilizes the LNS for all blocks of the architecture, and thus, no conversion from or to conventional systems takes place. For this, the inputs (i.e., the dataset) and the weights3 are assumed to be fed to the DNN in LNS format. This task is usually performed offline and has no overhead on the implemented architecture. In this section, we review the LNS-domain implementations of the main operations that are needed for DNN training and inference. The two types of DNNs that were implemented using LNS from end to end are convolutional neural networks (CNNs) [49, 50] and recurrent neural networks (RNNs) [52]. These two types of DNN have different architectures, but they share the same basic operations, which are multiplication, addition, and activation functions. Since the multiplication operation becomes a linear addition in the LNS domain, the challenging part of this architecture is implementing the LNS addition and LNS activation functions, which are discussed next. 
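Before turning to those, a minimal sketch of the LNS encoding and the product of equations 2 and 3; zero handling and the two's complement fixed point quantization of the logarithms are deliberately omitted.

```python
import math

def to_lns(n):
    # Encode a nonzero real n as (sign bit, log2|n|).
    return (0 if n >= 0 else 1, math.log2(abs(n)))

def lns_mul(a, b):
    # LNS product: add the logs (equation 2) and XOR the signs (equation 3).
    (sa, la), (sb, lb) = a, b
    return (sa ^ sb, la + lb)

def from_lns(x):
    s, l = x
    return (-1) ** s * 2.0 ** l

print(from_lns(lns_mul(to_lns(-3.0), to_lns(2.5))))   # -7.5
```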
Footnote 3: When the architecture targets DNN inference. #### III-A1 Addition in LNS As opposed to multiplication, performing addition in LNS is not straightforward. Let \(\tilde{n_{1}}\) and \(\tilde{n_{2}}\) be the two operands to be added in LNS. This LNS addition \(\oplus\) is usually defined as follows \[\begin{split} s\tilde{u}m&=\tilde{n_{1}}\oplus\tilde{n_{2 }},\\ &=\log_{2}|(-1)^{s_{n_{1}}}\times 2^{\tilde{n_{1}}}+(-1)^{s_{n_{2}}} \times 2^{\tilde{n_{2}}}|,\end{split} \tag{4}\] where \(s\tilde{u}m\) is the LNS-domain summation of the two operands, and \(s_{n_{1}}\) and \(s_{n_{2}}\) are their signs. As these operands can be negative or positive, \(s\tilde{u}m\) is derived from equation 4 [50] such that \[s\tilde{u}m=\left\{\begin{array}{ll}\max(\tilde{n_{1}},\tilde{n_{2}})+\log_{2}(1+2^{-|\tilde{n_{1}}-\tilde{n_{2}}|}),&s_{n_{1}}=s_{n_{2}}\\ \max(\tilde{n_{1}},\tilde{n_{2}})+\log_{2}(1-2^{-|\tilde{n_{1}}-\tilde{n_{2}}|}),&s_{n_{1}}\neq s_{n_{2}}\end{array}\right. \tag{5}\] 
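A sketch of the LNS addition in equation 5, computing the correction term \(\log_{2}(1\pm 2^{-|\tilde{n_{1}}-\tilde{n_{2}}|})\) exactly in floating point; hardware implementations typically approximate this term with lookup tables or piecewise fits, and exact cancellation of equal-magnitude operands is not handled here.

```python
import math

def lns_add(a, b):
    # LNS sum per equation 5: operands are (sign, log) pairs; the result sign
    # follows the larger-magnitude operand. Cancellation (equal magnitudes,
    # opposite signs) would need a special case and is not handled.
    (sa, la), (sb, lb) = a, b
    big, small = max(la, lb), min(la, lb)
    s_out = sa if la >= lb else sb
    if sa == sb:
        return (s_out, big + math.log2(1 + 2 ** (-(big - small))))
    return (s_out, big + math.log2(1 - 2 ** (-(big - small))))

r = lns_add((0, math.log2(6)), (1, math.log2(2)))     # 6.0 + (-2.0)
print((-1) ** r[0] * 2.0 ** r[1])                     # 4.0
```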
The logarithmic multipliers used for DNNs are classified in this survey into multipliers that use Mitchell's approximation, iterative logarithmic multipliers, double-sided error multipliers, and multipliers with explicit logarithm and antilogarithm modules. _Mitchell's Multiplier:_ According to Mitchell's algorithm [54], the logarithm of a number \(n\) is approximated with piece-wise straight lines as in (7). Thus, the logarithm of the product in (16) is approximated by the sum of the characteristic numbers and the mantissas of the operands as follows \[\log_{2}(n_{1}\times n_{2})\approx k_{1}+k_{2}+x_{1}+x_{2}. \tag{17}\] The final product is obtained in (18) by applying the antilogarithm on (17) using the approximation in (7). Then, the product of two integers is calculated using add and shift operations, as \[n_{1}\times n_{2}\approx\left\{\begin{array}{ll}2^{k_{1}+k_{2}}(1+x_{1}+x_{2}),&x_{1}+x_{2}<1\\ 2^{k_{1}+k_{2}+1}(x_{1}+x_{2}),&x_{1}+x_{2}\geq 1\end{array}\right. \tag{18}\] Even though the error introduced by Mitchell's approximation is relatively high (up to 11% [55]), this multiplier showed no accuracy degradation for a CNN architecture with 32-bit precision [59], while being 26.8% more power-efficient compared to conventional multipliers of the same number of bits. To gain additional power efficiency over that achieved by Mitchell's multiplier, a truncated-operand approach has been proposed [60]. Instead of using the whole operands, these operands are truncated and only their \(\omega\) most significant bits are used to calculate the approximated product. For instance, selecting \(\omega=8\) allows for a more efficient multiplier that saves up to 88% and 56% of power when compared to an exact 32-bit FXP multiplier and a Mitchell's multiplier, respectively. The additional error introduced by this truncation caused an accuracy degradation of 0.2% for the ImageNet dataset. The significant power saving associated with the negligible performance degradation of this approach comes from the fact that the most significant part of the operand can be sufficient to provide an acceptable approximation [61, 62]. _Iterative Logarithmic Multiplier:_ The iterative logarithmic multiplier aims to reduce the error introduced by the approximation in (18) by adding correction terms. The calculation of these terms usually requires iterative multiplications that can be calculated in the same way as the approximate product (see Figure 3). These correction terms can be biased (always positive) or unbiased (negative/positive). The product of two numbers in (15) can be written using biased correction terms [55] as \[n_{1}\times n_{2}=P_{approx}+E, \tag{19}\] where \(P_{approx}=2^{k_{1}+k_{2}}(1+x_{1}+x_{2})\) is an approximate product that can be calculated using shift and add operations, and \(E=2^{k_{1}+k_{2}}x_{1}x_{2}\) is the correction term that is ignored in (18). Estimating the term \(E\) requires calculating the product \((2^{k_{1}}x_{1})(2^{k_{2}}x_{2})=(n_{1}-2^{k_{1}})(n_{2}-2^{k_{2}})\) iteratively, in the same way as calculating \(P_{approx}\). Then, \[\begin{split}n_{1}\times n_{2}&=P_{approx}^{(0)}+E^{(0)},\\ &=P_{approx}^{(0)}+P_{approx}^{(1)}+E^{(1)},\\ &=P_{approx}^{(0)}+P_{approx}^{(1)}+\cdots+P_{approx}^{(i-1)}+E^{(i-1)},\end{split} \tag{20}\] where \(i\) is the number of iterations and \(E^{(i-1)}\) is the error to be ignored after the \(i^{th}\) iteration. Notice that when \(i\) equals the number of bits that have the value '1' in the operands, then \(E^{(i-1)}=0\), and the exact product is produced.
For each iteration, the new operands to be multiplied are obtained by removing the leading 'ones' from the original operands. For this reason, the correction terms can be calculated in parallel using one additional circuit for each iteration. Hence, there is a trade-off between the accuracy of the multiplication and the area and power overhead due to adding these correction circuits. For example, this iterative logarithmic multiplier with one iteration (i.e., one correction circuit) was able to save 10% on area and 20% on power consumption without any notable impact on the learning accuracy when it was used to implement the hardware of a relatively simple neural network and compared with the case of using a floating point multiplier [63]. On the other hand, using the unbiased iterative correction terms of (21) shows a better area and power reduction by up to 44.6% and 48.1%, respectively, compared to the multiplier designed with the error terms of (19) [64]. \[E=\left\{\begin{array}{ll}((1-x_{1})2^{k_{1}}-1)((1-x_{2})2^{k_{2}}-1),&x_{1}+x_{2}\geq 1\\ (2^{k_{1}}x_{1})(2^{k_{2}}x_{2}),&x_{1}+x_{2}<1\end{array}\right. \tag{21}\]
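To make the interplay of (19)-(20) concrete, the sketch below (ours; shift-and-add only, with the recursion standing in for the parallel correction circuits) computes \(P_{approx}\) and adds iterative biased correction terms:

```python
def p_approx(n1, n2):
    """Approximate product P_approx = 2**(k1+k2) * (1 + x1 + x2), Eq. (19)."""
    if n1 == 0 or n2 == 0:
        return 0
    k1, k2 = n1.bit_length() - 1, n2.bit_length() - 1
    # Integer residues m = 2**k * x (the bits below the leading one).
    m1, m2 = n1 - (1 << k1), n2 - (1 << k2)
    # 2**(k1+k2) * (1 + x1 + x2) == 2**(k1+k2) + m1*2**k2 + m2*2**k1,
    # computed with shifts and adds only.
    return (1 << (k1 + k2)) + (m1 << k2) + (m2 << k1)

def iterative_log_mul(n1, n2, iters=1):
    """P_approx plus `iters` biased correction terms, Eq. (20)."""
    if n1 == 0 or n2 == 0:
        return 0
    k1, k2 = n1.bit_length() - 1, n2.bit_length() - 1
    approx = p_approx(n1, n2)
    if iters > 0:
        # Ignored error E = (2**k1 x1)(2**k2 x2) = (n1 - 2**k1)(n2 - 2**k2),
        # estimated recursively with the same approximate multiplier.
        approx += iterative_log_mul(n1 - (1 << k1), n2 - (1 << k2), iters - 1)
    return approx

print(p_approx(200, 75), iterative_log_mul(200, 75), 200 * 75)  # 14208 14976 15000
```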
_Multipliers with Explicit Logarithm and Antilogarithm Modules:_ In these multipliers, the logarithm and antilogarithm functions are themselves approximated, typically using a LUT. For the antilogarithm of a fractional value \(f\), the LUT maps \(f\) to \(Q_{\alpha}(2^{f}-1)\), where \(Q_{\alpha}\) is the applied quantization to limit the number of bits to \(\alpha\). LUT-approximation is usually used to escape from the errors introduced by the approximation in (7). However, to keep the size of the required LUTs reasonable, the values of \(\alpha\), \(\gamma\), and \(\beta\) should be kept as small as possible. This introduces a loss in accuracy. In addition, this approach is expected to be less hardware-friendly because of the overhead needed to implement these LUTs. Nevertheless, experimental results showed that integrating a 16-bit LUT-based multiplier with a wide FXP accumulator results in a reduction in power consumption and area by up to 59% and 68%, respectively, in comparison to a 16-bit FLP multiplier [67]. This comes in addition to achieving a negligible accuracy degradation (\(<1\%\)) for the CNN ResNet50 network trained on the ImageNet dataset. Another approach for approximating log/antilog modules is using bit-level manipulation to devise area- and speed-efficient logarithm or antilogarithm operations [68, 69]. Among these works, the two-region manipulation-based logarithm converter and the bit correction-based antilogarithm converter [70] are used to implement an LNS multiplier that is exploited to build a CNN accelerator design that is efficient from an area and delay point of view [68]. When this design is compared to a conventional multiplier implementation, it saves up to 60% of the area-delay product. However, neither the accuracy of the CNN nor a comparison with other logarithmic multipliers has been reported for this design.
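A rough software model of the LUT-based antilogarithm (ours; the bit-widths \(\gamma=6\) and \(\alpha=8\) are arbitrary assumptions, and the \(Q_{\alpha}\) rounding is simplified) may clarify the scheme:

```python
# Illustrative LUT-based antilogarithm with a GAMMA-bit fractional input
# f in [0, 1) and ALPHA-bit quantized outputs (both widths assumed here).
GAMMA, ALPHA = 6, 8

# antilog_lut[i] ~ Q_alpha(2**f - 1) for f = i / 2**GAMMA.
ANTILOG_LUT = [round((2 ** (i / 2 ** GAMMA) - 1) * 2 ** ALPHA)
               for i in range(2 ** GAMMA)]

def antilog(log_val_fx, frac_bits=GAMMA):
    """Approximate 2**v for fixed-point v = k + f via one LUT lookup."""
    k = log_val_fx >> frac_bits                 # integer part of the log
    f = log_val_fx & ((1 << frac_bits) - 1)     # fractional part
    mantissa = (1 << ALPHA) + ANTILOG_LUT[f]    # (1 + (2**f - 1)) * 2**ALPHA
    return mantissa * 2.0 ** (k - ALPHA)

print(antilog((3 << GAMMA) + 32))  # ~ 2**3.5 = 11.31
```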
_Summary and Discussion of LNS-based Multipliers:_ LNS-based multipliers use the characteristics of the logarithm to transform the multiplication into simpler operations. Most of the logarithmic multipliers proposed for DNN architectures start from Mitchell's approximation to derive their designs. Table I compares various architectures that use LNS multipliers. We notice that these multipliers are used to implement efficient CNN architectures suitable for DNN inference rather than training. In addition, the table shows that using the vanilla Mitchell's multiplier offers power-efficient hardware with accuracy comparable to an FLP32 multiplier of the same number of bits. However, reducing the number of bits requires a more accurate approximation with a smaller average error than Mitchell's. When the LNS multiplier is designed with the characteristics of DNNs in mind, such as the double-sided-error multiplier, the outcome is a further reduction in the number of bits with significant power savings, while preserving and even enhancing the classification accuracy of the reported CNNs. ### _Logarithmic Quantization for DNN Architectures_ Logarithmic quantization involves representing a real number \(n\) with a sign and an integer exponent (an integer power of two). The integer is usually an approximation of the logarithm \(\log_{2}|n|\) of the real number after applying clipping and rounding [71, 72]. Logarithmic quantization has been employed in order to achieve efficient hardware implementations of CNNs [71, 72]. The main idea behind this is that the multiplication by this integer exponent can be easily implemented in hardware by bit shifting. In CNNs, both the convolutional and fully-connected layers include matrix multiplication, i.e., a dot product between the weights \(w\) of each layer and the input activation \(x\), which is the output of the previous layer after applying the non-linearity (e.g., ReLU). This matrix multiplication is usually performed using a number of multiply-and-accumulate operations when a conventional data representation is used to implement digital hardware, as shown in Figure 5(a). However, this dot product can be implemented more efficiently when logarithmic quantization is utilized. Due to the non-uniform distribution of the weights and inputs, non-uniform quantization, such as logarithmic quantization, is preferred over uniform quantization, such as when FXP is used [72]. Existing CNN architectures that use logarithmic quantization assume that the weights and/or the inputs of the layer are quantized. When logarithmic quantization is applied to inputs only (i.e., activations), Figure 5(b), or to weights only, Figure 5(c), the dot product becomes a simple bit shift operation followed by an accumulation. Applying logarithmic quantization to the weights only shows insignificant accuracy degradation [73, 74] and significant power and area savings [75, 76], see Table II. Quantizing the inputs only in LNS results in the same performance from an accuracy point of view [49, 77], however, with an additional linear-to-LNS module to be added. This module is responsible for transforming output activations to LNS before storing them in memory. This scheme has the advantage of requiring a smaller memory bandwidth as the stored activations are represented in LNS [49]. The works that apply logarithmic quantization to weights as well as to activations usually use a logarithm radix different from 2 [71, 72]. Then, the multiplication becomes an addition of the LNS-quantized weights and activations followed by an approximation to decode this sum into the linear domain before implementing the accumulation. This add-decode-accumulate scheme complicates the hardware implementation, however, with comparable accuracy to the aforementioned logarithmic quantization schemes, as illustrated in Table II.
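A small sketch (ours; radix 2, round-to-nearest in the log domain, integer activations, and no particular paper's clipping policy) of the weight-only scheme of Figure 5(c), where every multiplication becomes a bit shift:

```python
import math

def log_quantize_weight(w):
    """Quantize a weight to (sign, e) with |w| ~ 2**e (round in log domain)."""
    if w == 0:
        return 0, None                       # zeros stay zero
    return (1 if w < 0 else 0), round(math.log2(abs(w)))

def shift_dot(int_activations, quant_weights):
    """Dot product in which every multiply is a shift: x * 2**e == x << e."""
    acc = 0
    for x, (s, e) in zip(int_activations, quant_weights):
        if e is None:
            continue
        term = x << e if e >= 0 else x >> -e   # shift replaces the multiplier
        acc += -term if s else term
    return acc

ws = [log_quantize_weight(w) for w in [0.26, -0.5, 1.9]]  # -> 2**-2, -2**-1, 2**1
print(shift_dot([40, 12, 7], ws))  # 40>>2 - 12>>1 + 7<<1 = 10 - 6 + 14 = 18
```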
## IV RNS for DNN Architectures The Residue Number System (RNS) can be an attractive choice for DNN accelerators due to its arithmetic properties. In this section, a brief overview of the RNS is given, and several RNS-based architectures for AI applications reported in the literature are presented. Architectures are classified into _partially RNS-based_ architectures, where intermediate conversions to conventional representations between successive layers are used, and _end-to-end_ RNS-based architectures, where the entire processing takes place in the RNS domain. The typical computation flow of these two types of systems is shown in Fig. 6. The number representation scheme utilized in realizing DNN architectures directly impacts the accuracy, speed, area, and energy dissipation. Modern Deep Learning models keep growing in depth and number of parameters and require a huge amount of elementary arithmetic operations, the majority of which are multiply-add (MAC) operations. In the Residue Number System, each number is represented as a tuple of residues with respect to a modulus set \(\{m_{1},m_{2},\ldots,m_{n}\}\), which is called the _base_ of the representation. The dynamic range of the representation is given by \[R=\prod_{i=1}^{n}m_{i}. \tag{27}\] If the moduli are _co-prime_, i.e., \[\gcd_{\begin{subarray}{c}1\leq i,j\leq n\\ i\neq j\end{subarray}}(m_{i},m_{j})=1, \tag{28}\] where \(\gcd(\cdot)\) denotes the greatest common divisor operation, each integer inside the range \([0,R)\) has a unique RNS representation \[X\mapsto(x_{1},x_{2},\ldots,x_{n}),\ x_{i}=\langle X\rangle_{m_{i}}, \tag{29}\] where \(\langle\cdot\rangle_{m}\) is the modulo-\(m\) operator. The inverse transformation is generally harder and can be realized by means of the _Mixed Radix Conversion_ or the _Chinese Remainder Theorem_ [78]. ### _RNS Addition and Multiplication_ Due to the properties of the modulo operation, addition and multiplication can be done independently and in parallel for each residue channel, i.e., without inter-channel propagation of information. Suppose \(A=(a_{1},a_{2},\ldots,a_{n})\) and \(B=(b_{1},b_{2},\ldots,b_{n})\); then \[A\oplus B=(\langle a_{1}\oplus b_{1}\rangle_{m_{1}},\langle a_{2}\oplus b_{2}\rangle_{m_{2}},\ldots,\langle a_{n}\oplus b_{n}\rangle_{m_{n}}), \tag{30}\] where \(\oplus\) can be either the addition or the multiplication operator. This property is what makes RNS very efficient in applications that require a large number of these operations, such as DSP applications and, more recently, neural network inference. By decomposing the computations into independent channels, long carry propagation chains are eliminated; thus, arithmetic circuits can operate at higher frequencies or with reduced power dissipation. The general architecture of a modulo adder is shown in Fig. 7 [79]. The design consists of an \(n\)-bit adder, where \(n\) is the size of the channel, that performs the addition \(a+b\) of the two numbers, and a CSA adder which performs the computation of \(a+b-m_{i}\) (the modulo operation). The sign of the CSA result is used to select the correct result of the two adders. The selection of moduli can significantly simplify the design of modulo arithmetic circuits. In the case of moduli of the form \(2^{k}\), the modulo operation translates into just keeping the \(k\) least significant bits, whereas in the case of \(2^{k}-1\), the output carry of the addition simply needs to be added back to the result; in this case, end-around-carry adders can be used. For channels of the form \(2^{k}+1\), diminished-1 arithmetic can be used [80], which basically involves inverted end-around logic. If the size of the channel is large, then fast adder designs such as prefix adders must be utilized within each channel.
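The channel-wise arithmetic of (29)-(30) is easy to model in software. A minimal sketch (ours; the base (31, 32, 33) is an arbitrary instance of the low-cost forms \(2^{k}-1\), \(2^{k}\), \(2^{k}+1\)):

```python
from math import prod, gcd
from itertools import combinations

MODULI = (31, 32, 33)   # co-prime, of the low-cost form 2**k - 1, 2**k, 2**k + 1
assert all(gcd(a, b) == 1 for a, b in combinations(MODULI, 2))  # Eq. (28)
R = prod(MODULI)        # dynamic range, Eq. (27): 32736

def to_rns(x):
    """Forward conversion, Eq. (29): one residue per channel."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    """Channel-wise addition, Eq. (30): no carries cross channels."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Channel-wise multiplication, Eq. (30)."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

a, b = to_rns(123), to_rns(45)
print(rns_mul(a, b) == to_rns(123 * 45))   # True, since 5535 < R
```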
Modulo multiplication is a trickier operation; however, the benefits of RNS can be greater. This is because of the (approximately) quadratic scaling of a multiplier with the input size. This means that, by decomposing a large multiplication into smaller ones, the energy and delay savings can be significant, provided that the overhead of the modulo operation is diminished. One approach for RNS multiplication is to perform regular multiplication of the two \(n\)-bit numbers and then use a reduction circuit to obtain the final result modulo \(m_{i}\). This approach introduces, however, considerable overhead to the design, as the reduction of a \(2n\)-bit number to an \(n\)-bit number modulo \(m_{i}\) is not as straightforward as in the case of addition. A low-complexity adder-based combinatorial multiplier has been proposed in [81], where the number of FAs required is minimized. Other multiplication techniques are based on intermediate RNS transformations, such as _core functions_ [82] and _isomorphisms_ [79], which are transformations that convert multiplication into addition. These transformations utilize look-up tables to convert RNS to an intermediate representation where multiplication is translated into addition. In the case of modulo-\(2^{k}\) multiplication, regular multipliers operating only on the \(k\) LSBs can be used, whereas in the case of modulo-\((2^{k}+1)\), diminished-1 arithmetic can be applied [83]. An end-around-carry multiplier which can be used for \(2^{k}-1\) channels is shown in Fig. 8. Due to the properties of the particular channel, the modulo operation is translated into simple bit re-ordering; thus, no overhead is introduced. Based on the above, most of the RNS designs reported in the literature utilize the low-cost forms of moduli, which allow fully exploiting the RNS benefits (elimination of long carry chains) with minimal hardware overhead. 

Fig. 5: Arithmetic processing elements of CNNs utilizing linear and various logarithmic quantization schemes: (a) linear quantization, (b) input logarithmic quantization, (c) weight logarithmic quantization, (d) input and weight logarithmic quantization, adapted from [76]. 

Fig. 6: Computation flow of RNS accelerators. Partially RNS-based accelerators utilize binary converters between successive layers for non-trivial RNS operations, while end-to-end systems perform all NN operations, including activation functions, in the RNS domain. Base extension is usually required to increase the dynamic range before the accumulation of the partial products. 

Fig. 7: Modulo adder [79]. 

#### IV-B1 Conversions and Non-trivial Operations While addition and multiplication are very efficiently implemented in RNS, other operations such as sign detection, comparison, and division, or the realization of non-linear activation functions, are not straightforward to implement, as they require the combination of the RNS channels. A common approach is to use RNS-to-binary converters and then perform the operation in the binary domain. Conversion to and from an RNS representation is crucial for the performance of any RNS-based processing system. Especially for the architectures that perform frequent intermediate conversions (partially RNS-based), the overhead can be significant. The complexity of these converters largely depends on the particular base selection, namely the size, number, and format of the moduli. While _binary-to-RNS_ or _forward_ converters can have a relatively simple hardware realization following Eq. (29), especially if particular forms of moduli are used, _RNS-to-binary_ or _inverse_ converters are generally harder to implement.
Extensive bibliography exists for this topic. The most commonly used approaches are the Chinese Remainder Theorem (CRT) and the Mixed Radix Conversion (MRC) [84]. The CRT is expressed as \[X=\left\langle\sum_{i=1}^{n}\overline{m}_{i}\langle x_{i}\overline{m}_{i}^{-1}\rangle_{m_{i}}\right\rangle_{R}, \tag{31}\] where \(\langle\cdot\rangle\) denotes the modulo operation, \(X\) is the binary representation of the number, \(x_{i}\) are its residues, \(m_{i}\) are the moduli, \(R\) is the dynamic range, \(\overline{m}_{i}=R/m_{i}\), and \(\overline{m}_{i}^{-1}\) is the modulo inverse of \(\overline{m}_{i}\). The CRT requires the pre-computation of \(\overline{m}_{i}\) and \(\overline{m}_{i}^{-1}\), additions of potentially large products, as well as the final modulo operation with \(R\), which can be very large. It can be computed, however, in a single cycle. On the other hand, the MRC requires the computation of some intermediate coefficients and is a sequential process which requires several steps, but these steps only include small bit-width operations. The Mixed Radix Conversion finds the coefficients \(k_{1},k_{2},\ldots,k_{n}\) such that \[X=k_{1}+k_{2}m_{1}+k_{3}m_{1}m_{2}+\ldots+k_{n}m_{1}m_{2}\ldots m_{n-1}. \tag{32}\] The coefficients are calculated one by one in a number of steps [84], each of which requires the previously calculated coefficients. The modulo inverses can be pre-calculated, and pipelining stages can be introduced to make this computation efficient. 

Fig. 8: Array multiplier for calculating \(x\cdot y\) mod \(15\). At each level \(i\), the full adders corresponding to the \(i\) most significant positions (dashed squares) are moved to the \(i\) least significant positions. This is possible because \(2^{n+k}\) mod \((2^{n}-1)=2^{k}\). A modulo-\(15\) carry-propagation adder is used to obtain the final result. 

Sign detection is one of the most critical and frequent operations required by NNs, as the Rectified Linear Unit (ReLU), which maps negative values to zero, is the most common activation function. In an RNS representation, numbers in the range \(0\leq X<R/2\) are positive, whereas numbers in the range \(R/2\leq X<R\) are negative. Magnitude comparison of two RNS numbers, which is required for the MaxPooling layers, is also difficult to implement directly in the RNS domain. Comparison algorithms that can eliminate the overhead of the conversion have been proposed, both for particular moduli sets (\(2^{k}-1,2^{k},2^{k}+1\)) [85] and for more complex general ones [86]. If the choice of moduli is restricted to some specific bases, simple and efficient algorithms have been reported for sign detection [87] and comparison. Finally, division, which is necessary, for example, after the multiplication and accumulation operations of a convolutional layer in order to bring the result back into the original dynamic range, also requires special handling. Methods that use special forms of moduli, such as powers of two [88] or a product of the moduli [89] as divisors, can simplify the hardware implementation. Some methods rely on using small (only one-channel wide) lookup tables and typically rely on base extension methods, during which an RNS base with \(k\) channels is extended to \(k+r\) channels.
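For reference, a direct software rendering of the CRT conversion (31) (ours; it reuses the illustrative base (31, 32, 33) from the earlier sketch):

```python
from math import prod

MODULI = (31, 32, 33)
R = prod(MODULI)   # dynamic range

def crt_to_binary(residues):
    """RNS-to-binary conversion via the CRT, Eq. (31)."""
    x = 0
    for r_i, m_i in zip(residues, MODULI):
        m_bar = R // m_i                      # \bar{m}_i
        m_bar_inv = pow(m_bar, -1, m_i)       # modular inverse of \bar{m}_i mod m_i
        x += m_bar * ((r_i * m_bar_inv) % m_i)
    return x % R                              # final modulo with the full range

print(crt_to_binary((123 % 31, 123 % 32, 123 % 33)))   # 123
```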
### _Partially RNS-based Architectures_ A common approach in RNS-based DNN implementations is to perform all multiply-add operations of a single convolutional or dense layer in the RNS representation and then use a converter to obtain a partial result in the normal positional binary representation [90, 91, 92, 93]. With this intermediate result, the non-linear activation functions (_ReLU_, \(\tanh\), _softmax_) can be computed, and the results can be converted again to RNS format to be fed to the next layer. Many application-specific AI accelerator designs, as well as more general-purpose architectures, such as TPUs or GPUs, perform DNN computations by decomposing them into matrix or vector multiplication primitives. Thus, by utilizing efficient hardware matrix multipliers, performance can be orders of magnitude better than CPUs. An RNS TPU (Tensor Processing Unit) is proposed in [90]. At the core of this architecture there is an RNS matrix multiplier implemented as a two-dimensional systolic array. Each processing element performs one operation (MAC) at each cycle and passes the result to neighboring processing elements. Systolic arrays are an efficient way of increasing throughput and dealing with the limited memory bandwidth problem. In this particular RNS systolic array, each processing element decomposes the larger MAC operation (typically 8 or 16 bits) into smaller ones, each within the range of the respective channel, that can be performed in parallel. Using an FPGA implementation, the RNS matrix multiplier is reported to perform a \(32\times 32\) fixed point matrix multiplication up to \(9\times\) more efficiently than a binary matrix multiplier for large matrices. In [92], the authors extend the RNS usage to the implementation of the convolution operation. Individual layers are executed on an RNS-based FPGA accelerator. However, results are sent to a CPU, which performs the non-trivial RNS operations, such as applying the activation functions, before being sent back to the FPGA for the execution of the next layer. RNS results in a \(7.86\%-37.78\%\) reduction of the hardware costs of a single convolutional layer compared to the two's-complement implementation, depending on the RNS base selection. A variant of the Residue Number System, called the _Nested RNS_ (NRNS), is proposed in [91]. NRNS applies a recursive decomposition of the residue channels into smaller ones. Adders and multipliers can thus be implemented using smaller and faster circuits. Assuming that a number \(X\) has an RNS representation of \((x_{1},x_{2},\ldots,x_{n})\), the nested RNS representation will be of the form \[X=(x_{1},x_{2},\ldots,(x_{i1},x_{i2},\ldots,x_{im}),\ldots,x_{n}), \tag{33}\] where \((x_{i1},x_{i2},\ldots,x_{im})\) is the RNS decomposition of the \(i\)-th channel. This technique introduces additional complexity, as any operation must be recursively applied to each level of the representation; however, it manages to handle large dynamic ranges with very small channels. The authors use a 48-bit equivalent dynamic range composed only of 4-bit MAC units which can be realized by the look-up tables of the FPGA. Contrary to [92], which relies on an external CPU, in this work binary-to-NRNS and NRNS-to-binary conversions are realized by DSP blocks and on-chip BRAMs. After input data are converted into the NRNS representation, a number of parallel convolutional units perform all the necessary computations of a single convolutional layer.
The results are then converted to binary using a tree-based NRNS-to-binary converter. The authors report a performance-per-area improvement of \(5.86\times\) compared to state-of-the-art FPGA implementations for the ImageNet benchmark. In a different approach, the RNS arithmetic costs are reduced by restricting the RNS base selection to low-cost moduli of the form \(2^{k}\pm 1\) [93]. This way, modified fast prefix adders and CSA trees using end-around-carry propagation can be used, diminishing any overhead of the modulo operator. In another category of RNS-based architectures, the usage of very small channels allows the realization of multiplier-free CNN architectures. The authors of [94] utilize a small RNS base of (3, 4, 5) and reduce the implementation of the multiplications to shifts and additions. Despite the reduced dynamic range of the representation, they report minimal accuracy loss, while achieving \(36\%\) and \(23\%\) reductions in power and area, respectively. A method to drastically reduce the number of multiplications in RNS-based CNN accelerators is proposed in [95]. It utilizes a modified hardware mapping of the convolution algorithm where the order of operations is rearranged. Because of the small dynamic range of each RNS channel, there is an increased number of common factors inside the weight kernels during convolution. By first executing the additions of the input feature map terms that correspond to the same factors, and then performing the multiplications with the common weight factors, a \(97\%\) reduction of the total multiplications is reported for state-of-the-art CNN models. ### _End-to-end RNS Architectures_ While the above circuits manage to achieve some performance gain in the implementation of a single convolutional layer, they require significant amounts of extra hardware to perform the conversions, which can become the bottleneck for some of these designs. More recent approaches focus on overcoming the difficulties of performing operations such as sign detection, comparison, and scaling, which is usually required following multiplication. In these approaches, input data are initially converted to an RNS representation and then the entire processing takes place in the RNS domain. #### IV-C1 State-of-the-art End-to-end RNS Architectures The system in [96] introduces some novel mechanisms for dealing with this problem and proposes an efficient fully RNS-based architecture. The authors of this work choose to work with moduli of the form \(2^{k}-1\), \(2^{k}\), \(2^{k+1}-1\). In particular, they select (31, 32, 63) as the base of their representation, as it is found to provide a sufficient dynamic range (16-bit equivalent) that results in no accuracy loss for state-of-the-art networks and benchmarks. For the design of the modulo adders, which are simplified due to the particular selection of the moduli, parallel-prefix Sklansky adders with an end-around carry are utilized. For the multiplications, a radix-4 Booth encoding is adopted within each channel. An optimized sign detection unit for this set of moduli is used, based on an approach proposed in [87], which can be further transformed to result in a relatively hardware-friendly implementation. Using a similar logic to the work proposed in [85], the comparison of two RNS numbers can also be implemented by calculating auxiliary partitioning functions. The authors also introduce a base extension mechanism, which is necessary in order to avoid potential overflow when accumulating the partial sums.
In this work, a base extension method proposed in [97] is used, where the middle channel is extended from \(2^{k}\) to \(2^{k+e}\). This way, the convenient properties of the chosen moduli are maintained. Base extension takes place once before each multiplication, to ensure that the product lies within the dynamic range, and then again before the accumulation. The authors define the number of extra bits that are added each time based on extensive simulation on benchmark networks and on a per-layer basis. The RNS circuits result in significant delay and energy efficiency improvements, especially in the case of multiplication, at the cost of a larger overall area. Comparisons in terms of various performance metrics against the Eyeriss [98] accelerator are reported for various networks. Up to 61% reduction in energy consumption compared to the conventional positional binary representation has been achieved. The system can also support an increased clock frequency, as high as \(1.20\) GHz versus \(667\) MHz in the case of the positional binary system, indicating a 1.8\(\times\) improvement in computational latency. #### IV-C2 In-Memory Computing RNS Architectures Recently, there has been a growing focus of AI accelerator design research on in-memory computing. This is because of the paradigm-shifting effect that emerging memory technologies can have on processing systems. It is known that the largest part of the energy consumption of any DNN accelerator is due to memory accesses and data transfers, particularly to and from the off-chip RAM. In-memory computing (IMC) aims to diminish data transfer costs by bringing the computing inside or near the memory elements. Efforts have been made to bring the benefits of the RNS to IMC systems. In these (mainly digital) IMC designs, the benefit of using RNS over a binary representation stems from the speedup of the bit-wise serial addition operations, due to the inherent parallelism of the RNS channels. RNS has been utilized in the design of an in-memory computing system [99]. In this work, the selected moduli are of the form \(2^{k}-1\), \(2^{k}\), \(2^{k}+1\). A sign detection mechanism similar to [87] is developed in order to implement the ReLU and MaxPooling operations without having to convert to a binary representation. Addition and multiplication within each RNS channel take place inside the memory elements. Multiplication of two numbers \(a,b\) is implemented through addition and memory accesses by calculating the quantity \(\frac{(a+b)^{2}}{4}-\frac{(a-b)^{2}}{4}\), where squaring is implemented using look-up tables. A single crossbar memory is assigned to each neuron and supports in-memory addition in a tree-based structure. For this purpose, Memristor Aided loGIC (MAGIC) is used. Based on experimental results, the proposed RNS in-memory architecture consumes 145.5\(\times\) less energy and leads to a speedup of \(35.4\times\) compared to an NVIDIA GTX 1080 GPU.
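The quarter-square trick of [99] is easy to sanity-check in software; in the sketch below (ours), a Python list stands in for the memristive squaring LUT:

```python
# Quarter-square multiplication: a*b = (a+b)**2 // 4 - (a-b)**2 // 4.
# In the in-memory design of [99] the squares come from lookup tables;
# here a plain list is our stand-in for that LUT.
MAX_OPERAND = 255
QUARTER_SQUARES = [n * n // 4 for n in range(2 * MAX_OPERAND + 1)]

def quarter_square_mul(a, b):
    """Multiply two non-negative ints using only adds and LUT reads."""
    assert 0 <= a <= MAX_OPERAND and 0 <= b <= MAX_OPERAND
    # a+b and a-b have the same parity, so the floored quarter squares
    # always differ by exactly a*b.
    return QUARTER_SQUARES[a + b] - QUARTER_SQUARES[abs(a - b)]

print(quarter_square_mul(13, 27))   # 351
```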
A near-memory RNS-based processing architecture is proposed in [100]. Instead of memristor-based memory macros, a DRAM computational sub-array is utilized for the implementation of the MAC operations in the RNS domain, combined with parallel-prefix adders, to implement bit-wise multiplication and accumulation. Unlike [99], where multiplication is directly implemented in memory (by mapping to additions and squaring), here the multiplications are implemented by combining elementary bit-wise operations (AND, OR, XOR) between the operands. The authors also design a more flexible activation function unit which is based on a Mixed-Radix conversion. Similar to [96], an RNS base of \((2^{k}-1\), \(2^{k}\), \(2^{k+1}-1)\) is utilized. Gains in the order of \(331-897\times\) in terms of energy efficiency compared to GPU platforms are reported, and \(2\times\) compared to other IMC designs. ### _Summary of RNS-based DNN Architectures_ RNS-based architectures targeting DNN applications are summarized in Table III. The majority of these approaches utilize low-cost moduli of the form \(2^{k}-1\), \(2^{k}\), \(2^{k}+1\) to reduce the overhead of the modulo operator, and they target CNNs. Most of these RNS accelerators can achieve speedups in the order of \(1.5-3\times\) and can also be more energy efficient. IMC RNS-based systems exhibit the largest energy savings. Among conventional systems, [96] illustrates most clearly the applicability of the RNS in DNN architectures by proposing a fully RNS system which outperforms its binary state-of-the-art counterpart. The RNS usage has also been extended to LSTM networks, by designing hardware-friendly RNS activation units for the implementation of the \(\tanh\) and \(sigmoid\) functions [101]. In conclusion, the Residue Number System (RNS) can be an attractive number representation choice for DNN accelerators due to its various advantages, and several RNS-based architectures targeting AI applications have been reported recently. RNS exhibits inherent parallelism at the residue channel processing level. It utilizes parallel computations along separate residue channels, where operations in each of them are performed modulo a specific modulus, with no need for information (carry or other) to be shared between residue channels. The main challenge in designing an efficient RNS-based accelerator is to minimize or, possibly, eliminate the overhead introduced due to the implementation of the non-linear operations. Another key factor is the optimization of the moduli selection and the corresponding arithmetic circuits to meet the accuracy requirements. Some of the RNS systems proposed in the recent literature only perform the multiply-add (MAC) or matrix multiplication operations required by the convolutional layers in the RNS representation, and use intermediate converters between number systems for the non-linear operations. More recently, completely RNS-based approaches have been proposed that eliminate the overhead introduced by these intermediate conversions to and from a traditional positional binary representation. ## V BFP for DNN Architectures The BFP representation offers a middle ground between the FLP and FXP formats. This representation is proposed to preserve accuracy comparable to full-precision FLP and hardware efficiency comparable to FXP. This is achieved by representing numbers with an exponent and a mantissa, similar to FLP, to guarantee a wide dynamic range. However, instead of representing each value separately, a group (called here a block) of values shares a common exponent while maintaining private mantissas.
Let \(N\) be a tensor that represents a block of \(t\) elements initially represented in FLP as \[\begin{split}N&=(n_{1},\ldots n_{i},\ldots n_{t}),\\ &=((-1)^{s_{1}}\ m_{1}\ 2^{e_{1}},\ldots(-1)^{s_{i}}\ m_{i}\ 2^{e_{i}},\ldots(-1)^{s_{t}}\ m_{t}\ 2^{e_{t}}).\end{split} \tag{34}\] This block is represented in BFP format as \(\hat{N}\) such that \[\begin{split}\hat{N}&=(\hat{n_{1}},\ldots\hat{n_{i}},\ldots\hat{n_{t}}),\\ &=((-1)^{s_{1}}\ \hat{m_{1}},\ldots(-1)^{s_{i}}\ \hat{m_{i}},\ldots(-1)^{s_{t}}\ \hat{m_{t}})\times 2^{\epsilon_{N}},\end{split} \tag{35}\] where \(\epsilon_{N}\) is a shared exponent between the elements of block \(N\), and \(\hat{m_{i}}\) is the aligned mantissa of element \(i\) such that \(\hat{m_{i}}=\mathbf{BS}(m_{i},e_{i}-\epsilon_{N})\), where \(\mathbf{BS}\) is the bit-shift operation. For a large difference between the private and shared exponents (\(e_{i}-\epsilon_{N}\)), this shifting causes some of the least-significant bits of the mantissa to be truncated. The truncation happens frequently when there are many outliers in a block, which in turn depends on the size of the block and the way the shared exponent is selected. Since the dot product is the basic operation involved in DNN inference and training, the main target of BFP is to simplify the complex hardware required to perform this operation when FLP is used. For two blocks \(\hat{N_{1}}\) and \(\hat{N_{2}}\) represented in BFP, the dot product is calculated as \[\begin{split}\hat{N_{1}}\cdot\hat{N_{2}}^{T}&=((-1)^{s_{1,1}}\ \hat{m}_{1,1},\ldots(-1)^{s_{t,1}}\ \hat{m}_{t,1})\times 2^{\epsilon_{N_{1}}}\cdot\\ &\quad\ ((-1)^{s_{1,2}}\ \hat{m}_{1,2},\ldots(-1)^{s_{t,2}}\ \hat{m}_{t,2})^{T}\times 2^{\epsilon_{N_{2}}}\\ &=2^{\epsilon_{N_{1}}+\epsilon_{N_{2}}}\sum_{i=1}^{t}(-1)^{s_{i,12}}\hat{m}_{i,1}\times\hat{m}_{i,2},\end{split} \tag{36}\] where \(s_{i,j}\) and \(\hat{m}_{i,j}\) are the sign and aligned mantissa of the \(i^{th}\) element in the \(j^{th}\) block, respectively, \(\epsilon_{N_{j}}\) is the shared exponent of the \(j^{th}\) block, \(s_{i,12}\) results from XORing \(s_{i,1}\) and \(s_{i,2}\), and \(T\) stands for transposition. Equation (36) shows that the dot product of two blocks of size \(t\) represented in BFP involves \(t\) FXP multiplications of mantissas, \(t-1\) FXP additions of the products, and one addition of the two shared exponents. The additional overhead compared to the FXP representation comes from the hardware required to handle the shared exponents, which mainly depends on the number of blocks [17]. As a result, the performance of a DNN in the presence of BFP representation is determined by the block partition scheme, the shared exponent selection, and the bit-widths of the mantissa and shared exponent, which are discussed next.
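A software model of (34)-(36) (ours; the 8-bit mantissas, the maximal shared exponent, and plain truncation are all assumptions of this sketch) shows where the alignment error enters:

```python
import math

MANT_BITS = 8   # private mantissa width (assumed for this sketch)

def to_bfp(block):
    """Quantize a block of floats to one shared exponent + integer mantissas.

    Assumes at least one nonzero element per block. Aligning small values
    to the shared exponent truncates their least-significant bits.
    """
    shared_e = max(math.frexp(v)[1] for v in block if v != 0.0)
    mants = [math.trunc(v * 2.0 ** (MANT_BITS - shared_e)) for v in block]
    return shared_e, mants

def bfp_dot(b1, b2):
    """Dot product of two BFP blocks, Eq. (36): FXP MACs + one exponent add."""
    e1, m1 = b1
    e2, m2 = b2
    acc = sum(x * y for x, y in zip(m1, m2))          # integer multiply-add
    return acc * 2.0 ** (e1 + e2 - 2 * MANT_BITS)     # apply shared exponents

x = to_bfp([0.91, -0.13, 0.02])
w = to_bfp([0.40, 0.25, -1.10])
print(bfp_dot(x, w))   # ~0.3075 (exact dot product: 0.3095; gap = truncation)
```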
### _BFP Block Design_ Determining how the blocks are partitioned is essential to achieving good DNN performance with BFP [102, 103]. Usually, the input activation of each layer is considered as one block, whereas the weight matrix needs a specific scheme to be divided into blocks. There are two known blocking approaches, filter-based blocking [23, 102, 103, 104, 105, 106] and tile-based blocking [71, 108, 109, 110, 111], illustrated in Figure 9(a) and Figure 9(b), respectively. In filter-based blocking, each filter of weights along the input channels is considered a block, so the total number of blocks equals the number of filters. This blocking is usually called coarse-grain blocking, and it is the most hardware-friendly blocking approach, as the accumulation of each output activation is done with the same shared exponent. Thus, it can be done using FXP arithmetic [102]. However, this approach may end up with severe accuracy degradation due to the increased number of outliers that need to be truncated within these large blocks. On the other hand, tile-based blocking is proposed to strike a compromise between accuracy and hardware efficiency. This approach relies on breaking the large matrices of the filters down into small tiles that fit into limited hardware resources. Each tile is considered as a block with a shared exponent. The size of these tiles is a metric that needs to be optimized. For example, a large tile of size 576 is used in [110], which requires a 12-bit mantissa to obtain acceptable accuracy. However, the authors in [109] showed that 12-bit FXP can achieve similar accuracy with a simpler hardware implementation. This indicates that BFP may have no advantage over FXP for such large tiles. Smaller tiles of 16 elements are used in [71, 17] seeking better accuracy, but with an added hardware complication that comes from the need to convert to FLP before the accumulation. ### _Shared Exponent Selection_ One shared exponent for each block needs to be selected after partitioning the blocks (as in the case of partitioning the weight blocks prior to DNN inference, which is usually performed offline) and whenever a new block is created with multiple shared exponents. For example, the exponent is aligned after performing the calculation of each DNN layer, as the calculation of the output activations usually ends up with a matrix of multiple exponents [103]. To this end, most DNN accelerators that adopt BFP calculate the shared exponent dynamically during DNN training or inference. Static shared exponent selection can be utilized prior to DNN inference. One of two schemes is usually used for dynamic shared exponent selection: the maximal exponent-based or the statistics-based scheme. The dynamic maximal exponent selection scheme is more popular [112, 113, 114]. In this scheme, for each block of (34), the different floating point numbers \(n_{i}\) are compared and the maximum exponent is selected as follows \[\epsilon_{N}=\max_{i\in\{1,\ldots,t\}}e_{i}. \tag{37}\] To find this maximal exponent before performing the dot product between the weights and the activations resulting from the previous layer, the output activations, represented in BFP with several exponents, need to be converted back into FLP, which adds a large overhead on the performance and the resources. To keep the advantage of the dynamic calculation of the shared exponent while avoiding frequent conversion between BFP and FLP, the statistics-based scheme is proposed to predict the shared exponent during DNN training [115, 23]. In this scheme, the optimal exponent for each block is predicted based on statistics collected in the previous learning iterations. For example, in [23] the maximum value recorded within each block is stored for the last \(i\) iterations. Then, the maximum and the standard deviation of the stored values are used to calculate the shared exponent for the next iteration. This scheme works because the values within each block change slowly during the training. However, although this scheme avoids the conversion to FLP to calculate the exponent, some additional overhead is required to store the recorded statistics for each block. Thus, this scheme is suitable for the case when the number of blocks is relatively small.
The static shared exponent scheme is presented to get rid of the exponent calculation overhead when BFP is employed for CNN inference rather than training [105, 106, 116, 117]. Instead of dynamically calculating the shared exponent during run-time, the shared exponent can be set to a constant value estimated offline. The common approach to determine the shared exponent offline is to minimize the Kullback-Leibler (K-L) divergence [117] between the FLP32 distribution and the BFP distribution of all blocks before the inference. By doing so, the extra memory and computational resources used for the exponent and the conversion between BFP and FLP are eliminated [17]. Because the input and output activations may have different shared exponents, a bit shifting is needed after each layer calculation, Figure 10(c). Figure 10 summarizes the dataflow of BFP when each of the three shared exponent determination schemes is adopted. 

Fig. 9: Different blocking schemes; different colors indicate different blocks. 

### _BFP Precision_ The precision of BFP is determined by the number of bits allocated to both the shared exponent and the mantissa. Reducing this precision is an objective in order to increase the arithmetic efficiency and memory bandwidth. At the same time, an over-reduced bit-width of the mantissa results in what is known as the zero setting problem [118]. This problem occurs when all the bits of the mantissa are shifted out, resulting in a zero number representation despite the presence of the exponent value. The over-reduction of the shared exponent's number of bits is much worse. This is because of its insufficiency to represent the actual exponent of the block, and thus the caused truncation ruins the correct representation of all numbers in the block. This precision is usually either static [102, 103, 104, 105, 106, 107, 111, 112, 113, 114, 115, 116, 119] or dynamic [109, 118]. In the static precision, the number of bits is fixed and selected offline. To select the best precision, usually a few experiments are performed using different numbers of bits [104, 105, 115]. This gives insight into the impact of this metric on the performance of the DNN and allows for picking the minimum number of bits that preserves acceptable accuracy, or the one that gives the best trade-off between hardware efficiency and accuracy. Reducing the mantissa bit-width has received attention in the literature because the performance of DNNs is less sensitive to mantissa reduction compared to the shared exponent.
For example, a 23-bit mantissa, the same as in the case of FLP, is required to guarantee the convergence of the Q-learning in [115], whereas an 8-bit mantissa, or even less, was found to be sufficient for other CNN accelerator designs [102, 103, 104, 105, 106, 107, 109, 111, 116, 117]. ### _Summary and Discussion of BFP-based DNN Architectures_ While BFP was initially presented to enable efficient hardware capable of performing the CNN training phase without ruining the accuracy, this representation got the same amount of attention for highly accurate inference hardware implementation.
Most of these architectures achieved negligible accuracy degradation compared to FLP even with less than an 8-bit mantissa [17, 109, 120]. Different implementations make use of different combinations of the discussed design choices; thus, the reported results of these works cannot be used to prove the superiority of a specific design choice over the others. However, we can conclude that there is no clear trend of accuracy enhancement when tile-based blocking is used instead of a filter-based one. ## VI DFXP for DNN Architectures The DFXP representation shares the same concept as BFP, discussed in Section V, and sometimes the notations DFXP and BFP are used interchangeably. As in the case of BFP, in DFXP the values are grouped and different scaling factors (i.e., shared exponents) are used for different groups. Thus, a scaling factor is unique for each group (e.g., layer). In some cases, it can be changed from time to time (i.e., dynamic). This is in contrast to FXP, which assigns a single global scaling factor to the whole DNN architecture all the time. To this end, Equations (34)-(36) are applicable to DFXP. Although several works use the term DFXP to indicate a representation similar to BFP [113, 114, 123, 124], the majority of works use DFXP to indicate an FXP representation provided with the flexibility to change the place of the decimal point, which specifies the lengths of the integer and fraction parts for each group of values, Figure 1d. This requires a scaling factor \(\epsilon_{N}\) of a group \(N\), (35), to be in the range \([-w_{N},0]\), where \(w_{N}\) is the bit-width used to represent the elements of group \(N\) [125, 126, 127]. Hence, the DFXP representation can be reduced to an \(\langle I_{N},F_{N}\rangle\) format, where \(I_{N}\) and \(F_{N}\) are the numbers of bits allocated to the integer and fractional parts, respectively, for all values within a group, such that \(w_{N}=I_{N}+F_{N}\) and \(\epsilon_{N}=-F_{N}\). Thus, the zero setting problem that frequently happens with BFP will not appear for DFXP, at the expense of a limited dynamic range, which is nevertheless still better than that of FXP. We limit the discussion in this section to these works; the works that use the DFXP notation to indicate BFP are discussed in Section V, although they use the term DFXP. A notable difference between DNN architectures that use BFP and DFXP is that the latter gives less attention to the way the groups (i.e., blocks) are partitioned. The common grouping approach for DFXP-based DNN architectures is to consider the weights, biases, input activations, and gradients (when DFXP is used to accelerate training) of each layer as separate groups, thus associated with different scaling factors [128, 129, 130, 131, 132, 133, 134]. Only one architecture, presented in [40], statically clusters the filters (i.e., weights) that accumulate to the same output activation of each layer. Then, each cluster represents a group that has its unique scaling factor. The quantization error is effectively reduced with smaller clusters (e.g., when a cluster contains 4 filters) since smaller groups tend to have a smaller range of values. The main differences between the DFXP representations in different works are the ways of finding the best scaling factor \(F_{N}\) and determining the bit-width \(w_{N}\). The approaches used to optimize the decimal point position and specify the precision of DFXP are classified in the next subsections.
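A minimal model of the \(\langle I_{N},F_{N}\rangle\) format (ours; round-to-nearest with saturation is an assumed, not prescribed, policy):

```python
def dfxp_quantize(values, I, F):
    """Quantize a group of reals to DFXP <I, F> format (illustrative sketch).

    Each value becomes a signed (I + F)-bit integer with F fractional bits,
    i.e., a scaling factor of 2**-F shared by the whole group (sign counted
    within the integer part here, as one possible convention).
    """
    lo, hi = -(1 << (I + F - 1)), (1 << (I + F - 1)) - 1
    q = [max(lo, min(hi, round(v * (1 << F)))) for v in values]  # round + saturate
    return q, -F   # integer codes and the group scaling factor eps_N = -F

codes, eps = dfxp_quantize([1.73, -0.42, 3.99], I=3, F=5)
print(codes, [c * 2.0 ** eps for c in codes])
# [55, -13, 127] -> [1.71875, -0.40625, 3.96875] (3.99 saturates)
```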
### _Group Scaling Factor Selection_ The scaling factor (i.e., \(F_{N}\)) assignment to each group in DFXP is usually performed in an offline or online manner. The offline assignment is usually used when the architecture is implemented for inference purposes [128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140]. The common approach for the offline assignment depends on finding the minimum integer bit-width \(I_{N}\) that accommodates the maximum value within a group, as in \[I_{N}=\lceil\log_{2}(\max(|n_{max}|,|n_{min}|))\rceil, \tag{38}\] where \(n_{max}\) and \(n_{min}\) are the maximum and minimum values within a group \(N\). The remaining bits \(w_{N}-I_{N}\) are allocated to the fractional part \(F_{N}\). This approach is used, for example, in [126, 128]. However, as the presence of outliers in a group results in an unnecessary increase in the integer bit-width, the outliers can be excluded before calculating the bit-width \(I_{N}\) [126]. Several works minimize the impact of the outliers by selecting a scaling factor that minimizes the error between the computed and real values [127, 130, 135]. For instance, the K-L divergence between the FLP32 and DFXP weight distributions is used in [127], whereas a greedy algorithm is utilized in [130] to determine the best scaling factor. The online scaling factor selection is needed for the training phase, in which the values within each group change frequently [123, 127, 129, 133, 141, 142, 143]. Usually, the scaling factor is updated at a given frequency based on the rate of overflow during the training. When the current integer part fails to handle a value in a group, the overflow rate increases. The overflow rate is compared to a threshold to decide whether the scaling factor should be increased or decreased. This threshold can be deterministic and predefined [129, 133, 141, 142], or stochastic [143]. 

Fig. 11: Example of how the distributions of weights (a), activations (b), and weight updates (c) change during different DNN training iterations [23].
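The offline split of Eq. (38) can be sketched as follows (ours; reserving one extra bit for the sign and flooring \(I_{N}\) at zero are our conventions, not mandated by the cited works, and the group is assumed to contain a nonzero value):

```python
import math

def offline_integer_bits(group, w):
    """Offline <I_N, F_N> split per Eq. (38): fit the largest magnitude."""
    n_max = max(abs(v) for v in group)             # covers |n_max| and |n_min|
    I = max(0, math.ceil(math.log2(n_max))) + 1    # +1 sign bit (our convention)
    F = w - I                                      # remaining bits -> fraction
    return I, F

print(offline_integer_bits([3.2, -5.7, 0.9], w=8))  # (4, 4): |-5.7| needs 3 bits + sign
```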
| Ref. | Phase | DNN | Exponent selection | Block design | Dataset | Model | Mantissa bits | Accuracy loss (%) | Area | Power | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [23] | Training, inference | CNN, GANs | Dynamic, statistics | Filter-based | CIFAR-10, LSUN | AlexNet, WGAN | 16 | ~0 | - | - | - |
| [112] | Inference | CNN | Dynamic, max | Tile-based | ILSVRC | AlexNet, DenseNet | 10 | ~0 | - | - | 10 ([121]) |
| [103] | Inference | CNN | Dynamic, max | Filter-based | MNIST, CIFAR-10 | ResNet-18, ResNet-50, GoogLeNet | 8 | <0.3 | - | - | - |
| [110] | Training, inference | CNN, RNN | Dynamic, max | Tile-based | CIFAR-100, SVHN, ImageNet | ResNet, WideResNet, DenseNet | 8 | <1 | - | - | 8.5 (FLP16) |
| [108] | Inference | CNN | Dynamic, max | Tile-based | Sports-1M | Custom | 15 | 0.4 | - | 92 (Intel i7-950) | 8.2 (Intel i7-950) |
| [107] | Inference | CNN | Dynamic, max | Filter-based | ImageNet | ResNet-50 | 8 | <0.14 | - | 31 (FLP16 [122]) | - |
| [102] | Inference | CNN | Dynamic, max | Filter-based | ImageNet, CIFAR-10 | VGG16, GoogLeNet, ResNet-50 | 8 | 0.12 | - | 15 (FLP16 [122]) | 3.76 |
| [104] | Training, inference | CNN | Dynamic, max | Filter-based | ImageNet, CIFAR-10 | - | 8 | <3 | - | - | - |
| [119] | Training, inference | CNN | Dynamic, max | Filter-based | MNIST, CIFAR-10 | - | 8 | 0.1 | - | - | 17 (ARM A53) |

The stochastic thresholding is presented because a low deterministic threshold results in inaccurate representation of small values, while a high threshold causes large clipping error [143] (see Figure 12). Randomly shuffling between the higher and lower thresholds is found to be effective in compensating for the accuracy degradation of low-precision training (fewer than 6 bits).

### _DFXP Precision_

The bit-precision of DFXP (i.e., \(w_{N}\)) can be static, mixed, or dynamic, with different trade-offs between accuracy and hardware efficiency. Static precision, used in [126, 127, 134, 136, 139, 141], means that the number of bits is specified statically beforehand and kept fixed for all groups during training or inference, i.e., \(w_{N_{i}}=w_{c}\) for \(i=1,\ldots,N_{t}\), where \(N_{t}\) is the total number of groups associated with a specific DNN architecture. The advantage of this scheme is its simplicity from the hardware-efficiency point of view. However, the selected precision is not optimal for all groups, layers, and architectures [143]. In the mixed-precision scheme, on the other hand, the bit-width, which is also determined offline, can differ across groups [125, 40]. The need for mixed precision mainly comes from the fact that different groups (such as weights and activations) have different required dynamic ranges and thus different required numbers of bits [138]. As the activation results from the convolution accumulation, it is usually allocated more bits. For instance, using DFXP with 4-bit weights and 8-bit activations gives an accuracy degradation within 2% of the full precision using the ResNet-50 CNN model on the ImageNet dataset [40]. In other works, different precision is allocated to different groups in different layers [135, 125]. The authors in [125] stated that a specific fully connected layer activation is more sensitive to bit reduction and is better allocated 16 bits, while the activation bit-width of the other layers can be shrunk to 8 bits.
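To make the per-group scaling concrete, the following minimal Python sketch quantizes one tensor group to DFXP using the dynamic, max-based scale-factor selection listed in the table above. The function name and the rounding and saturation details are our own simplifications, not the exact schemes of the cited works.

```python
import numpy as np

def dfxp_quantize(x: np.ndarray, w: int):
    """Map a tensor group to w-bit signed integers that share one
    power-of-two scale factor, chosen dynamically from the group's
    maximum magnitude (the 'dynamic, max' selection policy)."""
    # Smallest exponent keeping max|x| within the w-bit signed range.
    exp = int(np.ceil(np.log2(np.max(np.abs(x)) + 1e-12))) - (w - 1)
    scale = 2.0 ** exp
    q = np.clip(np.round(x / scale), -(2 ** (w - 1)), 2 ** (w - 1) - 1)
    return q.astype(np.int32), exp  # each value is approximately q * 2**exp

# Example: an 8-bit group of small weights gets a small (negative) exponent.
weights = np.array([0.021, -0.003, 0.017, -0.042])
q, exp = dfxp_quantize(weights, w=8)
print(exp, q)  # -11, [ 43  -6  35 -86]
```

Mixed precision, as just discussed, then amounts to calling such a routine with a different \(w\) per group (e.g., 4-bit weights and 8-bit activations).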
The mixed-precision scheme of [125] achieves a 55.64% saving in weight storage and 69.17% in activation memory traffic with less than 2.5% accuracy loss when the AlexNet model and ImageNet dataset are used. The experiments in [135] show similar results: groups in shallower layers are less robust to bit reduction than those in deeper layers. In addition, the computation of the first and last network layers should use high bit-precision to achieve better performance. To optimize the mixed precision for different groups and reach the above conclusions, the authors in [135] adopted an iterative bit-precision reduction scheme that aims to discover the groups whose bit-precision can be reduced without causing noticeable performance degradation. When DFXP with mixed precision is used for training, different bit-widths are sometimes used for the weights during the updates than during the forward and backward propagations [27]. Using higher precision for the weight updates allows the small changes in the weights to be accumulated precisely. DFXP with dynamic precision adjusts the bit-width on-the-fly during training to speed up the process [129, 143, 142]. The scheme in [129, 142] suggests starting with an aggressive initial target bit-width and monitoring the training loss as feedback from the training process. If the training becomes unstable, the bit-width is increased to its maximum value; afterward, the target bit-width is gradually increased by a unit step for the next trial. This procedure is repeated until reaching the minimum target bit-width that allows for stable training. To keep the overhead of this algorithm low, it is activated once after each forward/backward computation to find the global bit-width of the DNN architecture. A simpler search-based scheme to adapt the bit-width of each layer is suggested in [143]. In this scheme, the convolution is calculated with both low and high bit-widths simultaneously for several iterations per epoch. If the difference between the high- and low-precision results exceeds a predefined threshold, the bit-width is increased starting from the next iteration until the end of the epoch. After applying this scheme to different datasets and different CNN models, an interesting conclusion was that different datasets require different average bit-widths even when the same model is used. One added complication of utilizing a dynamic bit-width is the need for a configurable processing unit that can compute with various bit-widths at run-time. Thus, the efficiency of the dynamic precision scheme is highly affected by the hardware's support for the required bit-width levels. Two relatively high bit-width levels (32 bits and 64 bits) are adopted in [129]; the baseline precision used to demonstrate the efficiency of their approach is 64 bits, which is a relatively high training precision compared to other works. On the other hand, [143] could train CNNs with negligible loss of training and testing accuracy using an average bit-width of less than 8, because they were able to use finer bit-width levels thanks to the bit-slice serial architecture they proposed. ### _Summary and Discussion of DFXP-based DNN Architectures_ DFXP and BFP are very similar representations. DFXP can be considered a subset of BFP, with a smaller dynamic range and less hardware complication at the same time.
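The hardware simplicity claimed here can be illustrated with a short sketch of the shared-exponent realignment step; this is our own simplified model of the idea rather than a specific circuit from the cited works.

```python
def realign(acc: int, exp_in: int, exp_out: int) -> int:
    """Move an integer accumulator from scale 2**exp_in to scale
    2**exp_out so that value = acc * 2**exp_in is (approximately)
    preserved. In hardware this is a single barrel shifter; the
    right shift simply truncates low-order bits."""
    shift = exp_in - exp_out
    return acc << shift if shift >= 0 else acc >> -shift

# A layer output accumulated at scale 2**-12 is re-aligned to the next
# layer's expected input scale 2**-7 before being consumed.
print(realign(acc=19456, exp_in=-12, exp_out=-7))  # 19456 >> 5 = 608
```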
For example, when the DFXP representation is used for CNN inference, the only additional hardware required over FXP is a simple bit-shifter to align the output activation with the scale factor of the next layer's input activation [136, 138]. This simplicity makes it appealing for many DNN architectures [27, 40, 113, 123, 144, 124, 125, 145, 146]. Considering the number of accelerators in the literature that utilize each representation, DFXP can be regarded as the most widely used alternative number system. Its wide adoption can be attributed to its simplicity and to its implementation in some publicly available DNN frameworks, such as Ristretto [128].

Fig. 12: Comparing the deterministic and stochastic thresholds for online scaling factor selection [143]

Several of the DFXP-based DNN architectures used this representation without contributing much beyond the vanilla DFXP. Other works used different approaches to select the scaling factor of each group and to optimize the bit-width of this representation; the different approaches to these design choices are discussed and compared above.

## VII Posit for DNN Architectures

The Posit number system, also known as the type III universal number (Unum) system [147, 148, 149], is a floating-point-like format proposed to overcome several shortcomings of the FLP representation [148]. Compared to FLP, Posit uses the bits more efficiently (allowing better accuracy with the same number of bits) [150] and has better accuracy and dynamic range [148, 151]. Figure 1e illustrates the Posit representation. A \(w\)-bit Posit number consists of four fields: a sign (1 bit), a regime (of variable length \(rs\in[1,w-1]\)), an exponent \(e\) (an unsigned integer of fixed length \(es\)), and a mantissa (of variable length \(ms=w-rs-es-1\)). The regime field contains \(d\) consecutive identical bits and an inverted terminating bit (i.e., \(rr\ldots r\bar{r}\))5.

Footnote 5: This is the general case when \(rs<(w-1)\). Otherwise, the regime pattern can be \(rr\ldots r\) when it is terminated by the end of the \(w\) bits [152].

The numerical value of a real number \(n\) is represented in the Posit format (by \(\hat{n}\)) as follows:

\[\hat{n}=(-1)^{s}\times u^{k}\times 2^{e}\times(1+\frac{m}{2^{ms}}), \tag{39}\]

where \(s\), \(e\), and \(m\) are the values of the sign bit, exponent, and mantissa, respectively, and \(u\) and \(k\) are given by (40) and (41):

\[u=2^{2^{es}}, \tag{40}\]

\[k=\left\{\begin{array}{ll}-d,&\text{if }r=0,\\ d-1,&\text{if }r=1,\end{array}\right. \tag{41}\]

The Posit representation is commonly characterized by the two parameters \(w\) and \(es\), and written as Posit(\(w\),\(es\)) [147, 153]. The parameter \(es\) controls the trade-off between precision and dynamic range [147]. When Posit is used for DNNs, these parameters are usually specified offline, regardless of whether the architecture targets training or inference [147, 148, 153, 154]. They are typically selected by experimenting with different values and choosing those that give the best accuracy [154] or the best balance between accuracy and hardware efficiency. For instance, when the exponent length is set to \(es=1\) in [155], a better trade-off between accuracy and energy-delay product is obtained for \(w=7\) and \(w=5\).
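As a sanity check on the format definition, the following minimal Python sketch decodes a Posit(\(w\),\(es\)) bit string according to (39)-(41). It is an illustrative reading of the format: the special zero/NaR patterns and the two's-complement handling of negative posits are deliberately ignored.

```python
def decode_posit(bits: str, es: int) -> float:
    """Decode an MSB-first posit bit string into its real value,
    following n = (-1)^s * u^k * 2^e * (1 + m/2^ms) with u = 2^(2^es)."""
    w = len(bits)
    s = int(bits[0])
    # Regime: d identical bits after the sign, ended by an inverted bit.
    r, d, i = bits[1], 1, 2
    while i < w and bits[i] == r:
        d, i = d + 1, i + 1
    i += 1  # skip the terminating (inverted) regime bit, if present
    k = (d - 1) if r == '1' else -d
    # Exponent: the next es bits (zero-padded if the pattern runs out).
    e = int(bits[i:i + es].ljust(es, '0'), 2) if es > 0 else 0
    i += es
    # Mantissa: whatever bits remain (variable length ms).
    m_bits = bits[i:]
    ms, m = len(m_bits), int(m_bits, 2) if m_bits else 0
    u = 2 ** (2 ** es)
    return (-1) ** s * u ** k * 2 ** e * (1 + m / 2 ** ms)

# Posit(8,1): sign 0, regime '110' -> k=1, exponent '1' -> e=1,
# mantissa '001' -> 1/8, so the value is 4^1 * 2^1 * 1.125 = 9.0.
print(decode_posit("01101001", es=1))
```

Sweeping \(es\) in such a sketch makes the precision versus dynamic-range trade-off directly visible, which is how the parameter choices discussed here are typically tuned.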
On the other hand, the author in [153] decided to eliminate the exponent part (i.e., \(es=0\)), as the Posit in this case better represents the dynamic range of the used DNN weights. There are two main differences between the Posit and FLP representations (Figures 1e and 1a). The first is the presence of the regime field, and the second is the variability of the mantissa bit-length. Indeed, the innovation in the Posit format comes from its ability to allocate more bits to the mantissa when the represented number is very small (i.e., higher precision) and fewer bits for large numbers (i.e., larger magnitude) without changing the total bit-width of the format [150]. Posit is usually known for its _tapered accuracy_: numbers with small magnitudes around '1' have more accuracy than extremely large or extremely small numbers [156]. The authors in [147] compared the decimal accuracy (\(-\log_{10}|\log_{10}(\frac{\hat{x}}{x})|\), where \(x\) is the actual real value and \(\hat{x}\) is the represented value [149]) of different Posit representations to FLP8 and FXP8; see Figure 13. Their experiment showed that: i) the FXP representation has a peak accuracy, so it is suitable for representing data with a narrow range; ii) the floating point has almost constant accuracy, and it should be used to represent uniformly distributed data to exploit its efficiency; and iii) the Posit has tapered accuracy, which makes it suitable for representing normally distributed data efficiently. Since DNN data are usually normally distributed (see, for example, Figure 11), Posit is expected to be the most attractive number system for DNNs [147]. DNN architectures that use the Posit number system usually either rely on the Posit format end-to-end [150, 151, 157, 158, 159] or partially utilize this format, requiring conversion from and to other formats within the architecture [153, 160]. These two approaches are discussed next. In addition, several Posit variants have been proposed to increase the efficiency of the Posit number system for DNNs; these variants are reported below as well.

### _End-to-end Posit-based Architectures_

When DNN data are represented in Posit end-to-end, new hardware that is able to perform all operations on these data must be used. In this case, the most fundamental arithmetic operations that need to be carefully designed in hardware are the MAC operation and the activation functions [150].

Fig. 13: The decimal accuracy of the Posit representation compared to FLP8 and FXP8 [147]

Different designs of the Posit-based MAC (or multiplier) are proposed in [16, 147, 148, 155, 159, 161, 162, 163, 164]. In most of these works, the MAC design mainly follows the standard FLP MAC, as in [147, 148, 155, 161, 162, 163, 164]. The main additional steps over the FLP MAC design are decoding to extract the Posit fields of the operands and encoding the result back into the Posit format [159]. Indeed, the Posit MAC hardware implementation is more complicated and less efficient than the FLP MAC with the same number of bits because of the length variability of the regime and mantissa fields. It is shown in [16] that a Posit(32, 6) multiplier has 78% more area and consumes 94% more power than the FLP32 multiplier. This is attributed to the fact that the multiplier must be designed to handle the extreme lengths of the mantissa, which is \(w-es-2\), and the regime, which is \(w-1\).
In addition, the critical path of this Posit multiplier is found to be longer than that of FLP32 due to the sequential bit decoding required for Posit. By making the fields of the Posit format fixed, the area and power efficiency increased by 47% and 38.5%, respectively, over the variable-length-field Posit, at the expense of negligible accuracy loss. Similar results are shown in [165] as well. Alternatively, to design a more power- and area-efficient multiplier, the authors in [159] proposed a Posit-LNS approximate multiplication. This combination exploits the accuracy advantages of Posit and the hardware efficiency of LNS. The general concept of performing LNS multiplications is similar to Mitchell's approximation discussed in Section III-B1a, but applied to the Posit format instead. For example, the logarithm of a Posit number is given in (42) by taking the logarithm of both sides of (39) and applying the approximation in (7):

\[\log_{2}(|\hat{n}|)=2^{es}\times k+e+\frac{m}{2^{ms}}. \tag{42}\]

Consequently, Posit multiplication is performed using fixed-point addition. The experiments in [159] showed significant reductions in the multiplier area by 72.86%, power by 81.79%, and delay by 17.01% compared to the Posit multipliers in [158]. The implementation of several activation functions for Posit-represented data is discussed in [147, 157, 166]. The Sigmoid activation function in (11) is found to be easy to implement in hardware for Posit-represented data [149]: a few simple bit-cloning and masking operations are adequate to approximate this function. Similarly, fast implementations of the \(\tanh\) and the Exponential Linear Unit (ELU) activation functions are presented in [166] and [167], respectively.

### _Partial Posit-based Architectures_

Several architectures aim to benefit from the high accuracy and dynamic range of Posit while avoiding its hardware inefficiency by representing only the weights in Posit prior to the inference process [153, 160]. This enables a significant decrease in both storage and communication overheads. These weights are then converted back to another format, such as FLP in [153] or FXP in [160], during the computation. The only overhead over the hardware of the standard architectures consists of modules to convert from Posit to the other formats and vice versa. The penalty of converting Posit to FXP is an increase in the critical path delay and power consumption of the MAC by 22.8% and 5%, respectively [160].

### _Posit Variants_

Two Posit variants have been proposed for DNNs: the fixed-Posit representation [16] and the generalized Posit representation [152, 156]. As its name indicates, the fixed-Posit representation uses a fixed regime length \(rs=\text{constant}\) instead of the variable length of vanilla Posit. Although the dynamic range and accuracy of this representation are expected to be lower than those of Posit, it results in much more efficient hardware, in terms of power, area, and delay, with negligible loss in classification accuracy (0.12%) when used for ResNet-18 on ImageNet [16]. The generalized Posit representation [152, 156] proposes a modification to the vanilla Posit format to better represent the dynamic range and data distribution of DNNs. It relies on the observation that Posit with \(w<8\) and a specific \(es\) is unable to accommodate the variability in parameter distributions and dynamic ranges across different DNN layers and models.
Instead of using mixed-precision Posits, which requires a very large search space (as huge as \(4^{110}\) for ResNet-110 when 4 different \(w\) values are searched [152]), the Posit format is modified by inserting two hyper-parameters that can be adjusted per layer to enable parameterized tapered accuracy and dynamic range. These two hyper-parameters are the exponent bias and the maximum regime bit-width; they are applied by replacing \(e\) in (39) with \(e+sc\), where \(sc\) is the exponent bias, and by restricting the number of bits allocated to the regime to \(rs\leq rs_{max}\). The exponent bias is used to scale the zone of maximum accuracy (i.e., the minimum and maximum magnitude values) downward or upward in order to track the data distribution of different layers. The maximum regime bit-width \(rs_{max}\) controls the maximum and minimum representable magnitudes. When \(rs_{max}=1\), the generalized Posit becomes an FLP-like format, whereas it turns into the vanilla Posit format when \(rs_{max}=w-1\). Various tapered-precision representations can be obtained by selecting \(rs_{max}\) between these two bounds. Experimental results on several datasets and CNN models showed that the generalized Posit offers considerable accuracy improvement for \(w<8\) bits compared to vanilla Posit, at the expense of a relatively moderate increase in energy consumption.

### _Summary and Discussion of Posit-based DNN Architectures_

The Posit representation can be considered a variant of FLP. It offers better accuracy and a wider dynamic range than FLP; thus, Posit can represent DNN data more efficiently with the same number of bits. However, the hardware implementation of Posit is generally found to be more complicated than FLP hardware, since it relies on FLP-like hardware in addition to the hardware needed to convert from and to FLP. Several attempts to enhance Posit hardware efficiency, discussed above, include combining Posit with other representations (FXP and LNS) or modifying Posit by fixing or limiting the regime field.

## VIII Future Directions and Open Research Issues

Next, we briefly highlight several issues and opportunities for future research on DNN number systems, including dynamic number representations, hybrid number systems, and the utilization of DNN statistics.

### _Dynamic Number Systems_

The main challenge of using low-precision number systems for training DNNs is the dynamic distribution of weights, activations, and gradients during training. In addition, several works show that the optimal parameters of the number system (e.g., bit-widths) can differ across datasets. This makes a dynamic number system (i.e., one that can adjust its parameters either offline or at run-time) highly desirable, especially for training DNNs. However, implementing such a system with online adaptation complicates the hardware, which must be re-configurable to adapt to changes in the number format. Several works that adopt a format with a dynamic bit-width, for example [168], discussed the worth of this approach from a software (accuracy and speed gain) point of view. It seems worthwhile to investigate the effectiveness of a dynamic number system from the hardware-efficiency perspective as well.

### _Hybrid Number Systems_

Several hybrid number systems have been investigated.
Some examples of hybrid representations include DFXP with binary FXP [127], DFXP with ternary FXP [40], DFXP with FLP [132], dual DFXP with DFXP [136], FXP with Posit [16], BFP with LNS [106], Posit with LNS [159], and RNS with LNS [169]. Combining two number systems allows for gaining the benefits offered by both. The hybrid representations are found to be more efficient, from both hardware and accuracy points of view, than using each representation separately. More combinations of these representations can be investigated in the future. For example, applying the same concept of BFP (i.e., each block sharing the same exponent) to the Posit number system is expected to alleviate the hardware complexity compared to the vanilla Posit number system.

### _Utilization of DNN Characteristics_

DNNs have special characteristics that should be considered when searching for more efficient representations dedicated to them. For example, the ability of neural networks to tolerate noise is exploited in [66] to design an efficient LNS multiplier by reducing the average rather than the absolute error introduced by the multiplier. This results in enhancing the accuracy of the DNN instead of degrading it, as would be anticipated when using approximate multipliers. Another example of utilizing the noise tolerance of DNNs is stochastic rounding (i.e., rounding a number up or down at random) when a real number is mapped to a specific representation. This kind of rounding has allowed training DNNs with lower precision when integrated with FXP [170, 21, 171], BFP [109], Posit [150], or DFXP [128]. Similarly, the ability to cluster DNN data into groups with narrower dynamic ranges gave birth to the BFP and DFXP representations. Moreover, realizing that DNN data are normally distributed shed light on the effectiveness of the Posit number system, which has tapered accuracy. Future work on DNN number systems should take these and other DNN characteristics into account to achieve more efficient representations.

## IX Summary and Conclusions

Deep neural networks have become an enabling component for a myriad of artificial intelligence applications. Being successful in providing great performance and even exceeding human accuracy, they have attracted the attention of academia and industry. The great performance of DNNs comes at the expense of high computational complexity and intensive memory requirements. Thus, increased attention is paid to redesigning DNN algorithms and hardware in an effort to enhance their performance and/or enable their deployment on edge devices. A research direction that has a great impact on the performance of DNNs is their number representation. A great body of research has focused on finding number systems more suitable for DNNs than the standard FLP and FXP. The standard FLP representation has a massive dynamic range, which makes it a good choice for computationally intensive algorithms that include a wide range of values and require high precision. At the same time, the complex and power-hungry FLP calculations make it less attractive for DNN architecture implementation. On the other hand, FXP for DNN implementation offers great hardware efficiency at the expense of accuracy degradation. Between the two extreme representations (FLP and FXP), there are several number systems that are used for DNNs and offer different trade-offs between energy efficiency and acquired accuracy.
The surveyed alternative number systems for DNNs are the LNS, RNS, BFP, DFXP, and Posit number systems. The main objective of using LNS is to simplify the implementation of the costly multiplication operation and obtain multiplication-free DNN accelerators. This hardware simplification allows for significant savings in area, power consumption, and cost, with some accuracy degradation6 resulting from the logarithmic approximation. This makes LNS a good choice when DNNs are deployed on resource-constrained devices for accuracy-resilient applications.

Footnote 6: This is the common case. However, several works that adopted LNS showed no accuracy degradation. See Table I and Table II.

The RNS can be an attractive number representation choice for DNN accelerators. RNS exhibits inherent parallelism at the residue-processing level. It utilizes parallel computations along separate residue channels, where the operations in each channel are performed modulo a specific modulus, with no need for information to be shared between residue channels. The main challenge in designing an efficient RNS-based accelerator is to minimize or, possibly, eliminate the overhead introduced when implementing the non-linear DNN operations. Another key factor is the optimization of the moduli selection and the corresponding arithmetic circuits to meet the accuracy requirements.

The BFP strikes a balance between the FLP and FXP formats. Consequently, different trade-offs can be obtained by specifying different BFP design choices, namely the block size, the shared-exponent selection, and the bit-width. Most of the surveyed DNN architectures that depend on BFP achieved negligible accuracy degradation compared to FLP, even with fewer than 8 bits, with varying levels of speed, power, and area efficiency.

DFXP can be considered a subset of BFP with a smaller dynamic range and less hardware complication at the same time. While BFP is closer to FLP, DFXP is more like FXP (as their names indicate). This results in different trade-offs between DNN metrics (accuracy, power consumption, speed-up, etc.).

Finally, the Posit representation can be considered a variant of FLP. Offering better accuracy and a wider dynamic range, Posit can represent DNN data more efficiently with the same number of bits as FLP. This allows for a greater reduction in the number of bits compared to FLP implementations with similar accuracy. However, Posit has complex hardware, due to the hardware needed to convert Posit numbers to another number system (basically FLP) in order to perform the arithmetic operations in the other domain before returning to the Posit domain. The efforts made to enhance its hardware efficiency have been discussed in this survey.

For all the aforementioned alternative number systems, their impact on the performance and hardware design of DNNs has been reported in detail. In addition, this article highlighted the challenges associated with the implementation of each number system and the different solutions proposed to address these challenges.

## Acknowledgments

This work was supported by the Khalifa University of Science and Technology under Award CIRA-2020-053.
2308.13978
A Graph Neural Network-Based QUBO-Formulated Hamiltonian-Inspired Loss Function for Combinatorial Optimization using Reinforcement Learning
Quadratic Unconstrained Binary Optimization (QUBO) is a generic technique to model various NP-hard combinatorial optimization problems in the form of binary variables. The Hamiltonian function is often used to formulate QUBO problems where it is used as the objective function in the context of optimization. Recently, PI-GNN, a generic scalable framework, has been proposed to address the Combinatorial Optimization (CO) problems over graphs based on a simple Graph Neural Network (GNN) architecture. Their novel contribution was a generic QUBO-formulated Hamiltonian-inspired loss function that was optimized using GNN. In this study, we address a crucial issue related to the aforementioned setup especially observed in denser graphs. The reinforcement learning-based paradigm has also been widely used to address numerous CO problems. Here we also formulate and empirically evaluate the compatibility of the QUBO-formulated Hamiltonian as the generic reward function in the Reinforcement Learning paradigm to directly integrate the actual node projection status during training as the form of rewards. In our experiments, we observed up to 44% improvement in the RL-based setup compared to the PI-GNN algorithm. Our implementation can be found in https://github.com/rizveeredwan/learning-graph-structure.
Redwan Ahmed Rizvee, Md. Mosaddek Khan
2023-08-27T00:57:01Z
http://arxiv.org/abs/2308.13978v2
A Graph Neural Network-Based QUBO-Formulated Hamiltonian-Inspired Loss Function for Combinatorial Optimization using Reinforcement Learning ###### Abstract Quadratic Unconstrained Binary Optimization (QUBO) is a generic technique to model various NP-hard combinatorial optimization problems in the form of binary variables. The Hamiltonian function is often used to formulate QUBO problems, where it serves as the objective function in the context of optimization. Recently, PI-GNN, a generic scalable framework, has been proposed to address Combinatorial Optimization (CO) problems over graphs based on a simple Graph Neural Network (GNN) architecture. Its novel contribution was a generic QUBO-formulated Hamiltonian-inspired loss function that was optimized using a GNN. In this study, we address a crucial issue related to the aforementioned setup, especially observed in denser graphs. The reinforcement learning-based paradigm has also been widely used to address numerous CO problems. Here we also formulate and empirically evaluate the compatibility of the QUBO-formulated Hamiltonian as the generic reward function in the Reinforcement Learning paradigm, to directly integrate the actual node projection status during training in the form of rewards. In our experiments, we observed up to \(44\%\) improvement in the RL-based setup compared to the PI-GNN algorithm. Our implementation can be found in1.

Footnote 1: https://github.com/rizveeredwan/learning-graph-structure/

Keywords: Hamiltonian Function, Deep Reinforcement Learning, Graph Neural Network, Monte Carlo Tree Search

## 1 Introduction and Motivation

Combinatorial Optimization (CO) is a branch of optimization that deals with finding the best solution from a finite pool of possibilities, where the chosen solution satisfies the given set of constraints and optimizes a problem-specific objective. However, due to the number of problem variables and the correspondingly large set of possible solutions, finding an exact solution is often infeasible, which leads to seeking an approximate one. Schuetz et al. (2022) proposed a Graph Neural Network (GNN) based solution (PI-GNN) to address Combinatorial Optimization (CO) problems over graphs. First, they formulate the problem as a Quadratic Unconstrained Binary Optimization (QUBO). Based on that, they apply a generic loss function over stacked GNN layers to produce a node probability distribution, which leads to labeling all the nodes with \(\{0,1\}\). This labeling as either \(0\) or \(1\) denotes the set of nodes (or edges) that will belong to the solution, optimizing the problem's objective or reducing the number of violated constraints. The main conceptual appeal of PI-GNN is its generality and scalability. Recently, Boettcher (2023) demonstrated in a critical review paper that PI-GNN performs worse than traditional greedy algorithms when solving the Max-Cut problem. Additionally, Angelini and Ricci-Tersenghi (2023) raise a similar concern, arguing that the simple GNN-based solution lags behind classical greedy algorithms, with their discussion centered on the Maximum Independent Set problem. Both works mainly argue that, for specific curated problems (e.g., Max-Cut, Maximum Independent Set), the relevant greedy algorithms can work better than PI-GNN.
The authors of PI-GNN have responded to these comments (Boettcher, 2023; Schuetz et al., 2023), arguing that ignoring the generality and scalability of their proposed framework understates the merit of their work, and they provided empirical results to support the argument. In this study, we also support the generality and scalability of PI-GNN, especially their loss-function formulation based on the QUBO-formulated Hamiltonian. However, we also highlight and investigate the following concerns:

1. Absence of an actual projection strategy in the loss function: The actual projection or node-labeling strategy is not present in the generic loss function stated in Schuetz et al. (2022). Apart from the interpretability of the node labeling, this raises an important concern: using gradient-descent optimization, the architecture might converge to a local minimum, but when the actual node labels are projected, the performance may deteriorate significantly because no projection strategy appears in the loss function. The projected node labels represent the actual constraint-satisfaction status of the graph in the current iteration, which, in our observation, should guide the loss function.

2. Implementation issue in denser graphs: While experimenting with PI-GNN on graphs of different densities, we observed a crucial issue2, mostly in denser graphs. In the implementation, to early-stop the training, they track consecutive loss variation with a patience value \(p\) (\(p=100\)) and tolerance value \(\tau\) (\(\tau=10^{-4}\)). When there is no consecutive loss reduction for \(p\) epochs, or the consecutive loss variation is less than \(\tau\) for \(p\) epochs, the training is stopped. In our experiments, we observed considerable performance degradation in denser graphs due to this early-stopping strategy.

Footnote 2: https://github.com/amazon-science/co-with-gnns-example

Based on the aforementioned observations, we summarize the contributions of this article as follows:

1. Fuzzy early-stopping strategy: We suggest applying a relaxed, or fuzzy, early-stopping strategy: omit \(\tau\) and track improvement over the best objective function value \(obj^{*}\) observed up to the current phase of the iteration; if no improvement over \(obj^{*}\) is observed for \(p\) epochs, training stops. In denser graphs, the early-stopping strategy used in the official implementation often yielded a near-uniform node probability distribution, leading to a significant number of constraint violations. By applying the suggested relaxed strategy, PI-GNN produced a quality node probability distribution, leading to a significantly reduced number of constraint violations.

2. Compatibility of the QUBO-formulated Hamiltonian as a generic reward function in a Reinforcement Learning (RL) based setup: To embed the actual projection status in the training objective, we experiment with the QUBO-formulated Hamiltonian as a generic reward function in RL-based setups. Inspired by the formulations stated in Drori et al. (2020) and Khalil et al. (2017), we establish a modified generic framework, \(GRL\), that works with the QUBO-formulated Hamiltonian as the generic reward function.
Additionally, we formulate a Monte Carlo Tree Search with a GNN-based solution, where we apply a guided search through manual perturbation of node labels during training. Empirically, we have gained up to \(44\%\) improvement in reducing the number of violated constraints. Throughout this study we address the Max-Cut problem, but all the presented proposals are generic and can be extended to the wide group of graph-based canonical optimization problems stated in Schuetz et al. (2022). In simple terms, the classical Max-Cut problem aims to divide the nodes of an undirected, unweighted graph into two sets such that the number of "cut" edges, i.e., edges with one endpoint in each set, is maximized. The rest of the study is organized as follows. In Section 2, we present the proposals and formulations experimented with in this work. In Section 3, we empirically evaluate all the architectures against PI-GNN based on various metrics and discuss the results. Finally, we conclude the study with an overall review and directions for future extensions in Section 4.

## 2 Our Proposals

In this section, we present all the proposals and formulations of this article in a concise manner.

### PI-GNN with Fuzzy Strategy

We first formally present two early-stopping strategies that can be used during the training of PI-GNN (Schuetz et al., 2022), and then discuss the main differences between them and the underlying reasoning. For the sake of discussion, we name the strategies _Strict Stopping_ and _Fuzzy Stopping_.

**Definition 2.1** (Strict Stopping).: During training, for the objective function \(F_{obj}\), if no successive improvement is observed for \(p\) consecutive epochs, or the successive variation is less than \(\tau\) for \(p\) consecutive epochs, the training can be stopped.

**Definition 2.2** (Fuzzy Stopping).: Let \(obj^{*}\) denote the current best value of the objective function \(F_{obj}\) observed during any phase of training. If no improvement in \(F_{obj}\) over \(obj^{*}\) occurs for \(p\) successive epochs, the training can be stopped.

Definition 2.2 is much fuzzier than Definition 2.1 because, during gradient-descent training, it is quite common for the objective function to vary very slowly before moving into larger reductions. Definition 2.2 addresses this by removing the dependency on \(\tau\). During the training of PI-GNN, the objective or loss typically starts from a high positive value and, with gradual training, moves to larger negative values. Especially in denser graphs, we often observed that during the transition from positive to negative loss there is a period when the loss varies or improves very insignificantly per epoch (e.g., by less than \(10^{-7}\) or \(10^{-8}\)). In such cases, the \(\tau\)-based early stopping can degrade the performance quite significantly.

### PI-GNN Architecture and Loss function

PI-GNN (Schuetz et al., 2022) consists of \(k\) layers of stacked GNN, with a final sigmoid layer that generates the node probability distribution. PI-GNN takes the QUBO-based Hamiltonian formulation (\(Q\)) of a problem, e.g.,
\(F_{obj}(X)=X^{T}QX=\sum_{i\leq j}x_{i}Q_{ij}x_{j}\), where \(x_{i}\) denotes the variable for the \(i^{th}\) node of the problem and \(Q\) denotes the problem-encoded matrix. The task of PI-GNN is to set the values of the \(x_{i}\) variables to either \(0\) or \(1\) through training so as to maximize \(F_{obj}(X)\). PI-GNN develops a loss function based on the QUBO-formulated Hamiltonian and minimizes it, as stated in equation 1. Here \(P_{i}(\theta)\) denotes the PI-GNN-generated node probability for the \(i^{th}\) node. The modification over the base article that we apply is the usage of Fuzzy Stopping at termination.

\[\text{maximize}\sum_{i\leq j}x_{i}Q_{ij}x_{j}\,\rightarrow\,\text{minimize}(-\sum_{i\leq j}P_{i}(\theta)Q_{ij}P_{j}(\theta)) \tag{1}\]

### Generic Reinforcement Learning framework, \(GRL\)

Drori et al. (2020) propose a generic RL-based framework to address a wide range of combinatorial optimization problems. They use a Graph Attention Network (GAT) based encoder architecture to generate the node feature vectors, upon which they apply an attention-based decoding mechanism to greedily select nodes and apply the node labelings. We formulate a modified version of Drori et al. (2020) in this study. The main differences are as follows:

1. Generic QUBO-formulated Hamiltonian reward function: A subset of terms from \(X^{T}QX=\sum_{i\leq j}x_{i}Q_{ij}x_{j}\) is considered the observed reward \(r^{t}\) at time \(t\) during training for a particular epoch \(e\). When a node \(v_{i}\) is greedily selected and labeled, we check which terms \(x_{i}Q_{ij}x_{j}\) with \(i\leq j\) and \(x_{j}Q_{ji}x_{i}\) with \(j<i\) become computable, and sum them. This sum is taken as the reward, \(r^{t}=\sum_{i\leq j}x_{i}Q_{ij}x_{j}+\sum_{j<i}x_{j}Q_{ji}x_{i}\).

2. Attention-based decoding strategy: In equation 2, we provide the mathematical formulation for the attention weight \(\gamma_{i}\) of node \(v_{i}\). Here, \(\phi_{1}\in\mathbb{R}^{d_{h}\times d}\), \(\phi_{2}\in\mathbb{R}^{d_{h}\times d}\), and \(\phi_{3}\in\mathbb{R}^{d_{h}\times n}\) are weights or architecture parameters. \(\mu_{i}\) denotes the node feature vector (a row vector) for node \(v_{i}\) reported by the GAT layer. \(C\), \(d\), and \(d_{h}\) are hyperparameters, and \(n\) denotes the number of nodes in the input graph. Applying a sigmoid non-linear activation over \(\gamma_{i}\) yields the node probability distribution \(P_{i}(\theta)\), over which a probability threshold \(\beta\) (e.g., \(\beta=0.5\)) fixes the node labels, e.g., \(P_{i}(\theta)\geq\beta\) leads to \(x_{i}=1\). Here \(\theta\) denotes the complete set of trainable architecture parameters. To denote the dimensions of the weights (e.g., \(\phi\)), we maintain output dimension times input dimension throughout the study. \(T\) denotes the transpose of a vector (row to column or column to row).

\[\gamma_{i}=C\text{ tanh}(\frac{\mu_{i}\phi_{1}^{T}.(X_{v}\phi_{3}^{T}+\sum_{j\in N(i)}\mu_{j}\phi_{2}^{T})}{\sqrt{d_{h}}})\] (2)

To update a node \(v_{i}\)'s attention weight \(\gamma_{i}\), we consider three aspects: node \(v_{i}\)'s own feature vector (\(\mu_{i}\)), its adjacent neighbors' (\(N(i)\)) feature vectors (\(\mu_{j}\) with \(j\in N(i)\)), and the current node-labeling status \(X_{v}\). If a node \(v_{j}\) has already been selected and labeled, then the \(j^{th}\) entry of \(X_{v}\) is \(1\); otherwise it remains \(0\).
So \(X_{v}\) is a binary vector. This formulation is more contextual than the decoding formulation stated in Drori et al. (2020). At each step, the node with the highest attention value \(\gamma_{i}\) among the pool of unselected nodes is greedily selected and labeled. When a node \(v_{i}\) is selected, its neighbors' (\(j\in N(i)\)) attention weights \(\gamma_{j}\) are updated. We now point out the common strategies that we adopt from Drori et al. (2020):

1. GAT as the encoder architecture: \(K\) stacked layers of GAT are used as the encoder to generate the node feature vectors \(\mu\). The last layer applies a sigmoid non-linear activation to generate the node probabilities; this probability is used to select the first node, initiating the attention-based decoding.

2. Loss objective and training: The loss objective that we use is stated in equation 3. Here \(P(v^{t})\) denotes the probability of the greedily selected node \(v\) at the \(t^{th}\) iteration, and \(r^{t}\) has already been defined above. \(b\) denotes the reward observed from a baseline architecture at the \(t^{th}\) iteration; in our setup, we do not use any baseline architecture. After accumulating the complete set of rewards (termination of an epoch), we apply gradient descent over the model parameters and backpropagate.

\[L(\theta)=\sum_{t=1}^{n}(r^{t}-b)\times P(v^{t})=\sum_{t=1}^{n}r^{t}\times P(v^{t})\quad[\text{when }b=0]\] (3)

### Monte Carlo Tree Search with GNN through manual perturbation, MCTS-GNN

In this section, we concisely present a formulation that integrates Monte Carlo Tree Search with a GNN in an RL-based setup. The main idea is that each node of the search tree holds a partial solution (a subset of node labels), based on which a single GNN is trained to approximate the remaining nodes' labels. The goal is to conduct a guided search, via this manual perturbation of node labels, that maximizes the reward. We now briefly discuss the strategies in terms of RL terminology (state, action, reward) and Monte Carlo tree search terminology (selection, rollout, exploration, and backpropagation).

1. State \(S\): Each node, or state \(S\), of the MCTS tree provides a partial solution, i.e., a subset of possible node labelings for the concerned CO problem. As in the previous section, we maintain a binary vector \(X_{v}\) whose \(i^{th}\) entry is set to \(1\) if the \(i^{th}\) node has already been labeled.

2. Action \(a\) and transition function \(\pi(S,a)\): An action \(a\) means choosing a label (either \(0\) or \(1\)) for an input graph node variable \(x\). From each state \(S\), multiple actions can be created by fixing the labels of the nodes still unselected from the point of view of \(S\). Each action \(a\) from state \(S\) also bears a transition probability \(\pi(S,a)\) denoting the likelihood of taking action \(a\) from \(S\); \(\pi(S,a)\) is approximated using the GNN.

3. Reward \(r\): To calculate the reward of a state \(S\), the GNN is used to generate the node probability distribution \(P(\theta)\). Based on \(S\), a subset of nodes has already been fixed; for the remaining unselected nodes, \(P(\theta)\) is thresholded at \(\beta\), e.g., for a node \(v_{i}\), if \(P_{i}(\theta)\geq\beta\) then \(x_{i}=1\), else \(x_{i}=0\).
After approximating the labels of all the nodes, the QUBO-formulated Hamiltonian is used to calculate the reward, \(r=\sum_{i\leq j}x_{i}Q_{ij}x_{j}\). As the GNN plays a central part in this design, we now state the mathematical formulation of its forward pass along with the loss function used to update the parameters \(\theta\):

\[X_{em}=E(G) \tag{4}\]
\[\mu^{\prime}=GNN(G,X_{em}) \tag{5}\]
\[\mu=f_{1}(X_{v}\theta_{1}^{T}+\mu^{\prime}\theta_{2}^{T}) \tag{6}\]
\[P(\theta)=f_{2}(\mu\theta_{3}^{T}) \tag{7}\]

The complete formulation is presented in equations 4 to 7. First, a set of node embedding vectors \(X_{em}\) is generated (equation 4). Then \(X_{em}\) and the input graph \(G\) are passed to the GNN to generate a set of node feature vectors \(\mu^{\prime}\) (equation 5). After that, the labeling context (\(X_{v}\)) is combined with \(\mu^{\prime}\) to compute the complete node feature vectors \(\mu\) (equation 6). The final equation, generating the node probability distribution \(P(\theta)\), is given in equation 7. The set of architecture parameters is \(\theta=\{\theta_{GNN}\in\mathbb{R}^{d_{2}\times d_{1}},\theta_{1}\in\mathbb{R}^{d_{3}\times 1},\theta_{2}\in\mathbb{R}^{d_{3}\times d_{2}},\theta_{3}\in\mathbb{R}^{1\times d_{3}}\}\), where \(\theta_{GNN}\) denotes all the weight parameters associated with the GNN layers. \(f_{1}\) is a ReLU activation function and \(f_{2}\) is a sigmoid activation function that generates the probabilities. As before, \((.)^{T}\) denotes the transpose from a row vector to a column vector or vice versa.

\(P(\theta)\) is also used to approximate \(\pi(S,a)\). For a particular variable \(x_{v}\) or input graph node \(v\), \(P_{v}(\theta)\) indicates the likelihood of labeling \(v\) as \(1\) (\(x_{v}=1\), \(\pi(S,v=1)=P_{v}(\theta)\)); similarly, to label \(x_{v}\) as \(0\), we set \(\pi(S,v=0)=1-P_{v}(\theta)\). From each state \(S\), we create child nodes for the unselected variables for both labels. To train the GNN architecture, we use the loss function stated in equation 8. This function is quite similar to equation 1, except that here we add manual perturbation by fixing node labels to guide the search. \(X_{v,i}\) denotes the value of the \(i^{th}\) entry of \(X_{v}\): this value is \(1\) if node \(i\) has already been labeled, and \(0\) otherwise. Our underlying intuition behind this formulation is presented as Proposition 2.1.

\[L(\theta)=\sum_{i\leq j,X_{v,i}=1,X_{v,j}=1}x_{i}Q_{ij}x_{j}+\sum_{i\leq j,X_{v,i}=1,X_{v,j}=0}x_{i}Q_{ij}p(\theta_{j})+\sum_{i\leq j,X_{v,i}=0,X_{v,j}=0}p(\theta_{i})Q_{ij}p(\theta_{j}) \tag{8}\]

**Proposition 2.1** (Manual perturbation to avoid local minima).: _Through manual perturbation, we force the training to avoid various local minima by coping with noise (different sets of node labels while updating the parameters), resulting in a more robust architecture and improved performance in terms of reducing constraint violations._

We now discuss the terminology associated with MCTS:

1. Selection and exploration: In equation 9, we present the greedy metric, the Upper Confidence Bound (UCB), which measures the average reward obtainable from a child state \(C_{i}\) with respect to its parent state \(S\), combined with its transition likelihood (\(\pi\)) of being selected.
Here \(C_{i}.w\) and \(C_{i}.v\) denote the total reward accumulated at state \(C_{i}\) and the number of times \(C_{i}\) has been visited, respectively. \(\alpha\) is a hyperparameter, \(\pi(S,a)\) denotes the transition probability to state \(C_{i}\) from \(S\) reported by the GNN, \(S.v\) denotes the total number of times state \(S\) was visited, and \(\log\) denotes the logarithm. The node with the highest UCB value is selected for exploration in its subtree. When a leaf node is reached in this manner, we start the rollout phase.

\[UCB(C_{i})=\frac{C_{i}.w}{C_{i}.v}+\alpha\cdot\pi(S,a)\cdot\sqrt{\frac{\log(S.v)}{C_{i}.v}}\] (9)

2. Rollout: In the rollout phase, we train the GNN for multiple epochs to approximate the labels of the unlabeled nodes and calculate the reward; how the GNN generates the node probability distribution has been discussed above.

3. Backpropagation: After the rollout phase, we enter the backpropagation phase of MCTS, updating the state variables \(v\) and \(w\) along the path from the root to the current leaf state of the search tree: for every node on the path, we increment the visit count by \(1\) and add the GNN-approximated reward to \(w\).

## 3 Evaluation

In this section, we empirically evaluate GRL and MCTS-GNN against PI-GNN in addressing the Max-Cut problem on graphs of different densities. The graphs were randomly generated, and all are undirected; a graph of \(n\) nodes can have at most \(\frac{n\times(n-1)}{2}\) edges, with no multi-edges between any two nodes. Our prepared graph dataset is available online3. We conduct the experiments based on three metrics: the number of satisfied constraints, scalability, and required time. All experiments were conducted on a 64-bit machine with an AMD Ryzen 9 5950X 16-core (32-thread) processor, 128 GB RAM, and a 24 GB NVIDIA GeForce RTX 3090 GPU. All implementations were done in Python using the deep-learning libraries PyTorch (Paszke et al., 2019), PyTorch Geometric (Fey and Lenssen, 2019), and LabML (Varuna Jayasiri, 2020).

### Architecture Description, Hyperparameter Setup

In this section, we describe the architectures along with the hyperparameter values used during training.

1. Architecture description: We mostly use the variables defined in the respective sections. Our simple GNN architecture follows the definition stated in Schuetz et al. (2022): two Graph Convolutional Network (GCN) layers followed by a sigmoid layer to generate the node probabilities. The first GCN layer's node feature vectors are propagated through a non-linear ELU activation and a dropout layer before being passed to the second GCN layer. Let the node feature vector sizes of the \(1^{st}\) and \(2^{nd}\) GCN layers be \(d_{1}\) and \(d_{2}\). As in the base article, if the number of nodes \(n\geq 10^{5}\), then \(d_{1}=\sqrt[3]{n}\); otherwise \(d_{1}=\sqrt{n}\). Also, \(d_{2}=\frac{d_{1}}{2}\). To implement GRL we took inspiration from the study presented in Drori et al. (2020): three layers of GAT serve as the encoding architecture, with a sigmoid layer to generate the initial node probabilities. The attention-based decoding formulation has been presented in Section 2.3.
To set the dimensions of the node feature vectors, we follow the same strategy as above, based on the number of nodes \(n\) of the input graph: if \(n\geq 10^{5}\), then \(d_{1}=\sqrt[3]{n}\); otherwise \(d_{1}=\sqrt{n}\), with \(d_{2}=\lceil\frac{d_{1}}{2}\rceil\). We have used a single-head attention mechanism. In MCTS-GNN, we train a single GNN across the different rollout phases by applying different manual labeling perturbations. The setup is similar to the discussion above: if \(n\geq 10^{5}\), then \(d_{1}=\sqrt[3]{n}\); otherwise \(d_{1}=\sqrt{n}\), with \(d_{2}=\lceil\frac{d_{1}}{2}\rceil\) and \(d_{3}=1\).

2. Training description: We use the Adam optimizer and fuzzy early stopping for all architectures. As RL-based setups inherently exhibit abrupt behavior (frequent ups and downs in reward), we keep a comparatively higher patience value for GRL and MCTS-GNN than for PI-GNN. With these setups, we eventually observed almost linear variation of the objective values at some phase of the training epochs for all the experimented architectures on the different graph inputs, which supports the validity of the chosen early-stopping hyperparameters. Another reason for setting a higher patience in the RL-based setups is that there the bar is set on the actual integer reward value rather than on the fractional loss variation as in PI-GNN, and integer rewards inherently fluctuate more than fractional loss values. For PI-GNN, we simply choose a learning rate \(lr=10^{-4}\) with a patience value of \(p=100\), meaning that if there is no improvement in the loss objective over the current best value for \(100\) consecutive epochs, we stop the training; as stated previously, we apply fuzzy early stopping here. For GRL, we choose a learning rate of \(0.001\) for both encoder and decoder and a patience value of \(700\). We also experimented with a smaller learning rate (\(0.0001\)) for GRL but could not improve the performance in terms of the number of satisfied constraints. For MCTS-GNN, we also set the learning rate to \(0.001\) for the GNN architecture, with a patience value of \(100\) and an additional early-stopping criterion on the reward objective: a second patience value of \(700\), meaning that if there is no improvement over the best reward observed so far for \(700\) iterations, the MCTS-GNN algorithm terminates.

### Comparison in terms of number of satisfied constraints

In this section, we present the number of satisfied constraints observed for each architecture, PI-GNN, GRL, and MCTS-GNN, on graphs of different densities. The complete result is shown in Table 1. In this study, this metric corresponds to the reward in the RL-based formulations. For GRL and MCTS-GNN, we report the best reward observed throughout training. For PI-GNN, we record the lowest loss and the corresponding probability distribution, and approximate the node labels with the threshold \(\beta=0.5\): a node whose probability exceeds \(\beta\) is labeled \(1\). These approximated node labels are used to calculate the QUBO-formulated Hamiltonian. In the last two columns of Table 1, we present the percentage improvement of GRL and MCTS-GNN over PI-GNN.
A positive value means the result has improved; a negative value means the opposite. The results in Table 1 show that performance generally improves over the base PI-GNN; in simple terms, applying the RL-based formulations improves the quality of the resultant output. Looking more closely, for a graph with a given number of nodes \(n\), the improvement generally grows as the graph becomes denser (i.e., as the number of edges increases).

### Training Stability

In this section, we highlight the convergence, or training stability, of the experimented architectures under our chosen hyperparameters. We center the discussion on three graphs of \(50\) nodes with \(89\), \(139\), and \(499\) edges, respectively, to understand the behavior transition from sparser to denser graphs. In Figures 1-3 we present the reward variations of PI-GNN, GRL, and MCTS-GNN, respectively, for the graph of 50 nodes and 89 edges. Similarly, Figures 4-6 and Figures 7-9 present the reward variations of all the architectures for the graph of 50 nodes and 139 edges and the graph of 50 nodes and 499 edges, respectively. To specifically examine the loss convergence of PI-GNN, Figures 10-12 show its loss curves for the three 50-node graphs with 89, 139, and 499 edges, respectively. In the captions, (50, 89) means a graph having 50 nodes and 89 edges. The main observation is that convergence (almost linear variation) is clearly visible where training stopped for all the architectures, which supports the stability of the chosen hyperparameters, especially the patience values. We do not include the convergence charts for the other graphs because they exhibit a similar pattern. All of our experiments are reproducible and can be regenerated by importing our official code repository.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \# Nodes & \# Edges & PI-GNN & GRL & MCTS-GNN & GRL & MCTS-GNN \\ & & & & & vs PI-GNN(\%) & vs PI-GNN (\%) \\ \hline 50 & 89 & 72 & 75 & 76* & 4.2 & 5.56 \\ 50 & 139 & 95 & 105 & 107* & 10.53 & 12.63 \\ 50 & 499 & 276 & 301 & 314* & 10.14 & 13.78 \\ \hline 100 & 199 & 147 & 155 & 167* & 5.44 & 13.61 \\ 100 & 799 & 373 & 534 & 537* & 43.2 & 44 \\ \hline 300 & 399 & 342 & 360 & 365* & 5.26 & 6.73 \\ 300 & 899 & 613 & 690 & 699* & 12.56 & 14.03 \\ 300 & 1299 & 747 & 932 & 947* & 24.77 & 26.77 \\ \hline 500 & 799 & 622 & 672 & 694* & 8.04 & 11.58 \\ 500 & 1499 & 1007 & 1143 & 1158* & 13.51 & 15.0 \\ 500 & 5499 & 2689 & 3414 & 3535* & 26.96 & 31.46 \\ \hline 700 & 1199 & 938 & 979 & 1023* & 4.37 & 9.06 \\ 700 & 1699 & 1288 & 1308 & 1360* & 1.55 & 5.59 \\ 700 & 4699 & 2422 & 2992 & 3202* & 23.53 & 32.2 \\ \hline 1000 & 1299 & 1104 & 1112 & 1141* & 0.73 & 3.35 \\ 1000 & 3299 & 2204 & 2449 & 2525* & 6.9 & 14.56 \\ 1000 & 5299 & 3098 & 3716 & 3750* & 19.95 & 21.05 \\ \hline 3000 & 3499 & 2956 & 3218* & 2996 & 8.86 & 1.35 \\ 3000 & 4499 & 3627 & 3907* & 3885 & 7.72 & 7.11 \\ 3000 & 6999 & 4841 & 5550 & 5622* & 14.65 & 16.13 \\ \hline \end{tabular} \end{table} Table 1: Number of Satisfied Constraints (Reward) for the graphs of different densities for the Max-Cut problem. (*) denotes the best value observed.
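For concreteness, the reward metric of Table 1 can be reproduced with a small sketch. We use one common QUBO encoding of Max-Cut, under which \(\sum_{i\leq j}x_{i}Q_{ij}x_{j}\) equals the cut size for binary labels; the authors' exact \(Q\) construction lives in their repository and may differ in sign convention.

```python
import numpy as np

def max_cut_qubo(n: int, edges) -> np.ndarray:
    """Upper-triangular Q with Q_ii = deg(i) and Q_ij = -2 per edge,
    so that x^T Q x counts the cut edges for x in {0,1}^n."""
    Q = np.zeros((n, n))
    for a, b in edges:
        i, j = min(a, b), max(a, b)
        Q[i, i] += 1
        Q[j, j] += 1
        Q[i, j] -= 2
    return Q

def reward(Q: np.ndarray, x) -> float:
    """The QUBO-formulated Hamiltonian used as the reward."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
Q = max_cut_qubo(4, edges)
print(reward(Q, [1, 0, 1, 0]))  # edges (0,1), (1,2), (2,3) are cut -> 3.0
```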
Figure 1: Reward variation curve for PI-GNN for the graph (50, 89). Figure 2: Reward variation curve for GRL for the graph (50, 89). Figure 3: Reward variation curve for MCTS-GNN for the graph (50, 89).

Figure 4: Reward variation curve for PI-GNN for the graph (50, 139). Figure 5: Reward variation curve for GRL for the graph (50, 139). Figure 6: Reward variation curve for MCTS-GNN for the graph (50, 139).

Figure 7: Reward variation curve for PI-GNN for the graph (50, 499). Figure 8: Reward variation curve for GRL for the graph (50, 499). Figure 9: Reward variation curve for MCTS-GNN for the graph (50, 499).

Figure 10: Loss variation curve for PI-GNN for the graph (50, 89). Figure 11: Loss variation curve for PI-GNN for the graph (50, 139). Figure 12: Loss variation curve for PI-GNN for the graph (50, 499).

### Complexity Analysis

In this section, we concisely present the time complexity of each of the experimented architectures, or more specifically, discuss the computation-intensive segments of each. PI-GNN is the most scalable and runtime-efficient compared to GRL and MCTS-GNN, owing to the efficiency of the GNN architecture. In simpler terms, to process a graph of \(n\) nodes and generate \(d\)-dimensional node feature vectors through \(L\) layers of a GNN, the big-O complexity is \(\mathcal{O}(Lnd^{2})\) (assuming the graph is sparse, so the number of edges is \(E=\mathcal{O}(n)\)). GRL also uses a GNN variant (GAT) to encode the input graph. After that, it applies a greedy selection at each iteration to pick the node with the highest attention value reported by the decoder. The additional complexity of this selection portion is \(\mathcal{O}(n\log n)\): maintaining a max-heap structure, each selection takes \(\mathcal{O}(\log n)\). MCTS-GNN also uses a single GNN architecture in the rollout phases to build the search tree, so a similar GNN complexity is automatically added to the overall complexity. It also makes multiple attempts to train the same GNN again on varying inputs through manual perturbation of labelings, which incurs additional cost as well. However, our experiments suggest that the GNN is trained exhaustively only a very few times; otherwise, it generally runs for a very small number of epochs (fewer than \(1000\) in our experiments) before reaching the early stopping criterion. In summary, PI-GNN is the most scalable; the complexity of GRL increases as the graphs become larger, and the complexity of MCTS-GNN increases with the number of iterations needed to expand the search tree.

Apart from these theoretical aspects, other factors, e.g., the step size or learning rate, are also quite important and play a crucial role in the convergence of the objective loss functions. If convergence takes a long time, the overall training time increases as well. Based on our observations, MCTS-GNN generally takes the most time due to the expansion of the search tree. GRL takes far less time than MCTS-GNN and provides competitive performance against PI-GNN: where PI-GNN takes a larger number of epochs to be trained, GRL takes far fewer epochs, though each of its epochs involves a significant amount of processing to select the nodes in a greedy manner. PI-GNN sometimes performs worse than GRL, especially on denser graphs where it may need considerable time to converge. In Table 2, we present some results; each value is in seconds.
Note, however, that based on the loss convergence status, the runtime can vary significantly across trials.

## 4 Discussion and Conclusion

In this study, we extend the work of Schuetz et al. (2022). Our main contributions are identifying a crucial issue during the training of PI-GNN when early stopping is applied, and experimenting with RL-based formulations to understand the performance in terms of the number of satisfied constraints. In GRL, we experiment with a QUBO-formulated Hamiltonian as a generic reward function, and in MCTS-GNN we apply Monte Carlo tree search with a GNN, guided by manual perturbation of node labeling. Based on our experiments, we found that RL-based setups generally give comparatively better performance in terms of satisfied constraints than the simple PI-GNN setup, at the cost of additional runtime and processing. So, our summarized observation regarding Boettcher (2023) and Angelini and Ricci-Tersenghi (2023) is that PI-GNN (Schuetz et al., 2022) is quite scalable and provides moderate performance in terms of the number of satisfied constraints, which can be improved by enforcing RL-based formulations. In the next phase of our work, we plan to investigate more sophisticated RL-based formulations for addressing CO problems, considering scalability, training criteria, graph representation, etc.

\begin{table} \begin{tabular}{c|c|c|c|c} Node & Edge & PI-GNN & GRL & MCTS-GNN \\ \hline 50 & 89 & 698 & 211 & 1066 \\ 50 & 139 & 691 & 184 & 835 \\ 50 & 499 & 3074 & 439 & 1834 \\ \hline 100 & 199 & 317 & 347 & 5147 \\ 100 & 799 & 2668 & 602 & 8479 \\ \hline 300 & 399 & 398 & 427 & 1027 \\ 300 & 899 & 667 & 862 & 5789 \\ 300 & 1299 & 1837 & 1789 & 9035 \\ \hline 500 & 799 & 478 & 679 & 1507 \\ 500 & 1499 & 1478 & 1025 & 6478 \\ 500 & 5499 & 2478 & 2247 & 8798 \\ \hline 700 & 1199 & 932 & 725 & 2017 \\ 700 & 1699 & 1027 & 879 & 4789 \\ 700 & 4699 & 2104 & 2578 & 9786 \\ \hline 1000 & 1299 & 1265 & 1681 & 10510 \\ 1000 & 3299 & 2333 & 1414 & 15146 \\ 1000 & 5299 & 2017 & 2768 & 22147 \\ \hline 3000 & 3499 & 1879 & 2147 & 8978 \\ 3000 & 4499 & 2998 & 2378 & 13147 \\ 3000 & 6999 & 5014 & 4789 & 24789 \\ \end{tabular} \end{table} Table 2: Training time in seconds for PI-GNN, GRL, and MCTS-GNN for different graphs.
2303.12914
TRON: Transformer Neural Network Acceleration with Non-Coherent Silicon Photonics
Transformer neural networks are rapidly being integrated into state-of-the-art solutions for natural language processing (NLP) and computer vision. However, the complex structure of these models creates challenges for accelerating their execution on conventional electronic platforms. We propose the first silicon photonic hardware neural network accelerator called TRON for transformer-based models such as BERT, and Vision Transformers. Our analysis demonstrates that TRON exhibits at least 14x better throughput and 8x better energy efficiency, in comparison to state-of-the-art transformer accelerators.
Salma Afifi, Febin Sunny, Mahdi Nikdast, Sudeep Pasricha
2023-03-22T21:09:49Z
http://arxiv.org/abs/2303.12914v1
# TRON: Transformer Neural Network Acceleration with Non-Coherent Silicon Photonics

###### Abstract

Transformer neural networks are rapidly being integrated into state-of-the-art solutions for natural language processing (NLP) and computer vision. However, the complex structure of these models creates challenges for accelerating their execution on conventional electronic platforms. We propose the first silicon photonic hardware neural network accelerator, called TRON, for transformer-based models such as BERT and Vision Transformers. Our analysis demonstrates that TRON exhibits at least 14\(\times\) better throughput and 8\(\times\) better energy efficiency, in comparison to state-of-the-art transformer accelerators.

## 1 Introduction

Transformer neural networks have gained significant popularity in the last few years, surpassing the performance of traditional Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) [1]. As the network architecture in transformer models relies on attention mechanisms and positional encodings instead of recurrence, it enables much higher parallelization than RNNs for sequence modeling and transduction problems. Since the introduction of the first transformer in 2017 [2], considerable progress has been made, with the emergence of powerful transformer-based pre-trained natural language processing (NLP) models, such as BERT [3] and Albert [4], and computer vision models, such as the Vision Transformer [5]. Despite the remarkable success of the transformer model, its size, number of parameters, and operations still require significant computational resources, hindering its progress and usage in resource-constrained systems. This highlights the main issues with these models, which include long inference times, a large memory footprint, and a low computation-to-memory ratio. Existing work on inference acceleration of conventional artificial neural networks (ANNs) mainly focuses on compute-intensive operations and optimizations at the layer-level granularity, which makes extending it to transformers--with their unique layer architecture and memory-intensive requirements--challenging. Several transformer-centric accelerators have been proposed in recent years to overcome these challenges with transformer execution [6]-[9]. However, most of the work presented so far either focuses on accelerating a specific transformer architecture or is based on electronic components. Electronic accelerators are susceptible to the limits of the post-Moore's-law era, where diminishing performance improvements are being observed with technology scaling. Such limitations also present major performance and energy bottlenecks for electronic dataflows [10]. On the other hand, silicon photonics has proven its proficiency as a solution beyond high-throughput communication in the telecom and datacom domains, and it is now being considered for chip-scale communication. Moreover, CMOS-compatible silicon photonic components can be used for computations, such as matrix-vector multiplications and logic gate implementations. Accordingly, the integration of silicon photonics is now actively being considered for deep learning acceleration [11]. In this paper, we introduce _TRON_, the first silicon-photonic-based transformer accelerator that can accelerate inference of a broad family of transformer models.
Our novel contributions are:

* The design of a novel transformer accelerator using non-coherent silicon photonics, with the ability to accelerate any existing variant of transformer neural network models;
* Detailed crosstalk analyses to improve the signal-to-noise ratio (SNR) and tunability of photonic microresonator (MR) banks;
* A comprehensive comparison with GPU, TPU, CPU, and state-of-the-art transformer accelerators.

The rest of the paper is organized as follows. Section 2 presents a background on transformers, ANNs, and their acceleration using silicon photonics. Section 3 describes our _TRON_ architecture. Section 4 discusses the experimental setup and comparisons with other accelerators, followed by conclusions in Section 5.

## 2 Background

### Transformer neural network models

The attention mechanism has emerged as a prominent technique in sequence learning and NLP, where long-term memory is required. By utilizing the attention mechanism, transformers have outperformed RNNs (LSTMs, GRUs) across many NLP tasks. As shown in Fig. 1, the original transformer model [2] designed for sequence learning has two main blocks: an encoder and a decoder. The encoder is responsible for mapping the input sequence into an abstract continuous representation. The decoder then processes that representation and gradually produces a single output while also being fed the previous outputs. Before being sent to the encoder, each input sequence is mapped to a vector, and positional encoding is used to embed the position information of each vector in relation to the original input sequence. The processed input is then passed through to the encoder/decoder block. The encoder and decoder blocks consist of \(N\) stacked layers (Fig. 1). The main sub-blocks in the encoder and decoder blocks are the multi-head attention (MHA) and feed forward (FF) layers, along with residual connections for each, followed by layer normalization. Self-attention is applied in the MHA, where it links each element (e.g., word) to other elements (e.g., words) in a sequence. Each MHA has \(H\) self-attention heads, and each attention head generates the query (\(Q\)), key (\(K\)), and value (\(V\)) vectors to compute the scaled dot-product attention. The \(Q\), \(K\), and \(V\) vectors are generated by multiplying the MHA's input sequence \(X\) by the query, key, and value weight matrices: \(W_{Q}\), \(W_{K}\), and \(W_{V}\). The self-attention output is then computed through a scaled dot-product operation as follows: \[Head(X)=attention(Q,K,V)=softmax(QK^{T}/\sqrt{d_{K}})V, \tag{1}\] where \(X\) is the input matrix and \(d_{K}\) is the dimension of \(Q\) and \(K\). The output of the MHA is the concatenation of the self-attention heads' outputs, followed by a linear layer. The FF network is composed of two dense layers with a _RELU_ activation in between. More recent transformer-based pre-trained language models, such as BERT [3] and its variants [4], include the transformer encoder block only, as a cascaded set of \(N\) layers, followed by an FF layer, then _GELU_ and normalization layers. The recent Vision Transformer (ViT) model is also composed of \(N\) encoder layers, followed by a multi-layer perceptron [5], where the ViT's inputs are sequence vectors representing an image.

Figure 1: Transformer neural network architecture overview.
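As a reference for the operations that dominate transformer inference, below is a minimal NumPy sketch of the scaled dot-product attention in equation (1); the sequence length and widths are illustrative choices, not values fixed by the paper:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(X, W_Q, W_K, W_V):
    """One self-attention head: Q = X W_Q, K = X W_K, V = X W_V, then equation (1)."""
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    d_k = K.shape[-1]
    scores = (Q @ K.T) / np.sqrt(d_k)            # scaled dot products
    return softmax(scores) @ V                   # attention-weighted values

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 512))                   # 10 tokens, model width 512
W_Q, W_K, W_V = rng.normal(size=(3, 512, 64))    # head width d_k = 64
out = attention_head(X, W_Q, W_K, W_V)           # shape (10, 64)
```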
### Transformer acceleration

Transformer accelerators in prior work focus on accelerating either a specific subset of transformer models or specific transformer layers. For instance, [7] proposed an FPGA-based hardware accelerator for accelerating the MHA and FF layers. Their approach involves efficiently partitioning the weight matrices used in the MHA and FF layers to allow both layers to share hardware resources. In [9], another FPGA-based acceleration framework was proposed, with a pruning technique and a method for storing the sparse matrices. An in-memory-computing-based transformer accelerator called TransPIM was presented in [6], with a novel token-based dataflow for optimized data movement along with hardware modifications to high-bandwidth memory. The work in [8] proposed an automated framework called VAQF that guides the quantization and FPGA resource mapping for ViTs. Unlike prior efforts, our proposed TRON architecture can accelerate a broad family of transformer models for NLP and computer vision tasks.

### Silicon photonics for ANN acceleration

Due to the significant benefits offered by optical ANN accelerators in terms of performance and energy efficiency, they have garnered a lot of traction from academic and industry researchers [11]. Optical ANN accelerators are either coherent or non-coherent. Coherent architectures, which use a single wavelength, imprint parameters onto the optical signal's phase [12] to perform multiply-and-accumulate (MAC) operations. Non-coherent architectures leverage multiple wavelengths and imprint parameters onto the optical signal's amplitude; each wavelength can then be used to perform operations in parallel. Current research in optical ANN accelerators has focused mainly on CNNs, MLPs, and RNNs [13]. To the best of our knowledge, TRON is the first optical accelerator for transformer ANN models. _TRON_ is a non-coherent optical accelerator that uses MR optoelectronic devices (see Fig. 2) for carrying out key operations. Each MR can be designed and tuned to work at a specific wavelength, called the MR resonant wavelength (\(\lambda_{MR}\)), defined as: \[\lambda_{MR}=\frac{2\pi R}{m}n_{\mathit{eff}}, \tag{2}\] where \(R\) is the MR radius, \(m\) is the order of the resonance, and \(n_{\mathit{eff}}\) is the effective index of the device. By carefully altering \(n_{\mathit{eff}}\) with a tuning circuit, we can modulate electronic data onto an optical signal passing in the vicinity of an MR. The tuning circuit is based on either thermo-optic (TO) [14] or carrier-injection electro-optic (EO) tuning [15]. Both result in a change in \(n_{\mathit{eff}}\) and hence a resonant shift of \(\Delta\lambda_{MR}\) in the MR. In non-coherent networks, computations, and specifically multiplications, are done by tuning an MR's \(\Delta\lambda_{MR}\), resulting in a predictable change in the amplitude of the optical signal at that wavelength. To increase throughput and mimic neurons in ANNs, non-coherent architectures make use of wavelength-division multiplexing (WDM). This entails having multiple optical signals with different wavelengths in a single waveguide using an optical multiplexer [11]. The waveguide layout passes in the vicinity of a bank of MRs, each tuned to a certain wavelength in the waveguide, enabling several multiplications to be performed in parallel. Fig. 2 illustrates an example of multiplying an input vector [a\({}_{1}\), a\({}_{2}\), a\({}_{3}\)] by a weight vector [W\({}_{1}\), W\({}_{2}\), W\({}_{3}\)]. Two MR bank arrays are used: the first imprints the input activations onto the optical signals and the second performs the multiplication.
The dot-product output can thus be calculated by summing the three signals in the waveguide, which can be done by a photodetector (PD) device.

## 3 _TRON_ Hardware Accelerator

Our proposed _TRON_ architecture is a non-coherent photonic accelerator that can accelerate the inference of a broad family of transformer models. An overview of the architecture is shown in Fig. 3. The photonic accelerator core is composed of MHA and FF units. Such a composition allows reuse of resources for the encoder and decoder blocks. Interfacing with the main memory, buffering of the intermediate results, and mapping the matrices to the photonic architecture are handled by an integrated electronic control unit (ECU). The following subsections describe the _TRON_ architecture and the hardware optimizations we have considered to efficiently accelerate transformer ANN models.

### MR tuning circuit design

MR devices in non-coherent architectures require a tuning mechanism, based on EO or TO tuning, as mentioned earlier. In TRON, we employ a hybrid tuning circuit where both TO and EO tuning are used to induce \(\Delta\lambda_{MR}\). This enables us to combine the advantages of both while overcoming their disadvantages. EO tuning is faster (\(\approx\)ns range) and requires less power (\(\approx\)4 \(\mu\)W/nm), but it cannot be used for large tuning ranges [15]. Conversely, TO tuning accommodates a larger tuning range but at the expense of higher latency (\(\approx\)\(\mu\)s range) and power (\(\approx\)27 mW/_FSR_) [14]. Accordingly, in our design, EO tuning is adopted for fast induction of small \(\Delta\lambda_{MR}\) in MRs, while slower TO tuning is used only when a larger \(\Delta\lambda_{MR}\) is required. The effectiveness of this hybrid approach was previously demonstrated in [16]. To further reduce the power overhead of TO tuning, we adopt the thermal eigenmode decomposition (TED) method from [17]. TED entails tuning all MRs within a bank array together, which reduces power consumption. Moreover, the approach uses microheaters to perform thermal tuning, which reduces the thermal crosstalk noise from heat dissipated by adjoining TO circuits.

### MR bank design-space analysis

To ensure error-free MAC operations in the optical domain, it is necessary to manage various sources of noise, namely thermal and crosstalk noise, which can interfere with parameter imprinting and degrade the network performance and accuracy. Our TED-based tuning mechanism alleviates the thermal noise that can arise from TO tuning. But non-coherent architectures, like TRON, are inherently noise-prone because multiple wavelengths propagate in the same waveguide, which creates inter-channel crosstalk. In inter-channel crosstalk, a portion of the optical signal from neighboring wavelengths can leak into one another, causing signal distortion (see Fig. 2, bottom right). This phenomenon is further exacerbated in the presence of multiple MR banks in series, where multiple wavelengths can undesirably drop into an MR. With well-designed channel spacing (CS) and Q-factor in the MR, this can be managed by ensuring that the signal-to-noise ratio (SNR) is better than the detector sensitivity.

Fig. 2: **Top: a microring resonator (MR), showing the input and through ports' wavelengths after imprinting a parameter onto the signal. Bottom: MR bank arrays perform multiplication by imprinting input activations (a\({}_{1}\)-a\({}_{3}\)), followed by weight vector values (W\({}_{1}\)-W\({}_{3}\)).**

Fig. 3: **Overview of the proposed TRON accelerator architecture.**
The design of an MR should ensure an adequate Q-factor to improve the SNR. Additionally, the MR design should also possess a sufficient tunable range, so that the necessary parameters can be imprinted free of error. Mathematically, the tunable range can be represented as 2\(\times\)FWHM (full width at half maximum), shown on the top left in Fig. 2. We optimize the MR design for high FWHM and high SNR. For this optimization, we use the following models from [18]: \[SNR\ (dB)=10\times\log_{10}\left(P_{signal}/P_{noise}\right) \tag{3}\] \[P_{signal}=\Phi\left(\lambda_{i},\lambda_{j},Q\right)P_{g}\left(\lambda_{i},\lambda_{j}\right) \tag{4}\] \[P_{noise}=\sum_{i=1}^{n}\Phi\left(\lambda_{i},\lambda_{j},Q\right)P_{g}\left(\lambda_{i},\lambda_{j}\right)\ (i\neq j), \tag{5}\] where \(\Phi\) is the crosstalk coefficient of the inter-channel crosstalk between neighboring channels \(\lambda_{i}\) and \(\lambda_{j}\), given by: \[\Phi\left(\lambda_{i},\lambda_{j},Q\right)=\left(1+\left(\frac{2Q\left(\lambda_{i}-\lambda_{j}\right)}{\lambda_{j}}\right)^{2}\right)^{-1}. \tag{6}\] Here, \(\left(\lambda_{i}-\lambda_{j}\right)\) represents the channel spacing CS, i.e., the spectral distance between two adjoining wavelengths. This is also an optimizable parameter within the confines of the free spectral range (FSR) we are considering. \(P_{g}\) in (4) and (5) is the signal power of \(\lambda_{i}\) that reaches the MR that is sensitive to \(\lambda_{j}\), and can be defined as: \[P_{g}=\psi\left(\lambda_{i},\lambda_{j}\right)P_{in}(i), \tag{7}\] where \(P_{in}\) is the input power to the waveguide, calculated by considering the detector sensitivity, and \(\psi\) represents the signal power loss of \(\lambda_{i}\) before the MR with resonance wavelength \(\lambda_{j}\) within the bank. When an optical signal in a waveguide passes by an MR, the crosstalk-induced suppression of its power can be modeled as a through loss, which is defined as \(\gamma\) times the signal power before it passes by the MR. This suppression factor \(\gamma\), and hence \(\psi\), can be calculated as follows: \[\gamma\left(\lambda_{i},\lambda_{k},Q\right)=\left(1+\left(\frac{2Q\left(\lambda_{i}-\lambda_{k}\right)}{\lambda_{k}}\right)^{-2}\right)^{-1}, \tag{8}\] \[\psi\left(\lambda_{i},\lambda_{j}\right)=\prod_{k=1}^{j-1}\gamma\left(\lambda_{i},\lambda_{k},Q\right). \tag{9}\] For calculating the FWHM, we use the following model: \[FWHM=\frac{\lambda_{res}}{Q\text{-}factor}, \tag{10}\] where \(\lambda_{res}\) is the resonant wavelength of the MR being considered.
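A small sketch of these models is given below (our own illustration, assuming a 16-channel WDM layout around 1550 nm). Because the index range of the product in (9) is only implied by the text, the through-loss product here runs over the resonances a signal passes before reaching MR \(j\), skipping the signal's own resonance; this should be read as an assumption of the sketch:

```python
import numpy as np

def phi(lam_i, lam_j, Q):
    """Inter-channel crosstalk coefficient, equation (6)."""
    return 1.0 / (1.0 + (2.0 * Q * (lam_i - lam_j) / lam_j) ** 2)

def gamma(lam_i, lam_k, Q):
    """Through-port power suppression factor, equation (8)."""
    return 1.0 / (1.0 + (2.0 * Q * (lam_i - lam_k) / lam_k) ** (-2))

def snr_db(lams, j, Q, P_in=1.0):
    """SNR at the MR resonant at lams[j], equations (3)-(5) with (7) and (9)."""
    def P_g(i):
        psi = 1.0
        for lam_k in lams[:j]:                   # resonances passed before MR j
            if lam_k != lams[i]:                 # skip the signal's own resonance
                psi *= gamma(lams[i], lam_k, Q)
        return psi * P_in
    P_signal = phi(lams[j], lams[j], Q) * P_g(j)           # on resonance, phi = 1
    P_noise = sum(phi(lams[i], lams[j], Q) * P_g(i)
                  for i in range(len(lams)) if i != j)
    return 10.0 * np.log10(P_signal / P_noise)

def fwhm(lam_res, Q):
    """Full width at half maximum, equation (10)."""
    return lam_res / Q

lams = 1550.0 + np.arange(16) * 1.0              # 1 nm channel spacing
print(snr_db(lams, j=8, Q=6500), fwhm(1550.0, 6500))
```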
Using these models, we can identify the optimal design space for our MR banks, one which can ensure a high SNR and a high tunable range (\(R_{tune}\)). We must also consider that the lowest optical power level (\(P_{lpar}\)) should be higher than \(P_{noise}\), w.r.t. \(P_{signal}\): \[10\log_{10}\left(\frac{P_{signal}}{P_{lpar}}\right)<10\log_{10}\left(\frac{P_{signal}}{P_{noise}}\right), \tag{11}\] where \(P_{lpar}\) can be defined in terms of \(P_{signal}\) as follows: \[P_{lpar}=\frac{P_{signal}\times R_{tune}}{N_{levels}}. \tag{12}\] Replacing \(P_{lpar}\) in (11) yields the following relation: \[10\log_{10}\left(\frac{N_{levels}}{R_{tune}}\right)<SNR, \tag{13}\] where \(N_{levels}\) is the number of amplitude levels we need to represent across the available \(R_{tune}\): for an n-bit parameter (ANN weight or bias) representation, \(N_{levels}\) will be \(2^{n}\). If positive and negative values are represented separately, as is the case with _TRON_, then \(N_{levels}\) will be \(2^{n-1}\). The relationship in (13) can be rearranged to obtain the relationship between \(R_{tune}\) and \(SNR\): \[R_{tune}>N_{levels}\times 10^{-\frac{SNR}{10}}. \tag{14}\] Utilizing these models, we can identify the ideal design space for our MR banks, as discussed later in Section 4.1.

### Multi-head Attention (MHA) unit design

The major challenge with transformer inference acceleration is the time-consuming matrix multiplications (MatMuls). Fortunately, these operations can be decomposed into vector dot-product operations, as outlined for optical CNN acceleration in [16]. Looking closely at the self-attention in each head (1), the computation of the MatMul (\(Q.K^{T}\)) cannot be performed until the generation and storage of \(K^{T}\) completes. This dependency would incur significant power and latency overhead, as we would first need to generate the \(K\) matrix (\(K=XW_{K}\)) optically, convert the output to the digital domain, buffer the values, generate \(K^{T}\), and then convert the matrix to the optical domain again to calculate the next MatMul (\(Q.K^{T}\)). Alternatively, using MatMul decomposition, we can rewrite the operation as two cascaded MatMul steps: \[Q.K^{T}=Q.(X.W_{K})^{T}=(Q.W_{K}^{T}).X^{T} \tag{15}\] As shown by the top four MR bank arrays in Fig. 4(a), no intermediate buffering is thus needed to compute \(Q.K^{T}\). The first two MR bank arrays generate \(Q\); then, with \(W_{K}^{T}\) and \(X^{T}\) previously stored and used to tune the MRs in the following two MR bank arrays, we can directly obtain the output of (15) optically, without any intermediate buffering or expensive opto-electric conversions. To further reduce the latency and power overhead, we propose including the scaling factor in (1) within the weight matrix (\(W_{K}^{T}\)) storage in the ECU. As such, the individual MR tuning values would be \(W_{K}^{T}/\sqrt{d_{k}}\), instead of having an additional MR bank array perform the scaling operation. As the value of \(d_{k}\) (the dimension of \(Q\) and \(K\)) is usually 64 in most transformer models, a simple 3-bit shift circuit can efficiently handle the division by \(\sqrt{d_{k}}=8\). For the MatMul operations, most optical ANN accelerators (such as [13]) calculate them one by one, using separate MAC units with MR bank arrays to perform the multiplication operations, and consequently accumulate and add the partial sums. As there are more than two consecutive MatMul operations involved in the attention computation, we avoid the accumulation of intermediate values and pass the individual multiplication results generated by the first MR bank array to the following MR bank arrays directly. The summation of all the multiplications and partial sums is then done at the end, before the softmax block, as shown in Fig. 4(a). This approach avoids the latency and power costs of early summations, intermediate buffering, and the associated opto-electric conversions. Moreover, as outlined in Section 3.1, we have ensured minimal crosstalk noise, which would normally be an issue due to such an MR arrangement.
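The decomposition in equation (15) is a pure linear-algebra identity and can be checked numerically; the short NumPy verification below (with illustrative dimensions) confirms that the cascaded form matches the conventional \(Q.K^{T}\):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 512))        # input sequence X
W_Q, W_K = rng.normal(size=(2, 512, 64))

Q = X @ W_Q                           # Q = X W_Q
K = X @ W_K                           # K = X W_K

lhs = Q @ K.T                         # conventional ordering, needs K buffered
rhs = (Q @ W_K.T) @ X.T               # cascaded form of equation (15)
assert np.allclose(lhs, rhs)          # identical results, no intermediate buffering
```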
Fig. 4: (a) Attention head unit comprised of seven MR bank arrays for MatMul operations, each with dimension \(K\times N\); (b) Linear layer comprised of an MR bank array with dimension \(K\times N\); (c) Add and normalization layers using coherent photonic summation and an MR for imprinting the normalization parameter; (d) MHA unit composed of \(H\) attention heads, a buffer-and-concatenate block, a linear layer, and an add-and-normalize block.

Following the calculation of (\(Q.K^{T}\)) by the upper MR bank arrays shown in Fig. 4(a), all partial sums are accumulated using balanced photodetectors (BPDs). BPDs help accommodate both positive and negative parameter values by placing separate positive and negative arms on the same waveguide. The sum acquired from the negative arm is subtracted by the BPD from the sum acquired from the positive arm. The results are then converted to the digital domain to undergo the softmax computation. Another challenge in the MHA is the softmax operation. It is performed in each attention head and restricts parallelism, as all results from the previous MatMul need to be generated first. For its implementation, we propose two optimization solutions. First, we avoid the computationally expensive division and numerical overflow by employing the log-sum-exp trick, used in a few previous works such as [7], as follows: \[Softmax(x_{i})=\frac{\exp(x_{i}-x_{max})}{\sum_{j=1}^{d_{a}}\exp(x_{j}-x_{max})}=\exp\left(x_{i}-x_{max}-\ln\left(\sum_{j=1}^{d_{a}}\exp(x_{j}-x_{max})\right)\right), \tag{16}\] where softmax is divided into four operations: finding \(x_{max}\), subtraction, the natural logarithm (_ln_), and the exponential (_exp_). Finding \(x_{max}\) and the subtraction can be computed using simple digital circuits. As shown in Fig. 4(a), the analog-to-digital converter (ADC) output is buffered while also being fed to a comparator circuit, so that finding \(x_{max}\) is computed in parallel with the MatMuls. The natural logarithm (_ln_) and exponential (_exp_) computations can be calculated using look-up tables (LUTs) [19]. This also helps obtain the final softmax output as an analog value from the memristor cell in the LUT, which can be used to directly tune the MR bank array. Furthermore, our scaled dot-product attention design enables high parallelism because the bottom vertical-cavity surface-emitting laser (VCSEL) array (Fig. 4(a)) can be synchronized to turn on only when the softmax operation is done. The linear layer in the MHA is also implemented optically, using two MR bank arrays (Fig. 4(b)). For adding the MHA input to its current output (implementing the residual connection), coherent photonic summation is employed, as shown in Fig. 4(c), where the output signal from the linear layer is used to directly drive a VCSEL with wavelength \(\lambda_{w}\). Another VCSEL with the same wavelength is driven by value(_i_) from the residual connection; thus, when the two waveguides meet, the signals undergo interference, resulting in the summation of the two values. Coherent summation is ensured by using a laser phase-locking mechanism [20], which guarantees that the VCSEL output signals have the same phase, so that constructive interference occurs. Lastly, layer normalization (LN) is performed optically using a single MR, tuned by the LN parameter. The entire MHA architecture is shown in Fig. 4(d).
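A software analogue of equation (16) is shown below as a minimal sketch; in TRON itself the max-finding and subtraction are digital, while _ln_ and _exp_ are served by memristor LUTs, as described above:

```python
import numpy as np

def softmax_logsumexp(x):
    """Numerically stable softmax via the log-sum-exp trick of equation (16)."""
    x_max = x.max()                          # found by the comparator circuit
    z = x - x_max                            # digital subtraction
    lse = np.log(np.sum(np.exp(z)))          # ln and exp via LUTs in hardware
    return np.exp(z - lse)                   # exp(x_i - x_max - ln(sum exp(...)))

x = np.array([1e3, 1e3 + 1.0, 1e3 - 2.0])    # a naive exp(x)/sum would overflow
print(softmax_logsumexp(x))                  # ~[0.259, 0.705, 0.035], sums to 1
```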
### Feed Forward (FF) unit design

The FF unit (Fig. 5(a)) is composed of two fully connected (FC) layers, with a non-linear activation in between. Each FC layer is accelerated using two MR bank arrays with dimensions \(K\times N\): one to imprint the input activations and a second to compute the MatMul between the inputs and the weight matrices. The bias values are added using the coherent photonic summation discussed in the previous section. For the non-linear unit, we implemented an optical _RELU_ unit with semiconductor optical amplifiers (SOAs). When the gain in an SOA is adjusted to a value close to 1, its behavior becomes almost linear, resembling the _RELU_ operation. The work in [21] demonstrated how SOAs can be exploited to implement other non-linear functions such as _Sigmoid_ and _tanh_. This expands the scope of _TRON_ and enables us to implement the _GELU_ operation (used in ViT) instead of _RELU_, optically. The _GELU_ operation can be approximated as follows [22]: \[GELU(x)=x\Phi(x)\approx 0.5x\left(1+\tanh\left[\sqrt{2/\pi}\left(x+0.044715x^{3}\right)\right]\right)\approx x\sigma(1.702x), \tag{17}\] where \(\Phi\) is the standard normal CDF and \(\sigma\) is the logistic sigmoid. As shown in Fig. 5(b), the first multiplication, between 1.702 and \(x\), is implemented using a single MR, and the sigmoid function is computed using the SOA implementation described above. The last multiplication, of the input with the sigmoid output, is calculated using two MRs. To store the input signal and use it to tune the second MR, a low-power local storage mechanism is used, in which the analog input signal from the PD is stored in a memristor cell to directly tune the last MR. The output from the non-linear unit is then buffered and used to tune the MRs in the first bank array of the second FC layer (Fig. 5(b)), to be multiplied by the weight matrix (_W2_). Following the second FC layer, the normalization layer is implemented using an MR, the residual connection is added through coherent photonic summation, and the final normalization layer is implemented with another MR.

### TRON architecture

The architecture of _TRON_ (Fig. 3) is designed to accelerate various transformer models. The _TRON_ architecture is composed of two sets of MHA units and one set of FF units; each set has a dimension of \(L\). Such an arrangement enables both the encoder and decoder blocks to easily reuse most of the units. In the case of the encoder block, the first VCSEL array is used to drive the input to the second set of MHA units only. The MHA unit can be divided into two parts: before and after the softmax operation. As softmax (see (1)) cannot be computed until the first part is completed, the two parts cannot be parallelized. However, the MatMul operations in the second part can be parallelized with the MatMul operations in the FF unit. For the decoder block, the first VCSEL array is used to drive the input to the first set of MHA units. Its output is used as the input to the second MHA unit, whose output then drives the FF unit. Moreover, VCSEL reuse, as described, reduces the laser power consumption and inter-channel crosstalk. Accordingly, single VCSEL arrays are shared among the rows in each MR bank array and used to imprint the input activations.
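The two approximations in equation (17) can be compared directly in software; the sketch below (ours) contrasts the tanh form with the sigmoid form \(x\sigma(1.702x)\) that TRON maps onto an MR, an SOA, and a final MR pair:

```python
import numpy as np

def gelu_tanh(x):
    """Tanh approximation of GELU from equation (17)."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def gelu_sigmoid(x):
    """Sigmoid approximation GELU(x) ~ x * sigmoid(1.702 x)."""
    return x / (1.0 + np.exp(-1.702 * x))

x = np.linspace(-4.0, 4.0, 81)
print(np.max(np.abs(gelu_tanh(x) - gelu_sigmoid(x))))   # small, on the order of 1e-2
```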
## 4 Experiments and Results

We performed detailed simulation-based analyses to assess the efficiency of our proposed _TRON_ architecture. Four transformer models were considered in our analyses: Transformer-base [2], BERT-base [3], Albert-base [4], and ViT-base [5]. The model parameters are shown in Table 1, where d\({}_{model}\) and d\({}_{ff}\) are the dimensionality of the input/output and FF layers. We developed a simulator in Python to estimate the area, performance, and energy costs associated with running each model. The area, performance, and energy estimates for all electronic buffers used in _TRON_ were obtained using CACTI [23] at 28 nm, while the electronic circuit in softmax was synthesized using Xilinx Vivado at 28 nm, and the resulting power/delay estimates were used in our analyses. TensorFlow 2.9 was used to train and analyze model accuracy.

Fig. 5: (a) FF block composed of four MR bank arrays with dimensions \(K\times N\), SOA-based RELU and GELU units, and bias and residual connection additions done with coherent photonic summation; (b) GELU unit composed of three MRs, a semiconductor optical amplifier (SOA), and a VCSEL.

The achieved accuracies and the datasets associated with each model are shown in Table 2. The Transformer, BERT, and Albert models were used for NLP tasks (language translation, sentiment analysis). ViT was evaluated on an image classification task, with pre-training on ImageNet and fine-tuning on Cifar-10. Our analysis concluded that 8-bit model quantization results in accuracy comparable to models with full (32-bit) precision (see Table 2); thus, we targeted 8-bit precision transformer models. The optoelectronic parameters considered for _TRON_'s analysis are shown in Table 3. We considered various factors that contribute to photonic signal losses, such as waveguide propagation loss (1 dB/cm), splitter loss (0.13 dB [24]), combiner loss (0.9 dB [25]), MR through loss (0.02 dB [26]), MR modulation loss (0.72 dB [27]), EO tuning loss (6 dB/cm [15]), and TO tuning power (27.5 mW/_FSR_ [14]). Increasing the number of wavelengths and the waveguide length will in turn increase the MR count, the photonic loss, and the required laser power consumption. Accordingly, we modeled the required laser power for each source as: \[P_{laser}-S_{detector}\geq P_{photo\_loss}+10\times\log_{10}N_{\lambda} \tag{18}\] where \(P_{laser}\) is the laser power in dBm, \(S_{detector}\) is the PD sensitivity in dBm, \(N_{\lambda}\) is the number of laser sources/wavelengths, and \(P_{photo\_loss}\) is the total optical loss encountered by the signal due to the factors discussed. In the next subsection, we describe our analyses to determine the optimal values for _TRON_'s architectural parameters \(H\), \(L\), \(K\), and \(N\), which were discussed in Section 3.

### TRON architecture design optimization

The _TRON_ architecture design depends on four key parameters, as discussed in Section 3: \(H\) (the number of heads in the MHA unit), \(L\) (the number of layers), \(K\) (the number of rows), and \(N\) (the number of columns in each MR bank array). We performed an exploration to determine the optimal [\(H\), \(L\), \(K\), \(N\)] configuration for _TRON_, defined as the configuration with the lowest EPB/GOPS, where EPB is energy-per-bit and GOPS is giga-operations-per-second. We also set a maximum power limit of 100 W for the configuration. The result of this exploration is shown in the scatterplot in Fig. 6(a). The optimal configuration, [4,2,5,17], is highlighted with the pink star. This configuration is used in the comparative analyses in the following subsections. For the MR bank design, the models described in Section 3.2 were used to perform another exploration study.
Using the SNR model (3), with the \(R_{tune}\) constraint (14), we explored the MR bank design space to find the parameters [\(R_{tune}\), Q, SNR, CS], with the aim of maximizing the tuning range \(R_{tune}\). We considered \(N_{levels}=2^{8-1}\) (for our 8-bit parameters), an FSR of 20 nm, a Q-factor ranging from 2000 to 8000, and a channel spacing ranging from 0.1 to 1 nm. The result of the exploration is shown in Fig. 6(b), where we have selected the data point with the best \(R_{tune}\): [0.45, 6500, 24.3, 1].

### TRON architecture component-wise analysis

To understand the performance of the major components within the _TRON_ architecture, we present a breakdown of the power and latency of these components in Fig. 7. For power, it is evident that the MatMul operations in the attention heads contribute more than half of the architecture's power overhead. This is because of the large dimensions of the matrices being multiplied in the MHA blocks in each attention head, which requires many digital-to-analog converters (DACs), whose power consumption is considerable. Moreover, the sequential dependency in the attention head also contributes notably to the latency overhead. As Albert shares all attention and FF parameters across layers [4], this minimizes the number of active DACs, reducing the overall power consumption for the Albert model.

### Comparison to state-of-the-art accelerators

We compared _TRON_ against execution on multiple processors and state-of-the-art transformer accelerators: a Tesla V100-SXM2 GPU, TPU v2 [30], an Intel Xeon CPU, TransPIM [6], the FPGA transformer accelerator in [7] (FPGA_Acc1), VAQF [8], and the FPGA transformer accelerator in [9] (FPGA_Acc2). VAQF focuses on vision transformers and FPGA_Acc2 on traditional encoder-decoder transformer architectures and transformer-based language models; results for these two platforms are thus restricted to the models they target. We used the power, latency, and energy values reported for the selected accelerators, and the results from executing the models on the GPU/CPU/TPU platforms, to estimate the EPB and GOPS for each model. The _TRON_ architectural configuration used in the comparisons is the one described in Section 4.1. Fig. 8 shows the GOPS comparison between _TRON_ and the other architectures considered. Our architecture achieves on average 262\(\times\), 1631\(\times\), 1930\(\times\), 14\(\times\), and 55\(\times\) better GOPS than the GPU, TPU, CPU, TransPIM, and FPGA_Acc1, respectively.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Model** & **Params** & **Layers** & **Heads** & **d\({}_{model}\)** & **d\({}_{ff}\)** \\ \hline Transformer-base & 52M & 2 & 8 & 512 & 2048 \\ \hline BERT-base & 108M & 12 & 12 & 768 & 3072 \\ \hline Albert-base & 12M & 12 & 12 & 768 & 3072 \\ \hline ViT-base & 86M & 12 & 12 & 768 & 3072 \\ \hline \end{tabular} \end{table} Table 1: Transformer models and parameter counts

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Model** & **Dataset(s)** & **Accuracy (32-bit)** & **Accuracy (8-bit)** \\ \hline \hline Transformer-base & ted\_hrlr\_translate & 66.73\% & 70.4\% \\ \hline BERT-base & Sentiment-Analysis-of-IMDB-Movie-Reviews & 85.8\% & 85.8\% \\ \hline Albert-base & Sentiment-Analysis-of-IMDB-Movie-Reviews & 88.3\% & 88.7\% \\ \hline ViT-base & ImageNet/Cifar-10 & 97.7\% & 98.0\% \\ \hline \end{tabular} \end{table} Table 2: Transformer model performances

Figure 6: (a) Architectural optimization for _TRON_, to find the optimal [\(H\), \(L\), \(K\), \(N\)] configuration with the best energy-efficiency and throughput; the best configuration is shown with the pink star. (b) MR bank optimization for _TRON_, to identify the optimal [\(R_{tune}\), Q, SNR, CS]; the best design point, with the highest \(R_{tune}\), is shown with the pink star.

Figure 7: Power and latency breakdown across _TRON_ components.

When comparing transformer model-specific accelerators, _TRON_ has on average 352\(\times\) higher GOPS than FPGA_Acc2 for the Transformer, BERT, and Albert models, and 846\(\times\) higher GOPS than VAQF for ViT. The higher throughput over all compute platforms can be explained by _TRON_'s high-speed execution in the optical domain and minimal computations in the digital/electric domain. Fig. 9 shows the energy-per-bit (EPB) comparison. On average, _TRON_ attains 4231\(\times\), 12397\(\times\), 10971\(\times\), 14\(\times\), and 8\(\times\) lower EPB than the GPU, TPU, CPU, TransPIM, and FPGA_Acc1, respectively. For model-specific accelerators, we achieve on average 802\(\times\) lower EPB than FPGA_Acc2 for the Transformer, BERT, and Albert models, and 32\(\times\) lower EPB than VAQF for ViT. These EPB improvements can be attributed to _TRON_'s low-latency operations and relatively lower power compared to some of the computation platforms considered.

### TRON for edge environments

Edge computing environments have stringent power constraints for accelerators. We performed a design-space exploration, similar to that described in Section 4.1, to find an edge-friendly _TRON_ configuration with a power limit of 10 W (instead of the 100 W we considered earlier). We identified the optimal edge configuration values for [\(H\), \(L\), \(K\), \(N\)] as [4, 1, 12, 12]. Fig. 10 illustrates a comparison of the average power, GOPS, and EPB values across models among _TRON_edge, _TRON_, and the platforms previously discussed. The values shown are normalized to those obtained for the CPU. Our _TRON_ edge accelerator consumes considerably lower power on average (\(\sim\)10 W). While the GOPS values decrease slightly, the edge configuration's throughput still outperforms all compute and accelerator platforms by at least 16%. The EPB value for _TRON_edge is higher than for _TRON_, but it is still on average 4\(\times\) to 6292\(\times\) lower than all other platforms. In this manner, TRON can be customized to provide the best performance and energy-efficiency for any given target power consumption constraint.
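Both explorations (Sections 4.1 and 4.4) amount to a constrained grid search over [\(H\), \(L\), \(K\), \(N\)]; the schematic sketch below makes the selection rule explicit, with `cost_model` standing in for the Python simulator, whose internals are not specified in the text and are therefore left as a placeholder:

```python
import itertools

def explore(configs, power_limit_w, cost_model):
    """Pick the [H, L, K, N] configuration minimizing EPB/GOPS under a power cap.

    cost_model(H, L, K, N) -> (power_w, gops, epb) is a hypothetical stand-in
    for the area/performance/energy simulator described in Section 4.
    """
    best, best_score = None, float("inf")
    for H, L, K, N in configs:
        power_w, gops, epb = cost_model(H, L, K, N)
        if power_w > power_limit_w:          # discard configurations over budget
            continue
        score = epb / gops                   # the optimization objective
        if score < best_score:
            best, best_score = (H, L, K, N), score
    return best

grid = itertools.product(range(1, 13), range(1, 4), range(1, 20), range(1, 20))
# explore(grid, power_limit_w=100, cost_model=...)   # use 10 W for the edge variant
```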
## 5 Conclusions

In this paper, we presented the first non-coherent silicon photonic hardware transformer accelerator, called _TRON_. Our proposed accelerator architecture exhibited throughput improvements of at least 14\(\times\) and energy-efficiency improvements of at least 8\(\times\) when compared to eight different processing platforms and state-of-the-art transformer accelerators. These results demonstrate the promise of _TRON_ in terms of energy-efficiency and high-throughput inference acceleration for transformer neural networks. This work focused on the hardware architecture design with silicon photonics. When combined with software optimization techniques that aim to reduce a transformer's large memory footprint, significantly better throughput and energy efficiency can be achieved.
2308.11809
Expressive probabilistic sampling in recurrent neural networks
In sampling-based Bayesian models of brain function, neural activities are assumed to be samples from probability distributions that the brain uses for probabilistic computation. However, a comprehensive understanding of how mechanistic models of neural dynamics can sample from arbitrary distributions is still lacking. We use tools from functional analysis and stochastic differential equations to explore the minimum architectural requirements for $\textit{recurrent}$ neural circuits to sample from complex distributions. We first consider the traditional sampling model consisting of a network of neurons whose outputs directly represent the samples (sampler-only network). We argue that synaptic current and firing-rate dynamics in the traditional model have limited capacity to sample from a complex probability distribution. We show that the firing rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution. We call such circuits reservoir-sampler networks (RSNs). We propose an efficient training procedure based on denoising score matching that finds recurrent and output weights such that the RSN implements Langevin sampling. We empirically demonstrate our model's ability to sample from several complex data distributions using the proposed neural dynamics and discuss its applicability to developing the next generation of sampling-based brain models.
Shirui Chen, Linxing Preston Jiang, Rajesh P. N. Rao, Eric Shea-Brown
2023-08-22T22:20:39Z
http://arxiv.org/abs/2308.11809v3
# Expressive probabilistic sampling in recurrent neural networks

###### Abstract

In sampling-based Bayesian models of brain function, neural activities are assumed to be samples from probability distributions that the brain uses for probabilistic computation. However, a comprehensive understanding of how mechanistic models of neural dynamics can sample from arbitrary distributions is still lacking. We use tools from functional analysis and stochastic differential equations to explore the minimum architectural requirements for _recurrent_ neural circuits to sample from complex distributions. We first consider the traditional sampling model consisting of a network of neurons whose outputs directly represent the samples (_sampler-only_ network). We argue that synaptic current and firing-rate dynamics in the traditional model have limited capacity to sample from a complex probability distribution. We show that the firing rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution. We call such circuits _reservoir-sampler networks_ (RSNs). We propose an efficient training procedure based on denoising score matching that finds recurrent and output weights such that the RSN implements Langevin sampling. We empirically demonstrate our model's ability to sample from several complex data distributions using the proposed neural dynamics and discuss its applicability to developing the next generation of sampling-based Bayesian brain models.

## 1 Introduction

There is growing evidence that humans and other animals make decisions by representing uncertainty internally and carrying out probabilistic computations that approximate Bayesian inference [42; 30; 18; 23; 34]. How networks of neurons in the brain represent probability distributions for Bayesian inference has remained a major open question. There exist two major theories: one assumes that the neural activities encode the parameters of the underlying posterior distributions over sensory stimuli [5; 35; 52]. The other is the sampling-based hypothesis, which assumes that the neural responses can be interpreted as samples from a posterior distribution [28]. Under this hypothesis, recurrent neural circuits make use of their inherent stochasticity to produce samples from posterior distributions. The sampling-based theory has explained various experimental observations regarding neural variability [19; 20; 40], perceptual decision-making [24] and spontaneous cortical activity [7]. Many studies have proposed biologically plausible spiking rules and membrane dynamics models to implement sampling-based probabilistic inference. However, most of these studies mainly consider sampling from discrete Boltzmann distributions [12] and multivariate Gaussian distributions [17; 1; 38; 25], only match the first two moments of the distribution [19], or employ a Monte-Carlo approximation [29]. Although these studies use algorithmic substrates that can sample from any distribution with a density function _in theory_, it is not clear whether the underlying neural dynamics are capable of implementing a sufficiently expressive version of these sampling methods. Furthermore, natural image statistics are strongly non-Gaussian [39], and experimental evidence shows that humans use non-Gaussian prior representations to support cognitive judgments [27; 22].
It is known that deep artificial neural networks can be used to generate samples from complex data distributions [48; 49] using a "U-net" [43] backbone. However, the neural circuits in the cortex are highly coupled, with an abundance of recurrent synapses. Therefore, an outstanding question for probabilistic computation in the brain is: what kind of recurrent neural network is capable of efficiently learning to produce samples from an arbitrarily complex posterior distribution? In this paper, we study this question under the basic assumption that the dynamics of recurrent neural circuits can be described by stochastic differential equations (SDEs). Note that this assumption is common to a broad range of past research that uses rate-based neural dynamics to implement sampling-based coding [1; 19; 17]. Moreover, spike-based models [46; 45; 38] that implement balanced spiking networks (BSNs) [11; 13] essentially train spiking networks to follow underlying continuous-time SDEs, so our work applies to this line of research as well (details in Appendix G). The contributions of this paper are as follows (Figure 1):

1. We establish the relationship between the sampling power of the neural dynamics and the ability of the dynamics to approximate the score function, which is the gradient of the log probability density function. (Section 3.1)
2. We show that the synaptic current dynamics of a network of neurons whose outputs directly represent the samples (the traditional sampler-only network) is only able to approximate score functions that lie in a finite-dimensional function space. (Section 3.2, Proposition 2)
3. We prove that the firing rate dynamics of our proposed reservoir-sampler network can sample from a distribution whose score function can approximate that of arbitrary target distributions (with mild restrictions) to arbitrary precision. (Section 3.3, Theorem 3)
4. We derive a computationally efficient and biologically plausible learning rule for our proposed model (Section 3.4) that sidesteps the demands of backpropagation through time, and we empirically demonstrate how our model can sample from several complex data distributions (Section 4).

An interpretation of our contributions in biological terms is as follows:

1. Flexible synaptic connectivity within a circuit itself is not enough to allow that circuit to flexibly produce arbitrary patterns of variability involving all of its neurons.
2. In order for the circuit to achieve full flexibility in its output patterns, there need to be hidden variables involved, e.g., states of neurons in an upstream brain area or possibly non-synaptic signaling. If this condition is met, concrete and fairly efficient plasticity rules may be capable of shaping the output patterns as desired.

## 2 Background

### Fokker-Planck equation and stationary distribution

We consider a general time-homogeneous SDE with drift vector \(\mathbf{b}(X_{t})\) and diffusion matrix \(\Sigma=\frac{1}{2}\sigma\sigma^{T}\): \[dX_{t}=\mathbf{b}(X_{t})dt+\sigma dB_{t} \tag{1}\] where \(\sigma\in\mathbb{R}^{n\times m}\) is the diffusion coefficient and \(B_{t}\) is an \(m\)-dimensional standard Wiener process. The Fokker-Planck equation of this SDE describes the time evolution of the probability density \(p(x,t)\) for a given SDE, assuming the initial density \(p(x,0)\) is known: \[\partial_{t}p=\nabla\cdot(\Sigma\nabla p-\mathbf{b}p) \tag{2}\] where \(\partial_{t}=\partial/\partial t\), \(\nabla\cdot\) is the divergence operator, and \(\nabla\) is the gradient operator.
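For intuition, equation (1) can be simulated directly with an Euler-Maruyama integrator; the minimal sketch below (ours) uses an Ornstein-Uhlenbeck drift, whose stationary density is the Gaussian predicted by the Fokker-Planck equation (2):

```python
import numpy as np

def euler_maruyama(b, sigma, x0, dt=1e-3, n_steps=10_000, seed=0):
    """Simulate dX_t = b(X_t) dt + sigma dB_t (equation (1)) on a time grid."""
    rng = np.random.default_rng(seed)
    n, m = sigma.shape
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps, n))
    for t in range(n_steps):
        dB = rng.normal(scale=np.sqrt(dt), size=m)   # Wiener increments
        x = x + b(x) * dt + sigma @ dB
        traj[t] = x
    return traj

# Ornstein-Uhlenbeck example: b(x) = -x with sigma = sqrt(2) I, so Sigma = I and
# the stationary density is the standard Gaussian (zero of the Fokker-Planck RHS).
traj = euler_maruyama(lambda x: -x, np.sqrt(2.0) * np.eye(2), x0=[3.0, -3.0])
print(traj[5000:].mean(axis=0), traj[5000:].var(axis=0))  # ~0 mean, ~unit variance
```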
A stationary probability density function is one for which the right-hand side of equation (2) is 0. Mild regularity conditions guaranteeing the existence and uniqueness of a stationary density function are discussed in, for example, Cerrai [14]. We assume these are satisfied by the SDEs considered in this paper. Moreover, to ensure ergodicity and a well-defined score function, we assume that \(p\) is supported on \(\mathbb{R}^{n}\), i.e., \(p(x)>0\) for all \(x\in\mathbb{R}^{n}\). As a special case, consider the following Langevin dynamics, in which the drift term is given by the gradient of the log stationary probability density \(\nabla\log p(x)\): \[dX_{t}=\nabla\log p(X_{t})dt+\sqrt{2}dB_{t}. \tag{3}\] It can be verified through the Fokker-Planck equation that \(p(x)\) in the dynamics above is indeed a stationary probability density function of the dynamics. Therefore the Langevin dynamics can sample from the distribution \(p(x)\) as \(t\rightarrow\infty\). While the drift term in Langevin dynamics is a gradient vector field, this is not true in general and is often not the case for recurrent neural dynamics (Proposition 5 in Appendix B).

Figure 1: **Reservoir-Sampler Networks versus Traditional Sampler-Only Networks. The sampler neurons that produce samples from the target distribution are present for both the sampler-only (SO) network and the reservoir-sampler (RS) network. The reservoir neurons (shown within the large black circle) are only present in the RS network. We explore the firing rate (FR) dynamics (Section 3.3) for both networks and the synaptic current (SC) dynamics (Section 3.2) only for the sampler-only network, because the stationary distribution of the output neurons is intractable in the RS-SC case. We evaluate the approximation power of the function set represented by the drift term of the neural dynamics, and list the number of basis functions that span the set, whether these basis functions are fixed, and whether the function set is closed under addition (shown in the table).**

### Score-based generative modeling

If we would like a particular dynamics to implement Langevin dynamics, we need to fit the drift term of an SDE to the score \(s_{\theta}(\mathbf{x})=\nabla\log p(\mathbf{x})\) of the probability distribution that we are trying to sample from. This procedure of fitting the score function \(s_{\theta}(\mathbf{x})\) is called score matching. In this section, we give a brief summary of one of the major methods of score matching that we will use in this paper, denoising score matching (DSM) [51]. The general idea of DSM is to match the score of a noise-perturbed version of the target distribution. More specifically, the explicit score matching loss (left-hand side) and the denoising score matching loss (right-hand side) are related to each other as follows [51]: \[\mathbb{E}_{q_{\sigma}(\tilde{\mathbf{x}})}\left[\frac{1}{2}\left\|s_{\theta}(\tilde{\mathbf{x}})-\nabla_{\tilde{\mathbf{x}}}\log q_{\sigma}(\tilde{\mathbf{x}})\right\|^{2}\right]=\mathbb{E}_{q_{\sigma}(\tilde{\mathbf{x}},\mathbf{x})}\left[\frac{1}{2}\left\|s_{\theta}(\tilde{\mathbf{x}})-\nabla_{\tilde{\mathbf{x}}}\log q_{\sigma}(\tilde{\mathbf{x}}|\mathbf{x})\right\|^{2}\right]+C_{\sigma} \tag{4}\] where \(\tilde{\mathbf{x}}\) is a noise-perturbed version of \(\mathbf{x}\), so \(q_{\sigma}(\tilde{\mathbf{x}}|\mathbf{x})=\mathcal{N}(\mathbf{x},\sigma^{2}I)\), and \(C_{\sigma}\) is a constant depending on \(\sigma\) but _not_ on \(\theta\). When the noise is 0, i.e., \(\tilde{\mathbf{x}}=\mathbf{x}\), the left-hand side of the equation above is the explicit score matching loss.
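A minimal PyTorch sketch of the denoising objective on the right-hand side of equation (4) is given below; the network architecture, batch, and noise scale are illustrative choices rather than the paper's training setup:

```python
import torch

def dsm_loss(score_net, x, sigma):
    """Denoising score matching, right-hand side of equation (4).

    For q_sigma(x_tilde | x) = N(x, sigma^2 I), the regression target is
    grad_{x_tilde} log q = (x - x_tilde) / sigma^2.
    """
    noise = torch.randn_like(x) * sigma
    x_tilde = x + noise
    target = -noise / sigma**2                      # (x - x_tilde) / sigma^2
    return 0.5 * ((score_net(x_tilde) - target) ** 2).sum(dim=-1).mean()

# Any trainable map R^n -> R^n can serve as s_theta, e.g. a small MLP:
score_net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Softplus(), torch.nn.Linear(64, 2))
x = torch.randn(128, 2)                             # a batch of data samples
loss = dsm_loss(score_net, x, sigma=0.1)
loss.backward()                                     # fit s_theta to the noisy score
```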
Although in theory we can start from extremely small Gaussian noise and directly optimize the right-hand side of equation (4), empirically it is beneficial to start with large Gaussian noise and gradually decrease the noise magnitude until \(q_{\sigma}(\mathbf{x})\approx p(\mathbf{x})\) and \(s_{\theta}(\mathbf{x})\approx\nabla_{\mathbf{x}}\log q_{\sigma}(\mathbf{x})\approx\nabla_{\mathbf{x}}\log p(\mathbf{x})\). Specifically, this has been shown to stabilize the training and improve score estimation [48].

## 3 Methods

### Do we really need to match the score?

The Langevin dynamics provide an elegant way to construct an SDE with a given stationary distribution. However, as noted above, neural network dynamics seldom have a drift term that is a gradient field (Appendix B). A natural question is therefore whether an SDE with a drift term that is _not_ a gradient field (also known as irreversibility) also gives rise to a specific stationary distribution. The answer requires us to look at the Fokker-Planck equation: \[\partial_{t}p(\mathbf{v},t)=\nabla\cdot(\Sigma\nabla p-pF_{\theta}(\mathbf{v})) \tag{5}\] where \(p(\mathbf{v},t)\) is the probability density function of the variable of interest \(\mathbf{v}\) at time \(t\), \(F_{\theta}(\mathbf{v})\) is the drift term of the neural dynamics parametrized by \(\theta\), and \(\Sigma=\frac{1}{2}\sigma\sigma^{T}\) is the diffusion matrix. Since the right-hand side of (5) needs to be 0 for a given stationary distribution \(p(\mathbf{v})\), we have \[\nabla\cdot(\Sigma\nabla p-pF_{\theta}(\mathbf{v}))=0. \tag{6}\] Therefore \(G:=-\Sigma\nabla p+pF_{\theta}(\mathbf{v})\) needs to be a divergence-free (DF) vector field. In other words, \(pF_{\theta}(\mathbf{v})\) is unique up to a DF field given a fixed stationary distribution \(p\). Ma et al. [36] show that there exists a skew-symmetric matrix \(Q\) such that the DF field can be written as \(G=Q\nabla p+p\sum_{j}\frac{\partial}{\partial\mathbf{v}_{j}}Q_{ij}\); however, this does not shed more light on how expressive \(\left\{F_{\theta}\right\}_{\theta}\) needs to be without more knowledge of \(Q\) and its derivative. We show below that, under certain conditions, the DF field \(G\) can be regarded as a component that is orthogonal to the score function. Therefore the function space \(\left\{F_{\theta}\right\}_{\theta}\) needs to have enough basis functions so that (when projected) it is able to approximate the score function of the target distribution. We first note that it would be convenient if \(F_{\theta}(\mathbf{v})\) could approximate the gradient fields \(\Sigma^{-1}\nabla\log p\) for any \(p\), in which case \(G=0\). To find out whether this is a necessary condition for the dynamics to sample from an arbitrary target distribution \(p\), we let \(\Sigma=\mathbf{I}\) and invoke the Helmholtz-Hodge decomposition (HHD) [8; 37]. The decomposition theorem states that any sufficiently smooth vector field in \(L^{2}(\mathbb{R}^{n};\mathbb{R}^{n})\) can be uniquely decomposed into a DF vector field and a pure gradient field. In other words, the function space of all DF fields \(G\) and the function space of all gradient fields \(\nabla p\) are orthogonal complements of each other. Therefore the projection of \(pF_{\theta}(\mathbf{v})\) onto the subspace of smooth gradient fields still needs to be able to approximate \(\nabla p\), despite the freedom to choose an arbitrary DF field \(G\). For example, in the 1-D case, if we assume that both \(pF_{\theta}(\mathbf{v})\) and \(\nabla p\) are square-integrable (so they vanish at infinity), then the divergence-free vector field \(G\) (which must be constant in 1-D) is 0, and therefore \(F_{\theta}(\mathbf{v})=\nabla\log p\). As a result, \(\left\{F_{\theta}\right\}_{\theta}\) indeed needs to be able to approximate \(\nabla\log p\) for every \(p\). In higher dimensions, the same conclusion holds under the assumption of a strict orthogonality constraint:

**Proposition 1**.: _Let \(p\) be the stationary distribution of the neural dynamics, and let the diffusion matrix be the identity matrix. If the DF field \(G\) is strictly orthogonal to the gradient field \(\nabla p\), meaning that \(G(\mathbf{v})\cdot\nabla p(\mathbf{v})=0\) for all \(\mathbf{v}\), then the drift term \(F_{\theta}(\mathbf{v})\) can be written as the sum of a divergence-free field \(p^{-1}G\) and a gradient field \(\nabla\log p\)._
Returning to the projection argument above: in the 1-D case, for example, if we assume that both \(pF_{\theta}(\mathbf{v})\) and \(\nabla p\) are square-integrable (so they vanish at infinity), then the divergence-free vector field \(G\) (which must be constant in 1-D) is 0, and therefore \(F_{\theta}(\mathbf{v})=\nabla\log p\). As a result, \(\left\{F_{\theta}\right\}_{\theta}\) indeed needs to be able to approximate \(\nabla\log p\) for every \(p\). In higher dimensions, the same conclusion holds under the assumption of a strict orthogonality constraint: **Proposition 1**.: _Let \(p\) be the stationary distribution of the neural dynamics, and the diffusion matrix be the identity matrix. If the DF field \(G\) is strictly orthogonal to the gradient field \(\nabla p\), meaning that \(G(\mathbf{v})\cdot\nabla p(\mathbf{v})=0\) for all \(\mathbf{v}\), then the drift term \(F_{\theta}(\mathbf{v})\) can be written as the sum of a divergence-free field \(p^{-1}G\) and a gradient field \(\nabla\log p\)._ The proof and further details are in Appendix A. The form of the above decomposition coincides with that of the HHD [8; 9]. Therefore, if we enforce the _normal-parallel_ boundary condition [15] for the gradient component, the HHD theorem [8] says that, given the strict orthogonality constraint on \(G\), the orthogonal projection of the function space \(\{F_{\theta}\}_{\theta}\) onto the space of gradient fields is the function space of gradient fields \(\{\nabla\log p\}_{p}\) satisfying the boundary condition. The upshot is that \(\{F_{\theta}\}_{\theta}\) needs to admit enough basis functions (Appendix A). Note that the boundary condition will not be restrictive if we take a sufficiently large bounded region. Therefore it is essential for the neural dynamics to have an expressive functional form that is able to approximate complex score functions, even if their dynamics are not gradient fields. Previous work has rigorously established that the strict orthogonality constraint holds, in particular, when the nonlinear dynamics are linearized around fixed points of \(\mathbf{v}\) (where the drift term is zero [2; 31; 41]). As a consequence, the conditions of Proposition 1 are true locally around fixed points. In what follows, we assume that the conditions of Proposition 1 hold. Under this assumption, without loss of generality, we set \(G\) to be \(\mathbf{0}\) in the following text and explore whether \(\Sigma^{-1}F_{\theta}\), which is determined by the specific neural dynamics (Figure 1), is able to approximate complex score functions \(\nabla\log p\). ### Synaptic current dynamics: sampler-only networks with limited capacity We consider the following stochastic synaptic current dynamics [16] (cf. eq. 
7.39) that describe a recurrent neural network in terms of the synaptic current that each neuron receives: \[d\mathbf{v}=D(-\mathbf{v}+W\phi(\mathbf{v})+I)dt+\sigma d\mathbf{B}_{t}:=F_{\theta}^{\mathrm{SC}}(\mathbf{v})dt+\sigma d\mathbf{B}_{t} \tag{7}\] where \(F_{\theta}^{\mathrm{SC}}(\mathbf{v}):=D(-\mathbf{v}+W\phi(\mathbf{v})+I)\), \(\mathbf{v}=[v_{1},\cdots,v_{m}]^{T}\in\mathbb{R}^{m}\) is the synaptic current of the \(m\) neurons in the recurrent network, \(D\in\mathbb{R}^{m\times m}\) is a diagonal matrix whose diagonal elements are the decay constants, \(d_{i}=\tau_{i}^{-1}\), \(W\in\mathbb{R}^{m\times m}\) is the connection matrix, \(\phi(\cdot)\) is a nonlinear transfer function2, \(I\in\mathbb{R}^{m}\) is the external input, \(\sigma\in\mathbb{R}^{m\times l}\) is the diffusion coefficient, and \(\mathbf{B}_{t}\) is an \(l\)-dimensional standard Wiener process. The diffusion term can be interpreted as input from other brain areas (due to the large number of incoming connections, the assumption of Gaussianity can be justified by the central limit theorem [21]). We assume that \(\theta=\{D,W,I\}\) are parameters tunable through biological learning. To show the limited expressivity of \(F_{\theta}^{\mathrm{SC}}\), we have the following corollary of the Hilbert projection theorem showing that \(F_{\theta}^{\mathrm{SC}}\) is only able to approximate functions in a finite-dimensional function space. Footnote 2: We later need it to be non-polynomial in Theorem 3. **Proposition 2**.: _Let \(H:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\) be a function in the Hilbert space \(L_{2}(\mathbb{R}^{m},\mathbb{R}^{m};p)\). Let \(\Pi\) be the orthogonal projection operator onto the vector subspace_ \[E=\left\{A\mathbf{v}+B\phi(\mathbf{v})+I|A,B\in\mathbb{R}^{m\times m},I\in\mathbb{R}^{m\times 1}\right\}\] _If \(\|H-\Pi H\|>0\), then \(\inf_{\theta}\left\|H(\mathbf{v})-F_{\theta}^{\mathrm{SC}}(\mathbf{v})\right\|\geq\|(1-\Pi)H\|>0\)._ Proof of the proposition is given in Appendix B.1. The proposition says that no matter how the parameters of \(F_{\theta}^{\mathrm{SC}}\) are tuned, the difference between \(F_{\theta}^{\mathrm{SC}}\) and the target function cannot approach \(0\), and the lower bound of the error is given by the norm of the component of the target function that is orthogonal to the finite-dimensional function space \(E\). Therefore, synaptic current dynamics have a limited ability to match the score function and hence a limited ability to sample from complex probability distributions under the strict orthogonality constraint in Section 3.1. The conclusion holds even if we let the diffusion coefficient \(\sigma\) be tunable. Since \(\Sigma^{-1}\) is linear, \(\left\{F_{\theta}^{\mathrm{SC}}\right\}_{\theta}\) shares the same set of basis functions as \(\left\{\Sigma^{-1}F_{\theta}^{\mathrm{SC}}\right\}_{\theta,\sigma}\). As we will see in the next section, the firing rate dynamics of a recurrent neural circuit with a separate output layer (a reservoir-sampler network) and a learnable diffusion coefficient \(\sigma\) can sample from arbitrary stationary distributions. ### Firing-rate dynamics could be a universal sampler #### 3.3.1 Sampler-only networks In this section, we consider the firing rate dynamics [16] (cf. eq. 7.11) that describe a recurrent neural circuit in terms of the firing rates of the neurons. 
We first consider the sampler-only network: \[d\mathbf{r}=D(-\mathbf{r}+\phi(W_{\mathrm{rec}}\mathbf{r}+I))dt+\sigma dB_{t}:=F_{\theta}^{\mathrm{FR}}(\mathbf{r})dt+\sigma d\mathbf{B}_{t}. \tag{8}\] Here, \(F_{\theta}^{\rm FR}(\mathbf{r})=D(-\mathbf{r}+\phi(W_{\rm rec}\mathbf{r}+I))\) and \(D\) is a diagonal matrix with decay constants as its diagonal elements. The stationary solution of the corresponding Fokker-Planck equation satisfies \[\nabla\cdot(\Sigma\nabla p-pF_{\theta}^{\rm FR})=0 \tag{9}\] where \(\Sigma:=\frac{1}{2}\sigma\sigma^{T}\) is symmetric positive definite (SPD). Here \(\theta=\{D,W_{\rm rec},I\}\) are tunable parameters. If \(\Sigma\) is invertible, equation (9) becomes \(\nabla\cdot(\Sigma(\nabla p-p\Sigma^{-1}F_{\theta}^{\rm FR}))=0\). Therefore if \(\Sigma^{-1}F_{\theta}^{\rm FR}=\nabla\log p\) is a gradient field, then the stochastic dynamics of the recurrent neural network described by equation (8) have a stationary distribution \(p^{*}\) such that the score of this distribution \(\nabla\log p^{*}=\Sigma^{-1}F_{\theta}^{\rm FR}\). Compared to the synaptic current dynamics, where we could only have the functional basis \(\mathbf{v}_{i}\) and \(\phi(\mathbf{v})_{i}\), we can now freely choose the functional basis spanning \(\left\{F_{\theta}^{\rm FR}\right\}_{\theta}\) depending on \(W_{\rm rec}\) and \(I\); but since there is no linear term before the nonlinear transformation \(\phi\), the function set \(\left\{F_{\theta}^{\rm FR}\right\}_{\theta}\) is not closed under addition. If we view \(\Sigma^{-1}F_{\theta}^{\rm FR}\) as a neural network with one hidden layer, the number of hidden neurons must be the same as the input dimension, and the diffusion matrix \(\Sigma\) (hence \(\Sigma^{-1}\)) is restricted to be an SPD matrix. Therefore, we do not get universal approximation power from \(\Sigma^{-1}F_{\theta}^{\rm FR}\), and combined with the results in Section 3.2, we see that an RNN by itself does not intrinsically produce samples from arbitrary distributions. As we will see below, if we let a population of output neurons receive inputs from a large reservoir of recurrently connected neurons (a reservoir-sampler network), we are able to obtain samples from complex distributions from the output neurons. #### 3.3.2 Reservoir-sampler networks Now we consider the reservoir-sampler network where there is a linear readout layer \(W_{\rm out}\in\mathbb{R}^{m\times n}\) of the reservoir whose dynamics are given by equation (8) (see also the upper row of Figure 1). As a special case of Itô's lemma, we have \[\begin{split} W_{\rm out}d\mathbf{r}=dW_{\rm out}\mathbf{r}&=W_{\rm out}F_{\theta}^{\rm FR}dt+W_{\rm out}\sigma dB_{t}\\ &=(-W_{\rm out}D\mathbf{r}+W_{\rm out}D\phi(W_{\rm rec}\mathbf{r}+I))dt+W_{\rm out}\sigma dB_{t}.\end{split} \tag{10}\] Now we assume that \(W_{\rm rec}\) is the product of \(\widetilde{W}_{\rm rec}\) and \(W_{\rm out}\), i.e. \(W_{\rm rec}=\widetilde{W}_{\rm rec}W_{\rm out}\), and that \(D=\alpha\mathbf{I}\) is a scaled identity matrix. 
If we denote the output of the recurrent neural network as \(\mathbf{x}:=W_{\rm out}\mathbf{r}\in\mathbb{R}^{m}\), we derive the following stochastic dynamics for the output neurons: \[d\mathbf{x}=(-\alpha\mathbf{x}+\alpha W_{\rm out}\phi(\widetilde{W}_{\rm rec}\mathbf{x}+I))dt+W_{\rm out}\sigma dB_{t}:=\widetilde{F}_{\theta}^{\rm FR}(\mathbf{x})dt+\tilde{\sigma}dB_{t} \tag{11}\] where \(\widetilde{F}_{\theta}^{\rm FR}(\mathbf{x})=(-\alpha\mathbf{x}+\alpha W_{\rm out}\phi(\widetilde{W}_{\rm rec}\mathbf{x}+I))\) and \(\tilde{\sigma}=W_{\rm out}\sigma\). Therefore, in order for the output neurons to sample from a stationary distribution \(p\), we need \(s_{\beta}(\mathbf{x})=(\frac{1}{2}\tilde{\sigma}\tilde{\sigma}^{T})^{-1}\widetilde{F}_{\theta}^{\rm FR}(\mathbf{x}):=\widetilde{\Sigma}^{-1}\widetilde{F}_{\theta}^{\rm FR}(\mathbf{x})\) to match the score \(\nabla\log p(\mathbf{x})\). Here \(\beta=\left\{\widetilde{W}_{\rm rec},W_{\rm out},I,\sigma\right\}\) are tunable parameters. Additionally, we assume that \(\widetilde{F}_{\theta}^{\rm FR}\) is \(\mathbf{0}\) outside a reasonable range of \(\mathbf{x}\). This assumption is used to prevent \(s_{\beta}(\mathbf{x})\) from behaving wildly outside the bounded region on which \(s_{\beta}(\mathbf{x})\) has the expressivity to match the score. The following theorem proves that with a large enough number of reservoir neurons, the score-matching loss can be arbitrarily small. The proof is given in Appendix C. **Theorem 3**.: _Suppose that we are given a probability distribution with continuously differentiable density function \(p(\mathbf{x}):\mathbb{R}^{m}\rightarrow\mathbb{R}^{+}\) and score function \(\nabla\log p(\mathbf{x})\) for which there exist constants \(M_{1},M_{2},a,k>0\) such that_ \[p(\mathbf{x}) <M_{1}e^{-a\left\|\mathbf{x}\right\|} \tag{12}\] \[\left\|\nabla\log p(\mathbf{x})\right\|^{2} <M_{2}\left\|\mathbf{x}\right\|^{k} \tag{13}\] _when \(\left\|\mathbf{x}\right\|>L\) for large enough \(L\). Then for any \(\varepsilon>0\), there exists a recurrent neural network whose firing-rate dynamics are given by (11), whose recurrent weights, output weights, and diffusion coefficient are given by \(W_{\rm rec}\in\mathbb{R}^{n\times n}\) of rank \(m\), \(W_{\rm out}\in\mathbb{R}^{m\times n}\), and \(\sigma\in\mathbb{R}^{n\times m}\) respectively, such that, for a large enough \(n\), the score of the stationary distribution of the output units \(s_{\beta}(\mathbf{x})\) satisfies \(\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[\left\|\nabla\log p(\mathbf{x})-s_{\beta}(\mathbf{x})\right\|^{2}]<\varepsilon\)._ This theorem says that for any realistic data distribution with a smooth positive density function, there always exists a reservoir of recurrently-connected neurons whose output units give samples from a distribution whose score function approximates that of the data distribution to arbitrary precision. Given the bound on the score matching error, Block et al. [10] (cf. Theorem 13) give bounds in Wasserstein-2 distance between the stationary distribution of the trained recurrent dynamics and the true data distribution. It is also worth noting that the recurrent weight matrix of the neural circuit in the theorem is low-rank, so regardless of how many neurons there are in the reservoir, we can always find a low-rank recurrent weight matrix such that the output neurons sample from a correspondingly low-dimensional distribution. ### An efficient RNN weight learning algorithm The training procedure is derived from the proof of Theorem 3. 
The main idea is to first train an auxiliary neural network with one hidden layer, then transfer the weights of the auxiliary network to the weights of the recurrent network that we are considering. More specifically, we first optimize an auxiliary feedforward neural network with one hidden layer, \(W_{\mathrm{out}}\phi(\widetilde{W}_{\mathrm{rec}}\mathbf{x}+I)\), using backpropagation with the denoising score matching loss (4) such that \[2\alpha(W_{\mathrm{out}}\phi(\widetilde{W}_{\mathrm{rec}}\mathbf{x}+I)-\mathbf{x})\approx\nabla\log p(\mathbf{x}). \tag{14}\] Then we can calculate the diffusion coefficient \(\sigma=W_{\mathrm{out}}^{T}(W_{\mathrm{out}}W_{\mathrm{out}}^{T})^{-1}\) and the real recurrent weights \(W_{\mathrm{rec}}=\widetilde{W}_{\mathrm{rec}}W_{\mathrm{out}}\) accordingly. The noise magnitude added to the data samples is decreased exponentially over the entire training period. Figure 2 illustrates how the network gradually learns the score function during training. We refer the readers to Appendix D for more details. Note that this method of using an auxiliary neural network is much more computationally efficient than directly matching the score of the stationary distribution of the dynamics (11), for which the score function, which involves the matrix inverse \((\tilde{\sigma}\tilde{\sigma}^{T})^{-1}\), would need to be recomputed at each optimization step. Furthermore, since the entire training procedure is equivalent to training a feedforward network with one hidden layer, it sidesteps the often challenging temporal computations associated with the Backpropagation Through Time (BPTT) algorithm used to train deterministic RNNs. Although we assumed that the divergence-free field \(G=0\) for the purpose of theoretical analysis, in practice, fast sampling is a major concern for implementing sampling-based inference models of the brain [25; 1; 19; 38], and reversible stochastic dynamics, i.e. dynamics with \(G=0\), are known for their slow sampling speed. Fortunately, there is a straightforward way to extend our framework and train RSNs to implement irreversible dynamics with a non-zero divergence-free field \(G\). This results in improvements in sampling speed (see Appendix E for details). ## 4 Experimental results In this section, we present the results from two tasks. First, we train the recurrent neural network to learn and generate samples from a 1-D bimodal Gaussian mixture distribution and a 2-D mixture of heavy-tailed Laplace distributions. For the second task, we explore whether a reservoir-sampler network with firing rate dynamics is able to sample from the distribution of internal representations computed from PCA filtering of image inputs. All dynamics are simulated with the Euler-Maruyama method. See Appendix H for the hyperparameters used and other training details. ### Learning mixture distributions We consider a 1-D Gaussian mixture distribution whose density function is the average of two Gaussian distributions centered at \(\pm 1\), i.e. \(p_{\mathrm{data}}(x)=\frac{1}{2}(\mathcal{N}(-1,0.25)+\mathcal{N}(1,0.25))\). We artificially generate 10000 data points from this distribution and minimize the denoising score-matching loss: \[\mathcal{L}(W_{\mathrm{out}},\widetilde{W}_{\mathrm{rec}},I)=\mathbb{E}_{p_{\mathrm{data}}(x)}\mathbb{E}_{\tilde{x}\sim\mathcal{N}(x,\sigma^{2})}\left[\left\|2\alpha(W_{\mathrm{out}}\phi(\widetilde{W}_{\mathrm{rec}}\tilde{x}+I)-\tilde{x})+\frac{\tilde{x}-x}{\sigma^{2}}\right\|^{2}\right]. 
\tag{15}\] We use \(\phi(\cdot)=\mathrm{ReLU}(\cdot)\) and set \(\alpha=1/2\). See Figure 2 for the numerical results. It is worth noting that if we use \(\tanh\) as the transfer function, the sampler-only networks are able to learn the score function perfectly, as the score function of the Gaussian mixture distribution we considered is exactly spanned by \(f_{1}(x)=x\) and \(f_{2}(x)=\tanh(2x)\). See Appendix F.1 for an example where sampler-only networks fail to learn the score function even if the hyperbolic tangent nonlinearity is used. Next, we show that the model can learn mixtures of heavy-tailed distributions that are evident in natural image statistics and the neural representations in the primary visual cortex [47; 39]. We trained the reservoir-sampler network with FR dynamics (RS-FR) on 20000 sampled data points from a 2-D Laplace mixture distribution, whose density is given by \(p_{\mathrm{data}}(\mathbf{x})=\frac{1}{2}\left(\mathrm{Lap}\left(\mathbf{0},\begin{bmatrix}1&0.9\\ 0.9&1\end{bmatrix}\right)+\mathrm{Lap}\left(\mathbf{0},\begin{bmatrix}1&-0.9\\ -0.9&1\end{bmatrix}\right)\right)\), where \(\mathrm{Lap}\) denotes the multivariate Laplace distribution. The model successfully learned the probability density of the mixture distribution (Figure 3 left vs. middle), and captured the heavy tails of the distribution as measured by the kurtosis (Figure 3 right). ### MNIST generation task We also tested the sampling ability of our model using the MNIST dataset [32], which contains 60,000 handwritten digits from 0 to 9. We projected MNIST images to a 300-D latent space spanned by the first 300 principal components and trained the weights of the recurrent neural network as described in Section 3.4 so that the RNN can sample from the latent distribution. To test the model, we generated images by applying inverse PCA projection to samples generated by the model. The schematics and generated images are shown in Figure 4. Note that since we are essentially using a shallow network to match the score, we should not expect performance comparable to generative models that use deep ANNs. Our main goal is to illustrate that the reservoir-sampler network using firing rate dynamics is qualitatively more expressive than other traditional neural sampling models (Appendix G). Finally, we also note that it is highly nontrivial for recurrent neural dynamics to complete such a generative task, and to the best of our knowledge, no previous work has achieved such results. Figure 2: **Bimodal distribution sampling results.** The 3 tractable cases shown are Sampler-only (SO) networks with both synaptic current (SC) dynamics and firing rate (FR) dynamics and Reservoir-sampler (RS) networks with FR dynamics, which are named SO-SC, SO-FR, and RS-FR respectively. a-c) The score function learned compared to the true score function (orange curve) as we gradually decrease the noise level (the darker the line, the lower the noise level). We see that RS-FR is capable of perfectly fitting the score function, while SO-SC and SO-FR are only able to fit the score function with piecewise linear functions when using the ReLU transfer function. d-f) Histogram of sampled points, and the (scaled) density function of the target distribution. Again the reservoir-sampler network is able to generate samples whose distribution matches the target distribution, while the sampler-only network is not able to do so due to the incorrectly matched score function. 
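To make the weight-learning recipe of Section 3.4 concrete, here is a minimal PyTorch sketch of it: train the auxiliary one-hidden-layer network of equation (14) with the annealed denoising score-matching loss of equation (15) on toy data, then transfer its weights into the reservoir-sampler network via \(W_{\rm rec}=\widetilde{W}_{\rm rec}W_{\rm out}\) and \(\sigma=W_{\rm out}^{T}(W_{\rm out}W_{\rm out}^{T})^{-1}\). The data, reservoir size, noise schedule, and optimizer settings are illustrative stand-ins, not the paper's settings (see Appendix H for those).

```python
import torch

torch.manual_seed(0)
m, n, alpha = 2, 512, 0.5                      # data dim, reservoir size, decay rate
W_out = torch.nn.Parameter(0.1 * torch.randn(m, n))
W_rec_tilde = torch.nn.Parameter(torch.randn(n, m))
I_bias = torch.nn.Parameter(torch.zeros(n))
opt = torch.optim.Adam([W_out, W_rec_tilde, I_bias], lr=1e-3)

def score_net(x):
    # 2*alpha*(W_out @ phi(W_rec_tilde @ x + I) - x), cf. equation (14)
    return 2 * alpha * (torch.relu(x @ W_rec_tilde.T + I_bias) @ W_out.T - x)

data = torch.randn(10_000, m)                  # stand-in for real training samples

for noise in [1.0, 0.5, 0.25, 0.1]:            # decreasing noise levels
    for _ in range(2_000):
        x = data[torch.randint(len(data), (128,))]
        eps = torch.randn_like(x)
        x_tilde = x + noise * eps
        # DSM: match the score of q(x_tilde | x), which is -(x_tilde - x)/noise^2
        loss = ((score_net(x_tilde) + eps / noise) ** 2).sum(dim=1).mean()
        opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                          # weight transfer (Section 3.4)
    W_rec = W_rec_tilde @ W_out                             # n x n, rank m
    sigma = W_out.T @ torch.linalg.inv(W_out @ W_out.T)     # diffusion coefficient
```

Simulating the reservoir dynamics of equation (8) with the transferred `W_rec` and `sigma` (again via Euler-Maruyama) then yields trajectories whose readout \(W_{\rm out}\mathbf{r}\) samples from the learned distribution, since \(\tilde{\sigma}=W_{\rm out}\sigma=\mathbf{I}\) by construction.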
## 5 Discussion From the perspective of functional analysis and SDE theory, we prove that, under the strict orthogonality constraint, it is essential for neural circuits to have a drift term with the expressivity to approximate complex score functions, even though the dynamics do not have to exactly implement the Langevin dynamics. We investigated whether a population of neurons can sample from an arbitrary distribution directly and proved that the drift term of the synaptic current dynamics is confined to a finite-dimensional function space. Although the drift term of the firing rate dynamics can approximate functions spanned by different basis functions, the number of basis functions is limited. Figure 4: **Learning to sample the MNIST image distribution.** a) An MNIST image is projected to a 300-D latent space (orange circle) spanned by the first 300 principal components using PCA. The sampler learns to sample from the distribution of this latent space and generates images by applying inverse PCA projection to these samples. The diagram illustrates the RS-FR model. b) The loss curves for 3 different models during training. Every 100 epochs the noise level added to the training samples is reduced (Appendix H), and the loss jumps to a higher value because the score-matching loss magnitude depends on the noise level. As shown in the inset, the loss of the RS-FR model decreases throughout the training process when using the lowest fixed noise level. Meanwhile, the losses of the other two models remain unchanged. c-f) The images generated for the 3 models compared to the digit images generated from latent training samples. Figure 3: **RS-FR model learning a mixture of 2-D Laplace distributions.** Left to right: sample density from the true distribution (brighter color denotes higher density); sample density from the learned distribution; marginalized kurtosis of each dimension from the true and learned distribution. To address this problem, we proposed the reservoir-sampler network for firing rate dynamics. We found that with learnable diffusion coefficients and a sufficiently large reservoir of hidden neurons, the output neurons described using the firing rate dynamics are able to sample from arbitrary data distributions. Our results partly answer the question of what architecture recurrent neural circuits need so that they are able to sample from complex data distributions. Our analysis and empirical experiments affirm the universality of stochastic RNNs. However, this universality comes with limitations. First, we have only analytically shown the existence of weights that enable sampling from complex data distributions; there is no guarantee that one will find such weights through backpropagation. Additionally, in order to obtain the tunable diffusion coefficient during training, a matrix inverse is needed (likewise in the FORCE algorithm [50]). Further, how biological circuits compute the required gradients and implement the denoising score-matching algorithm remains an open question. Moreover, in our current formulation, we are only able to approximate the score function with a shallow network with one hidden layer. Our preliminary experiments show that one-hidden-layer RSNs cannot readily approximate high-dimensional heavy-tailed distributions (_e.g._, those of overcomplete sparse coding representations [39]). It is unclear if this is because of an insufficient number of reservoir neurons. 
Due to GPU memory limitations, we did not try larger numbers of reservoir neurons. Our model differs from recent diffusion models [26; 49], which can be seen as time-inhomogeneous SDEs, and has the advantage of being able to run indefinitely in time, making it a suitable candidate for modeling spontaneous activity in the brain. Moreover, while Song and Ermon [48] optimize the denoising score matching loss at different noise levels jointly, we adopt a sequential learning procedure by gradually decreasing the noise level of the training samples. This procedure is more aligned with the developmental processes involved in forming visual representations in the infant brain, where the distribution of visual representations is thought to be noisier (less linearly separable) initially [4]. Our study therefore serves as a starting point for building a mechanistic model of probabilistic computation in the brain that has generative power similar to current AI generative models. Biologically, there are multiple ways to interpret the reservoir neurons and sampler neurons in an RSN. First, reservoir and sampler neurons could be seen as different types of neurons in a single brain area, where the dynamics of sampler neurons converge quickly to the equilibrium point. Second, even more straightforwardly, the sampler neurons could be seen as a separate set of neurons located downstream of the reservoir. We also wish to suggest an alternative interpretation. Biological neural networks are known to have non-synaptic signaling networks (e.g. pervasive neuropeptidergic signaling [3], extensive aminergic signaling [6], or potential extrasynaptic signaling [53]) in addition to the synaptic connectivity that is typically modeled (i.e., via connection weights). We suggest that it is possible that the computations of the reservoir may be implemented by non-synaptic networks, and then "read out" by certain neurons' spikes. This possibility is supported by the recent finding of a low correlation between functional activity and the synaptic ("structural") connectome in C. elegans [53]. Moreover, if we only take the structural connectome into consideration, then the resulting model of C. elegans would correspond to the sampler-only network, which, as our theory predicts, will have limited sampling capability. ## 6 Conclusion In this paper, we explore how a recurrent neural circuit can sample from complex probability distributions, an important functional motif in probabilistic brain models. We start from the basic assumption that the recurrent neural circuit can be described as an SDE. We show that a recurrently connected neural population by itself has a limited capability to implement stochastic dynamics that can sample from complex data distributions. In contrast, we prove that the firing rate dynamics of the output units of a recurrent neural circuit (a reservoir-sampler network) can sample from a richer range of probability distributions. These theoretical results, together with our preliminary experimental results, provide a sufficient condition for neural sampling-based models to exhibit universal sampling capability. Our results therefore provide a foundation for the next generation of sampling-based probabilistic brain models that can explain a wider range of cognitive behaviors. Acknowledgements We are thankful to Profs. Hong Qian, Bamdad Hosseini, and Edgar Walker for their guidance and insight on this project. We gratefully acknowledge the support of the grant NIH BRAIN R01 1RF1DA055669.
2306.03779
Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex
One of the most impactful findings in computational neuroscience over the past decade is that the object recognition accuracy of deep neural networks (DNNs) correlates with their ability to predict neural responses to natural images in the inferotemporal (IT) cortex. This discovery supported the long-held theory that object recognition is a core objective of the visual cortex, and suggested that more accurate DNNs would serve as better models of IT neuron responses to images. Since then, deep learning has undergone a revolution of scale: billion parameter-scale DNNs trained on billions of images are rivaling or outperforming humans at visual tasks including object recognition. Have today's DNNs become more accurate at predicting IT neuron responses to images as they have grown more accurate at object recognition? Surprisingly, across three independent experiments, we find this is not the case. DNNs have become progressively worse models of IT as their accuracy has increased on ImageNet. To understand why DNNs experience this trade-off and evaluate if they are still an appropriate paradigm for modeling the visual system, we turn to recordings of IT that capture spatially resolved maps of neuronal activity elicited by natural images. These neuronal activity maps reveal that DNNs trained on ImageNet learn to rely on different visual features than those encoded by IT and that this problem worsens as their accuracy increases. We successfully resolved this issue with the neural harmonizer, a plug-and-play training routine for DNNs that aligns their learned representations with humans. Our results suggest that harmonized DNNs break the trade-off between ImageNet accuracy and neural prediction accuracy that assails current DNNs and offer a path to more accurate models of biological vision.
Drew Linsley, Ivan F. Rodriguez, Thomas Fel, Michael Arcaro, Saloni Sharma, Margaret Livingstone, Thomas Serre
2023-06-06T15:34:45Z
http://arxiv.org/abs/2306.03779v1
Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex ###### Abstract One of the most impactful findings in computational neuroscience over the past decade is that the object recognition accuracy of deep neural networks (DNNs) correlates with their ability to predict neural responses to natural images in the inferotemporal (IT) cortex [1; 2]. This discovery supported the long-held theory that object recognition is a core objective of the visual cortex, and suggested that more accurate DNNs would serve as better models of IT neuron responses to images [3; 4; 5]. Since then, deep learning has undergone a revolution of scale: billion parameter-scale DNNs trained on billions of images are rivaling or outperforming humans at visual tasks including object recognition. Have today's DNNs become more accurate at predicting IT neuron responses to images as they have grown more accurate at object recognition? Surprisingly, across three independent experiments, we find that this is not the case. DNNs have become progressively worse models of IT as their accuracy has increased on ImageNet. To understand why DNNs experience this trade-off and evaluate if they are still an appropriate paradigm for modeling the visual system, we turn to recordings of IT that capture spatially resolved maps of neuronal activity elicited by natural images [6]. These neuronal activity maps reveal that DNNs trained on ImageNet learn to rely on different visual features than those encoded by IT and that this problem worsens as their accuracy increases. We successfully resolved this issue with the _neural harmonizer_, a plug-and-play training routine for DNNs that aligns their learned representations with humans [7]. Our results suggest that harmonized DNNs break the trade-off between ImageNet accuracy and neural prediction accuracy that assails current DNNs and offer a path to more accurate models of biological vision. Our work indicates that the standard approach for modeling IT with task-optimized DNNs needs revision, and other biological constraints, including human psychophysics data, are needed to accurately reverse-engineer the visual cortex. ## 1 Introduction The release of AlexNet [8] was significant not only for shifting the paradigm of computer vision into the era of deep learning; it also heralded a new approach to "systems identification": approximating the transformations used by neurons in the visual cortex to build robust and invariant object representations. Over the past decade, deep neural networks (DNNs) like AlexNet, which are trained for object recognition on ImageNet [9], have been found to contain units that fit neural activity in inferior temporal (IT) cortex of non-human primates significantly better than classic hand-tuned models from computational neuroscience [1]. It was later found that DNN fits to neural data improved as these models grew more accurate at object recognition and began to rival humans on the task [10; 11]. This surprising similarity between such task-optimized DNNs and brains supported the extant theory that object recognition is a core principle shaping the organization of the visual cortex [12], and raised the question of how important the many biological details amassed by visual neuroscience actually are for predicting neural responses in visual cortex. 
In the years since those findings, deep learning has undergone a revolution of scale, and current DNNs, which rival or exceed human accuracy in vision and language, are significantly larger and trained with orders of magnitude more data than ever before [13]. Does the object recognition accuracy of a DNN still correlate with its ability to predict IT responses to natural objects? To answer this question, we turned to Brain-Score [3], the standard approach for benchmarking the accuracy of models at predicting neural activity in the visual cortex of non-human primates. In brief, the Brain-Score evaluation method involves linearly mapping model unit responses to neural activity and then evaluating model unit predictions on held-out images. With this approach, we found a consistent trend across three different IT datasets hosted on the official Brain-Score website (Brain-score.org), which reflect neural responses to gray-scale versions of realistic rendered objects and natural images [14; 15; 16]. As DNNs have improved on ImageNet [9] over recent years, they have become progressively less accurate at predicting IT neuron responses to images (Fig. 1) 1 Footnote 1: In [11] it was noted that while there was an overall correlation of 0.92 between DNN accuracy on ImageNet and predicting IT responses, the correlation weakened for state-of-the-art models. In this study, we investigate two potential explanations for why task-optimized DNNs are turning into poor models of IT. (_i_) The internet data diets of DNNs and the routines used to train them to high accuracy on ImageNet lead them to learn the wrong visual features for explaining primate vision and IT responses to objects [17; 18; 7; 19]. (_ii_) Newer DNNs have become less brain-like, and this problem has been magnified as DNNs have grown larger and ultimately deeper than the visual cortex [20]. Contributions. To understand if the trade-off between ImageNet accuracy and IT predictions that we observed (Fig. 1) is due to the training routines and data diets of DNNs or their architectures, we turned to a new set of experimental recordings of IT neuronal responses to high-resolution color images [6]. The experimental images were significantly closer to the statistical distribution of images in ImageNet (Fig. 1), unlike prior studies of IT [14]. The recordings also provided a coarse estimate of which image features drove neuronal activity (Fig. 2), which helped characterize DNN errors in explaining neural responses. As we will show, compared to the recordings used in the official Brain-Score benchmark, such spatially-resolved neural data provide far greater insight into why DNNs are becoming worse models of IT. We adopted the Brain-Score evaluation method to measure the prediction accuracy of 135 DNNs on recordings from medial (ML) and posterior (PL) lateral IT in two different Monkeys. To summarize our findings: Figure 1: **Deep neural networks (DNNs) are becoming progressively worse models of inferior temporal (IT) cortex as they grow more accurate on ImageNet.** Experimental data shown here is taken from Brain-score.org, and each experiment utilized different stimuli. The 104 dots in each panel depict the ImageNet accuracy and neural prediction accuracy of DNNs, and the grey-shaded region denotes the Pareto front governing the trade-off between these variables. * We observed the same trade-off we found on Brain-Score.org data (Fig. 1) in each of our recordings: DNNs are becoming less accurate at predicting IT responses as they improve on ImageNet (Fig. 
3). * DNNs trained on ImageNet learn different features than those encoded by IT (Fig. 4), and this mismatch is not helped by training on more internet data, using newer DNN architectures like Transformers, relying on self-supervision, or optimizing for adversarial robustness. * We successfully broke this trade-off by training DNNs with the _neural harmonizer_ [7] and aligning the representations they learn for object recognition with those that are diagnostic for humans. * We further demonstrate that harmonized DNNs (hDNNs) are not only significantly better at explaining IT responses than any other DNN available, but they also generate interpretable hypotheses on the features driving IT neuron responses. ## 2 Methods Neural recordings. We leveraged recordings of IT neuronal responses from two monkeys [6], which were designed to reveal feature preferences of putative face-selective neurons. The recorded neurons also exhibited selectivity for non-face object features [6]. Recordings were made using chronically implanted 32-channel multi-electrode arrays within the fMRI-defined middle lateral (ML) and posterior lateral (PL) face patches of one monkey (Monkey 1), and ML of another monkey (Monkey 2, Fig. 2a). The activating regions in these areas were mapped (1-3\({}^{\circ}\)), then each image was shown to the animals after they were cued to fixate at a specific position in space (Fig. 2b). The same images were shown multiple times while the monkeys fixated at a red dot in the center of the screen, which made it possible to derive spatial activity maps of neuronal responses (Fig. 2c). Fixation positions fell in a 16 \(\times\) 16 grid for Monkey 1 and a 7 \(\times\) 7 grid for Monkey 2, and the average neuronal activity at each position was taken to generate spatial activity maps. A total of 14 images were shown to Monkey 1, and a different set of 14 images were shown to Monkey 2. Figure 2: **IT recordings that reveal spatial maps of neuronal responses to complex natural images offer unprecedented insights into their feature selectivity [6].** **(a)** Neurons in posterior (PL) and/or medial (ML) lateral IT in two animals were localized using functional magnetic resonance imaging (fMRI), and neural responses to images were recorded using chronically implanted 32-channel multi-electrode arrays. **(b)** The monkeys were rapidly shown each image multiple times for 200ms each time (Monkey 1: \(n=256\), Monkey 2: \(n=49\)). Images were positioned differently each time to measure neural responses to every part of the image. **(c)** This procedure yielded spatially resolved maps of neural activity for an image, revealing the relative importance of different features for the recorded neurons. **(d)** DNNs were fit to these recordings following the Brain-Score evaluation method [11], in which partial least squares decoders were used to find the units in a DNN that provided the best match for neuronal responses to images. The recordings for Monkey 1 resulted in responses from 32 neurons in ML and 31 neurons in PL. For Monkey 2, we obtained responses from 32 neurons (see Appendix §2 for more details). Following [6], we binned neuronal responses every 40ms of the recording, from 50ms to 250ms. Within each bin, we calculated the noise ceiling for every neuron, which represents the maximum correlation achievable for that neuron within that time interval. Noise ceilings for each recording were similar to those reported for standard IT Brain-Score datasets (Appendix §2). 
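As a rough illustration of how such spatial activity maps can be assembled from the fixation-grid protocol described above, consider the sketch below. The array names and shapes are hypothetical stand-ins, and the actual preprocessing pipeline in [6] may differ.

```python
import numpy as np

def spatial_activity_map(responses, positions, grid=16):
    """Average one neuron's responses over repeated presentations at each
    fixation-grid position to form a spatial activity map.

    responses: (n_trials,) response of the neuron on each trial, e.g. a spike
               count within one 40 ms bin
    positions: (n_trials, 2) integer grid coordinates of the image on each trial
    """
    total = np.zeros((grid, grid))
    count = np.zeros((grid, grid))
    for r, (i, j) in zip(responses, positions):
        total[i, j] += r
        count[i, j] += 1
    return total / np.maximum(count, 1)  # mean response per grid location

# e.g., a 16 x 16 map for Monkey 1 or a 7 x 7 map for Monkey 2:
# activity_map = spatial_activity_map(responses, positions, grid=16)
```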
In the main text, we present modeling results from the time bins that exhibited the highest average noise ceiling for each Monkey and recording site. Additional results from other time bins, which are consistent with these findings, can be found in Appendix §2. DNNs. We investigated the neural fits of 135 different DNNs representing a variety of approaches used in computer vision today: 62 convolutional neural networks trained on ImageNet [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47] (CNNs), 23 DNNs trained on other datasets in addition to ImageNet (which we refer to as "DNN extra data") [32, 36, 48], 25 vision transformers [49, 50, 51, 52, 53, 54] (ViTs), 10 DNNs trained with self-supervision [55, 56], and 15 DNNs trained to be robust to noise or adversarial examples [57, 58]. Each model was implemented in PyTorch with the TIMM toolbox ([https://github.com/huggingface/pytorch-image-models](https://github.com/huggingface/pytorch-image-models)) and pre-trained weights. Inference was executed on one NVIDIA TITAN X GPU. Additional model details, including the licenses used for each, are detailed in Appendix §3. Neural Harmonizer. It was recently found that DNN representations are becoming less aligned with human perception as they evolve and improve on ImageNet [59, 7, 60]. A partial solution to this problem is the _neural harmonizer_, a training routine that can be combined with any DNN to align its representations with humans without sacrificing performance. Here, we test the hypothesis that aligning DNNs with human visual representations can likewise significantly improve their accuracy in predicting neural responses to images. Training DNNs with the _neural harmonizer_ on ImageNet involves a loss in addition to the standard cross-entropy for object recognition. This extra loss forces a model's gradients to appear as similar as possible to feature importance maps derived from human behavioral experiments (see [7] for details). To implement this loss, let \(\mathcal{P}_{i}(\cdot)\) be a function that computes level \(i\) of an \(N\)-level multi-scale Gaussian pyramid of a feature importance map, with \(i\in\{1,...,N\}\). During training, we seek to minimize \(\sum_{i}^{N}||\mathcal{P}_{i}(\mathbf{g}(\mathbf{f}_{\theta},\mathbf{x}))-\mathcal{P}_{i}(\mathbf{\phi})||^{2}\) in order to align feature importance maps between humans and DNNs at every scale of the pyramid. Before they are compared, feature importance maps from humans and DNNs are normalized and rectified using \(\mathbf{z}(\cdot)\), a function that transforms a feature importance map \(\mathbf{\phi}\) from either source to have 0 mean and unit standard deviation. This procedure yields the complete neural harmonization loss: \[\mathcal{L}_{\text{Harmonization}}= \lambda_{1}\sum_{i}^{N}||(\mathbf{z}\circ\mathcal{P}_{i}\circ\mathbf{g}(\mathbf{f}_{\theta},\mathbf{x}))^{+}-(\mathbf{z}\circ\mathcal{P}_{i}(\mathbf{\phi}))^{+}||_{2} \tag{1}\] \[+\mathcal{L}_{CCE}(\mathbf{f}_{\theta},\mathbf{x},\mathbf{y})+\lambda_{2}\sum_{i}\theta_{i}^{2} \tag{2}\] Following the original _neural harmonizer_ implementation [7] and training recipe, we trained six DNNs on ImageNet and human feature importance maps from the _ClickMe_ game [61]: VGG16 [25], LeViT [62], ResNetV2-50 [63], EfficientNet_b0 [22], ConvNext [32], and MaxViT [64]. 
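The following is a rough PyTorch sketch of the harmonization loss in equations (1)-(2). It is not the authors' TensorFlow implementation: the feature importance function \(\mathbf{g}\) is approximated here by the input gradient of the true-class logit, the Gaussian pyramid by repeated average pooling, the per-level norm by a mean squared error, and the \(\lambda\) values are placeholders.

```python
import torch
import torch.nn.functional as F

def z_norm(t, eps=1e-6):
    # The z(.) operator: normalize each map to zero mean and unit std.
    mu = t.mean(dim=(1, 2, 3), keepdim=True)
    sd = t.std(dim=(1, 2, 3), keepdim=True)
    return (t - mu) / (sd + eps)

def pyramid(t, levels=5):
    # Stand-in for an N-level Gaussian pyramid via repeated downsampling.
    maps = [t]
    for _ in range(levels - 1):
        t = F.avg_pool2d(t, 2)
        maps.append(t)
    return maps

def harmonization_loss(model, x, y, phi_human, lam1=1.0, lam2=1e-4):
    """x: (B,C,H,W) images; y: (B,) labels; phi_human: (B,1,H,W) ClickMe maps."""
    x = x.requires_grad_(True)
    logits = model(x)
    cce = F.cross_entropy(logits, y)
    # g(f_theta, x): gradient of the true-class logit w.r.t. the input image.
    sal = torch.autograd.grad(logits.gather(1, y[:, None]).sum(), x,
                              create_graph=True)[0].abs().amax(1, keepdim=True)
    # Rectified, z-normalized pyramid alignment, cf. equation (1).
    align = sum(((F.relu(z_norm(a)) - F.relu(z_norm(b))) ** 2).mean()
                for a, b in zip(pyramid(sal), pyramid(phi_human)))
    l2 = sum((p ** 2).sum() for p in model.parameters())  # cf. equation (2)
    return cce + lam1 * align + lam2 * l2
```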
Each model was trained with Tensorflow 2.0 on 8 V4 TPU cores with all of the images in the ImageNet training set, and _ClickMe_ human feature importance maps for the 200,000 images that were annotated. Images and feature importance maps were augmented with mixup [65], random left-right flips, and random crops during training. Only the object recognition loss was computed for images without human feature importance maps. Model weights were optimized with stochastic gradient descent and momentum, batches of 512 images, label smoothing [66], a learning rate of \(0.3\), and a learning rate schedule that began with a 5-epoch warm-up period followed by a cosine decay over 90 epochs at steps 30, 50 and 80. Neural fitting. We evaluated the neural fit of each model by computing their fit separately for each IT recording, using the Brain-Score evaluation method [11]. This method involved fitting image-evoked activities from a layer of a deep neural network (DNN) to the corresponding neural responses using the partial least squares regression algorithm from Scikit-Learn. To implement the Brain-Score evaluation method on the spatially-resolved recordings we investigate here, we first pass the image through the network and split the resulting feature map into the same number of patches as there were fixation locations in the image shown to the animal. Each image patch captured the receptive field of recorded neurons, and in total, there were \(4,046\) image patches for ML and \(4,046\) image patches for PL in Monkey 1, and \(1,134\) image patches for ML in Monkey 2. We measured DNN neuronal fits as the Spearman correlations between model predictions and true neural responses for each patch of an image held out of training, divided by each neuron's noise ceiling. We then stored the median Spearman correlation over neurons and repeated the training/testing procedure to get the mean correlation across all images viewed by a monkey. Separate fitting procedures were performed for every layer of activities in a model, and we report a DNN's Brain-Score as its best possible fit across layers. ## 3 Results Task Optimization is insufficient for reverse-engineering IT. Ever since it was found that DNNs optimized for object recognition produce accurate predictions of IT neural responses to images, it was suggested that prediction accuracy would continue to improve alongside DNN performance on ImageNet [10; 11; 12]. Is this the case with today's DNNs that rival or exceed the performance of human object recognition? To answer this question, we leverage recordings of neuronal responses to high-resolution natural images in medial (ML) and posterior lateral (PL) IT (see Methods and Fig. 2). These images fall within the same statistical distribution as ImageNet images (Appendix §1), ensuring that our findings are not influenced by distributional shifts, a well-known problem faced by computer vision models [67]. Image-evoked neural responses were spatially mapped, enabling insights into the visual features driving the responses of IT neurons and DNNs. Moreover, while these regions were localized according to their selectivity to face stimuli, they were also noted to respond to non-face stimuli [6]. Across 135 different DNNs, representing the variety of approaches used today in computer vision, we found that DNNs pretrained on ImageNet have become progressively less accurate at predicting ML and PL responses in two separate Monkeys (Fig. 3). 
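A condensed sketch of the neural-fitting procedure described above is given below: PLS regression from DNN patch activations to neural responses, scored by noise-ceiling-normalized Spearman correlations on held-out patches. The cross-validation scheme and the number of PLS components are illustrative guesses rather than the exact Brain-Score settings.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

def neural_predictivity(features, responses, noise_ceiling, n_components=25):
    """features: (n_patches, n_units) DNN activations per image patch;
    responses: (n_patches, n_neurons) recorded responses;
    noise_ceiling: (n_neurons,) per-neuron ceiling."""
    scores = []
    for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(features):
        pls = PLSRegression(n_components=n_components)
        pls.fit(features[train], responses[train])
        pred = pls.predict(features[test])
        # Noise-ceiling-normalized Spearman correlation for each neuron,
        # summarized by the median over neurons.
        rho = np.array([spearmanr(pred[:, i], responses[test][:, i])[0]
                        for i in range(responses.shape[1])])
        scores.append(np.median(rho / noise_ceiling))
    return float(np.mean(scores))
```

Running this for every candidate layer of a DNN and keeping the best score reproduces the layer-selection step described above.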
For instance, ConvNext tiny, which achieved 74.3% accuracy on ImageNet, is as accurate in predicting neural responses to images as the ResNetv2-152x4, which reached 84.9% accuracy on ImageNet. Moreover, training routines that have been suggested to be more biologically plausible or yield representations that are closer to biological vision, such as training with self-supervision [68] or for adversarial robustness [69], made no difference. All DNNs trained on internet data faced a Pareto front that bounded their accuracy in predicting IT responses as a function of their ImageNet accuracy. Figure 3: **DNNs trained on ImageNet face a trade-off between achieving high object recognition accuracy and predicting responses to natural images in IT.** We measured the accuracy of 135 DNNs at predicting image-evoked responses from neurons in posterior lateral (PL) and medial-lateral (ML) areas of IT [6] by computing the neural predictivity of each model with the Brain-Score evaluation method [11]. DNN neural predictivity is progressively worsening as models improve on ImageNet. This problem can be partially fixed by training DNNs with the _neural harmonizer_, which aligns their learned object representations with humans. Error bars denote 95% bootstrapped confidence intervals. DNNs need biologically-aligned training routines. There are at least two potential reasons why ImageNet-trained DNNs face a trade-off between object recognition and neural prediction accuracy: 
Although the neurons in ML and PL were located based on their selectivity to faces, the spatial maps of their responses revealed a much more complex response profile. IT neurons in both animals were strongly driven by faces, non-face objects, and contextual cues associated with faces [6]. In contrast, the best-fitting units of DNNs with state-of-the-art accuracy on ImageNet, like the ResNetv2-152x4 Figure 4: **DNNs optimized for object recognition on ImageNet rely on different features than those encoded by neurons in primate inferior temporal (IT) cortex..** The activity of PL IT neurons is plotted next to the predicted activity of a model representing each class of DNNs: harmonized DNNs (hDNNs), visual transformers, self-supervised DNNs, convolutional neural networks, DNNs trained on more data than ImageNet, and adversarially robust DNNs. (Fig. 4), responded strongly to background features. This problem was shared by DNNs trained on internet data but with routines that have been considered more biologically plausible, like self-supervised learning (e.g., SSL ResNet-18), or routines that yield perceptual robustness that is closer to humans, like adversarial training (e.g., Robust ResNetv2-50). hDNNs like the harmonized MaxViT, on the other hand, were, in general, reliably predictive of what parts of images elicited the most activity from IT neurons. hDNNs generate testable predictions for visual neuroscienceThe surprising effectiveness of DNNs pre-trained on object recognition for predicting IT responses to images was a significant finding in computational neuroscience, which raised the tantalizing possibility that these models could form the basis of neuroprosthetics, and reduce the need for animal experiments. However, a persistent problem with DNNs since their inception is their lack of interpretability, meaning that using DNNs for systems identification effectively swaps one black box (the visual system) with another (a DNN) without getting the field any closer to understanding how vision works. Recent strides in the field of explainable artificial intelligence (XAI) have begun to alleviate this problem. For instance, it is now possible to reliably locate and characterize the features in datasets, like ImageNet, that drive patterns of model behavior using concept recursive activation factorization (CRAFT [70]). When paired with accurate models of IT and recordings that reveal locations in images that elicit neuronal responses, CRAFT can generate testable predictions of _what_ those features are. We used CRAFT to extract features from harmonized and regular ResNet-50s trained on ImageNet that explain their predictions of neural activity in PL of Monkey 1 (Fig. 5). To do this, we decomposed a DNNs predicted activities for image patches into the most important and distinct features using non-negative matrix factorization, then scaned over every single image patch to find those which most strongly explained each discovered feature. While the harmonized ResNet-50 predicted that parts Figure 5: **Predictions of the visual features drive neuronal responses in PL of Monkey 1 from a harmonized ResNet-50 and a standard ResNet-50.** Image patches for each model depict what the important features are across all images shown to the animal. The relative importance of each feature for predicting the recordings is color-coded in the bar chart on the right. 
of faces, arms, and heads explained the majority of variance in IT, the less accurate and unharmonized ResNet-50 predicted that the background, oriented-edges, and a feature combining head, body, arm, and hand parts were most important for IT. Thus, hDNNs not only break the trade-off between ImageNet and neural prediction accuracy of ImageNet-trained DNNs, they can also generate testable hypotheses on the relative importance of different features for visual system neurons. ## 4 Related work The scaling laws of deep learningThe immense gains of DNNs in computer vision [13] and natural language processing [71] over recent years share a common theme: they have relied on unprecedented computational resources to train models with billions of parameters on internet-scale datasets. These benefits of scale have been studied over the past several years across a number of domains and often yield predictable improvements on standard internet benchmarks [72, 71]. Surprisingly, scale has also caused models to exhibit human-like behavior in psychophysics experiments [73, 74, 13, 57]. In other words, many aspects of primate behavior _are_ captured progressively better by DNNs as they scale-up and improve in accuracy on benchmarks like ImageNet. Scale is sufficient for modeling internet benchmarks, but not primate intelligenceIn parallel, there's a growing body of research suggesting that large-scale DNNs are becoming less aligned with human perception in multiple ways. For instance, the representations and visual decision-making behavior of DNNs are inconsistent with humans, and this problem has worsened as they have improved on ImageNet [7]. DNNs are also becoming less capable at predicting perceptual similarity ratings of humans, and they remain vulnerable [75] to so-called "adversarial attacks." Our work adds to this body of research, indicating that current scaling laws are incompatible with modeling primate behavior _and_ poorly suited for explaining the neural computations that shape it. Aligning DNNs with primate visionThere have been a number of methods proposed for aligning DNNs with primate vision beyond the _neural harmonizer_ that we leverage here [7]. It was found that co-training DNNs for object recognition and minimizing representational dissimilarity improved the adversarial robustness of a recurrent DNN [76]. Others have shown that similar forms of representational alignment improve few-shot learning and predictions of human semantic judgments in DNNs [77, 78]. The successes of these behavioral and representational alignment methods highlight the real limitations of current DNN training routines and data diets for generating artificially intelligent systems that act like biological ones. Biological learning routinesThere have been many efforts to align the object functions, learning rules, and data diets of DNNs more closely with biological systems. It was found that DNNs trained with self-supervision on ImageNet instead of supervised recognition achieve similar accuracy in predicting V1, V4, and IT responses to images [68]. DNNs trained with self-supervised learning on head-mounted camera video from infants instead of ImageNet were approximately as accurate at predicting neural data as DNNs trained on Imagenet but needed far less data to do this [68]. Others have found that adversarial training of DNNs yields models that are more accurate at predicting neural responses to images [79] and have similar tolerance to adversarial attacks as neurons in IT [69]. 
## 5 Discussion A revised approach for reverse engineering visual cortexBiological data is expensive to gather and often noisy. This makes the prospect of accurately modeling the responses of IT to complex stimuli especially daunting. DNNs optimized for object recognition represented a potential solution for this problem: pretraining on internet datasets alleviated training issues associated with small-scale neural data, and the architectures and training routines of accurate DNNs could offer insights into the circuits that underlie visual perception as well as the developmental principles that shape those circuits. Our findings suggest that while DNNs still hold great potential for predicting image-evoked responses from IT neurons, new training routines and data diets are necessary for continued improvement. The _neural harmonizer_ is a partial solution to the problems that DNNs face in modeling primate IT. The success of hDNNs in breaking the pareto-front faced by 135 different DNNs indicates that significant aspects of primate perception and the neural circuits that shape it cannot be divined from internet data alone. We believe that the _neural harmonizer_ and learning constraints provided by large-scale human behavior data is only a short-term solution to this problem, and that if DNNs were able to learn about the visual world more like primates do, they would be even better at predicting neural data. In support of this goal, we release our code and data at [https://serre-lab.github.io/neural_harmonizer](https://serre-lab.github.io/neural_harmonizer). Limitations.One limitation of our work is that we use the responses of neurons with face-selectivity to replicate and understand the trade-off between ImageNet accuracy and neural predictivity faced by DNNs. As faces (and even humans) are not a category in ImageNet, it is possible that this dataset [6] could bias our results in ways that are difficult to predict2. However, we find the same ImageNet accuracy and neural predictivity trade-off on this dataset (Fig. 3) as we did on the three recordings of object selective neurons hosted on the Brain-Score website (Fig. 1), indicating that DNNs face similar issues in predicting each set of recordings. Moreover, the neural harmonizer was successful in breaking this trade-off, even though it involves ImageNet training using human feature importance maps for objects. Finally, it was noted in the original paper where our recordings came from that the neurons, while localized based on their face-selectivity, responded to non-face stimuli as well [6]. In summary, our work reliably demonstrates that new paradigms are needed to advance DNNs as models of IT, and spatially-resolved recordings such as those used in this work support this goal. Footnote 2: Note, however, that there are many animal and human faces in Imagenet images, giving DNNs ample opportunity to learn about them. Another limitation of our work is that while we found that hDNNs are significantly more predictive of IT neuron responses than any other DNN, they still explain only 50-60% of the variance in neuronal activity. One straightforward way of doing better is by expanding the dataset of human feature importance maps we used for harmonization from annotations for approximately 200,000 images to the entire 1.2M ImageNet dataset. 
**Broader impacts.** By building better models of primate IT, we are taking significant steps toward reducing the reliance of visual neuroscience on animal models for experimentation, supporting the development of neuroprosthetic devices that resolve visual dysfunctions, and providing vision scientists with a richer understanding of how IT works. Our findings also highlight a main limitation of the scaling laws that are guiding progress throughout artificial intelligence today: scale is not sufficient for explaining biological intelligence. ## Acknowledgments and Disclosure of Funding This work was supported by ONR (N00014-19-1-2029), NSF (IIS-1912280 and EAR-1925481), DARPA (D19AC00015), NIH/NINDS (R21 NS 112743), and the ANR-3IA Artificial and Natural Intelligence Toulouse Institute (ANR-19-PI3A-0004). Additional support provided by the Carney Institute for Brain Science and the Center for Computation and Visualization (CCV). We acknowledge the Cloud TPU hardware resources that Google made available via the TensorFlow Research Cloud (TFRC) program as well as computing hardware supported by NIH Office of the Director grant S10OD025181.
2307.12575
Robust MIMO Detection With Imperfect CSI: A Neural Network Solution
In this paper, we investigate the design of statistically robust detectors for multi-input multi-output (MIMO) systems subject to imperfect channel state information (CSI). A robust maximum likelihood (ML) detection problem is formulated by taking into consideration the CSI uncertainties caused by both the channel estimation error and the channel variation. To address the challenging discrete optimization problem, we propose an efficient alternating direction method of multipliers (ADMM)-based algorithm, which only requires calculating closed-form solutions in each iteration. Furthermore, a robust detection network RADMMNet is constructed by unfolding the ADMM iterations and employing both model-driven and data-driven philosophies. Moreover, in order to relieve the computational burden, a low-complexity ADMM-based robust detector is developed using the Gaussian approximation, and the corresponding deep unfolding network LCRADMMNet is further established. On the other hand, we also provide a novel robust data-aided Kalman filter (RDAKF)-based channel tracking method, which can effectively refine the CSI accuracy and improve the performance of the proposed robust detectors. Simulation results validate the significant performance advantages of the proposed robust detection networks over the non-robust detectors with different CSI acquisition methods.
Yi Sun, Hong Shen, Wei Xu, Nan Hu, Chunming Zhao
2023-07-24T07:46:36Z
http://arxiv.org/abs/2307.12575v1
# Robust MIMO Detection With Imperfect CSI: A Neural Network Solution ###### Abstract In this paper, we investigate the design of statistically robust detectors for multi-input multi-output (MIMO) systems subject to imperfect channel state information (CSI). A robust maximum likelihood (ML) detection problem is formulated by taking into consideration the CSI uncertainties caused by both the channel estimation error and the channel variation. To address the challenging discrete optimization problem, we propose an efficient alternating direction method of multipliers (ADMM)-based algorithm, which only requires calculating closed-form solutions in each iteration. Furthermore, a robust detection network RADMMNet is constructed by unfolding the ADMM iterations and employing both model-driven and data-driven philosophies. Moreover, in order to relieve the computational burden, a low-complexity ADMM-based robust detector is developed using the Gaussian approximation, and the corresponding deep unfolding network LCRADMMNet is further established. On the other hand, we also provide a novel robust data-aided Kalman filter (RDAKF)-based channel tracking method, which can effectively refine the CSI accuracy and improve the performance of the proposed robust detectors. Simulation results validate the significant performance advantages of the proposed robust detection networks over the non-robust detectors with different CSI acquisition methods. Keywords: Imperfect channel state information (CSI), multi-input multi-output (MIMO), robust detector, alternating direction method of multipliers (ADMM), deep unfolding ## I Introduction Efficient multi-input multi-output (MIMO) detection algorithms are essential to unleashing the full potential of MIMO techniques and have been extensively studied for decades in the literature [2, 3, 4]. As is well known, the maximum likelihood (ML) detectors can achieve the optimal performance, but at the cost of exponential complexity. Linear detectors, including the zero forcing (ZF) and the linear minimum mean squared error (LMMSE) detectors, enjoy lower complexity but suffer from limited performance. To strike a tradeoff between the performance and the complexity, some sub-optimal detection algorithms can be applied, such as sphere decoding (SD) [5], expectation propagation (EP) [6], and approximate message passing (AMP) [7]. The alternating direction method of multipliers (ADMM) [8] has also been widely used to solve the detection problems from the perspective of convex optimization. In general, the detection performance highly hinges on the quality of the available channel state information (CSI), which is inevitably imperfect due to practical limitations. However, these classical MIMO detectors are developed based on the assumption of perfect CSI and are therefore not robust to CSI uncertainties. That is, when we apply these "mismatched" detectors by directly treating the acquired imperfect CSI as if it were perfect, the detection performance can be severely degraded [9]. To handle this, the CSI imperfection should be considered during the design of a MIMO detector to enhance its robustness. In this paper, we focus on statistically robust MIMO detection with a random CSI error model. Related works include [10, 11, 12, 13], where robust detectors based on the linear MMSE criterion and the widely-linear MMSE criterion were studied for MIMO systems without and with in-phase/quadrature-phase imbalance (IQI), respectively.
Alternatively, a linear minimum error probability (MEP) detector with a given length of pilots was derived to enable ultra-reliable and low latency communication (URLLC) under imperfect CSI [14]. Furthermore, in [15], the authors investigated a robust orthogonal approximate message passing (OAMP) algorithm as well as its corresponding network by exploiting the statistics of imperfect CSI, which can exhibit stronger robustness against CSI errors than the conventional OAMP algorithm [16]. The optimal robust ML detectors were discussed in [17, 18, 19, 20] by incorporating the statistical distribution of channel estimation errors. Although their performance advantages have been validated by both theoretical analysis and simulation results in [17], the exponential complexity impedes their practicality. On the other hand, in most of these works, the CSI errors were characterized by relatively simple models, e.g., the independent and identically distributed (i.i.d.) Gaussian model, which is inconsistent with practical systems exhibiting spatial correlation. Therefore, a more realistic CSI error model needs to be adopted for the robust MIMO detection design. Recently, the great advancement of deep learning (DL) has motivated research on DL-aided wireless communications [21, 22, 23, 24, 25]. In particular, DL has been successfully applied to physical-layer communication techniques, such as channel estimation [26, 27], MIMO precoding [28, 29], and, as our main interest, MIMO detection [30, 31, 32, 33, 34, 35], which have been thoroughly reviewed in [24]. The DL-based designs can be categorized into data-driven and model-driven types. Data-driven DL regards the mapping function from the input to the desired output as a black box and directly learns the function by training a network of a specific structure [36], whose performance highly relies on the training dataset. Consequently, the overfitting problem may occur with an insufficient dataset or a large number of trainable parameters. To relax the requirement on the training data, model-driven DL, which leverages expert knowledge, has emerged as an alternative [22]. One of the most popular model-driven DL approaches is "deep unfolding", which unfolds an existing iterative algorithm into network layers with trainable parameters [23]. Accordingly, the inherent mechanism of the original algorithm can be maintained and only a few parameters need to be learned. Deep unfolding based MIMO detection was first investigated in [30], where a network called DetNet was built by mimicking the projected gradient algorithm for the ML optimization problem. Inspired by this, a variety of MIMO detection networks, such as OAMPNet [16], ADMMNet [31], and LcgNet [32], were proposed based on the idea of deep unfolding. However, these networks are designed assuming the availability of perfect CSI at the receiver and can suffer from performance degradation with mismatched CSI. Motivated by the advantage of the deep unfolding technique, in this paper, we advocate two model-driven robust MIMO detection networks, called RADMMNet and LCRADMMNet, to combat CSI imperfection. Specifically, inspired by the attractive performance of ADMM-based MIMO detectors under perfect CSI [37, 38], we first apply the ADMM framework to derive the solutions to two robust ML detection problems, based on which RADMMNet and LCRADMMNet are further developed.
The performance superiority of the proposed networks over conventional non-robust MIMO detectors is validated via numerical simulations. The main contributions of this paper include: * Focusing on a spectrum-efficient frame structure with pilots only used in the first block, we model the corresponding CSI imperfection by including both the channel estimation error and the channel variation, based on which a robust ML detection problem is formulated. * In order to address the complicated robust ML detection problem, we develop an ADMM-based algorithm that only involves the calculations of closed-form expressions in each ADMM iteration, which, to the best of our knowledge, has not been investigated in previous works. Furthermore, by unfolding the ADMM iterations with non-trivial simplifications, a robust detection network, termed RADMMNet, is established to learn the layer-wise parameters instead of performing an exhaustive search. The network design is further enhanced by transferring intermediate variables and incorporating trainable convolutional layers, so as to enjoy the advantages of both the model-driven deep unfolding technique and the introduced data-driven structure. * By exploiting a Gaussian approximation for the CSI error, we obtain a simplified reformulation of the robust ML detection problem. Based on this formulation, an ADMM-based robust detector is proposed to approach the performance of our first detector with lower complexity, where closed-form solutions are also derived in each ADMM iteration. The corresponding deep unfolding network, termed LCRADMMNet, is further constructed in a similar way as RADMMNet. * A robust data-aided Kalman filter-based channel tracking method is established to mitigate the error propagation caused by channel aging, where the pre-estimated data symbols are utilized to improve the accuracy of the channel estimation, thereby enhancing the robust detection performance. The rest of this paper is organized as follows. In Section II, we describe the system model and formulate the robust ML detection problem. Section III and Section IV elaborate the design of a robust detector and a low-complexity robust detector, respectively, wherein their corresponding deep unfolding networks are also developed. Section V presents a robust data-aided Kalman filter based channel tracking method. Numerical results are provided in Section VI, followed by the conclusion drawn in Section VII. ## II System Model and Problem Formulation ### _System Model_ Consider an uplink MIMO system, where a base station (BS) equipped with \(M\) receive antennas serves \(K\) single-antenna users. The channel remains constant within a coherence block consisting of \(L\) symbols and varies across blocks. As shown in Fig. 1, we adopt the frame structure that contains \(N+1\) coherence blocks to save the pilot overhead. Specifically, in each frame, \(L_{P}\) pilot symbols followed by \(L_{D}\) data symbols are sent in the first block, while the remaining \(N\) blocks are only used for data transmission. Note that, compared to the conventional frame structure that inserts pilots in each coherence block [39], the pilot overhead reduces from \(\frac{L_{P}}{L}\) to \(\frac{L_{P}}{(N+1)L}\), which is thus much more spectrum-efficient. For a convenient reference, the main symbols used in this paper are summarized in Table I. To characterize the channel aging effect, we use the first-order Gauss-Markov process as in [40, 41].
Define \(\mathbf{\Lambda}\triangleq\mathrm{diag}\left(\rho_{1},\rho_{2},\cdots,\rho_{K}\right)\otimes\mathbf{I}_{M}\) and \(\bar{\mathbf{\Lambda}}\triangleq\mathrm{diag}\left(\sqrt{1-\rho_{1}^{2}},\sqrt{1-\rho_{2}^{2}},\cdots,\sqrt{1-\rho_{K}^{2}}\right)\otimes\mathbf{I}_{M}\), where \(\rho_{k}\) is the channel temporal correlation coefficient of the \(k\)-th user related to the channel aging speed. Then, the channel variation between two neighboring blocks can be represented as \[\mathbf{h}[n]=\mathbf{\Lambda}\mathbf{h}[n-1]+\bar{\mathbf{\Lambda}}\mathbf{w}[n],\quad n=2,3,\cdots,N+1, \tag{1}\] where \(\mathbf{h}[n]\in\mathbb{C}^{MK\times 1}\) denotes the vectorized form of the channel matrix \(\mathbf{H}[n]\in\mathbb{C}^{M\times K}\) at the \(n\)-th block and follows a Gaussian distribution with zero mean and covariance \(\mathbf{C_{h}}\), and \(\mathbf{w}[n]\in\mathbb{C}^{MK\times 1}\) denotes the zero-mean spatially correlated innovation process with covariance \(\mathbf{C_{h}}\). Fig. 1: Frame structure. Furthermore, it can be easily inferred that \[\mathbf{h}[n]=\mathbf{\Lambda}^{n-1}\mathbf{h}[1]+\tilde{\mathbf{w}}[n],\quad n=2,3,\cdots,N+1, \tag{2}\] where \(\mathbf{h}[1]\) represents the channel vector of the first block, and the accumulated channel error \(\tilde{\mathbf{w}}[n]\) follows a Gaussian distribution with zero mean and covariance \(\mathbf{C}_{\tilde{\mathbf{w}}}[n]\), which is given by \[\mathbf{C}_{\tilde{\mathbf{w}}}[n]=\sum_{n^{\prime}=1}^{n-1}\mathbf{\Lambda}^{n^{\prime}-1}\bar{\mathbf{\Lambda}}\mathbf{C_{h}}\bar{\mathbf{\Lambda}}\mathbf{\Lambda}^{n^{\prime}-1},\quad n=2,3,\cdots,N+1. \tag{3}\] For the first block, letting \(\mathbf{S}_{P}\in\mathbb{C}^{K\times L_{P}}\) denote the pilot sequences, the received pilot signal can be expressed as \[\mathbf{Y}_{P}=\mathbf{H}[1]\mathbf{S}_{P}+\mathbf{Z}_{P}, \tag{4}\] where \(\mathbf{Z}_{P}\in\mathbb{C}^{M\times L_{P}}\) is the noise matrix whose entries are i.i.d. Gaussian variables with zero mean and variance \(\sigma^{2}\). Vectorizing (4) yields \[\mathbf{y}_{P}=\mathbf{P}\mathbf{h}[1]+\mathbf{z}_{P}, \tag{5}\] where \(\mathbf{y}_{P}=\mathrm{vec}(\mathbf{Y}_{P})\), \(\mathbf{P}=\mathbf{S}_{P}^{T}\otimes\mathbf{I}_{M}\), and \(\mathbf{z}_{P}=\mathrm{vec}(\mathbf{Z}_{P})\). Considering the commonly used orthogonal pilots with normalized average power per pilot symbol, i.e., \(\mathbf{S}_{P}\mathbf{S}_{P}^{H}=L_{P}\mathbf{I}_{K}\), it then follows that \(\mathbf{P}^{H}\mathbf{P}=L_{P}\mathbf{I}_{MK}\). From (2), (3), and (5), the linear MMSE (LMMSE) channel estimate of the \(n\)-th block given \(\mathbf{y}_{P}\) can be obtained via [40] \[\hat{\mathbf{h}}[n]=\mathbf{\Lambda}^{n-1}\mathbf{C_{h}}\big{(}L_{P}\mathbf{C_{h}}+\sigma^{2}\mathbf{I}_{MK}\big{)}^{-1}\mathbf{P}^{H}\mathbf{y}_{P},\quad n=1,2,\cdots,N+1. \tag{6}\] Accordingly, we define the CSI uncertainty as \(\mathbf{\Delta}\mathbf{h}[n]=\mathbf{h}[n]-\hat{\mathbf{h}}[n]\), which is Gaussian distributed with zero mean and covariance \[\mathbf{\Sigma_{h}}[n]=\left\{\begin{array}{ll}\sigma^{2}\mathbf{C_{h}}\big{(}L_{P}\mathbf{C_{h}}+\sigma^{2}\mathbf{I}_{MK}\big{)}^{-1},&n=1,\\ \sigma^{2}\mathbf{\Lambda}^{n-1}\mathbf{C_{h}}\big{(}L_{P}\mathbf{C_{h}}+\sigma^{2}\mathbf{I}_{MK}\big{)}^{-1}\mathbf{\Lambda}^{n-1}+\mathbf{C}_{\tilde{\mathbf{w}}}[n],&n=2,3,\cdots,N+1.\end{array}\right. \tag{7}\] ### _Problem Formulation_ We now consider the data transmission.
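Before formalizing the detection problem, the channel model and estimator above can be checked numerically. The following minimal NumPy sketch simulates the aging recursion (1) and the per-block LMMSE estimate (6); the dimensions, the common \(\rho\), and the white covariance \(\mathbf{C_{h}}=\mathbf{I}\) are illustrative assumptions rather than the paper's simulation settings.

```python
import numpy as np

# Gauss-Markov channel aging (1) and pilot-based LMMSE estimation (6).
M, K, L_P, N, rho, sigma2 = 8, 4, 4, 3, 0.99, 0.1   # here L_P = K (assumed)
MK = M * K
rng = np.random.default_rng(0)
crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

C_h = np.eye(MK)                               # assumed white spatial covariance
Lam = rho * np.eye(MK)                         # Lambda with equal rho_k
Lam_bar = np.sqrt(1.0 - rho**2) * np.eye(MK)   # bar-Lambda

S_P = np.sqrt(L_P) * np.linalg.qr(crandn(K, L_P))[0]  # orthogonal pilots: S_P S_P^H = L_P I_K
P = np.kron(S_P.T, np.eye(M))                  # stacked pilot matrix in (5)

h = [crandn(MK)]                               # h[1] ~ CN(0, C_h)
for _ in range(N):                             # channel aging across blocks, eq. (1)
    h.append(Lam @ h[-1] + Lam_bar @ crandn(MK))

y_P = P @ h[0] + np.sqrt(sigma2) * crandn(M * L_P)
G = C_h @ np.linalg.inv(L_P * C_h + sigma2 * np.eye(MK))
for n in range(N + 1):                         # LMMSE estimate per block, eq. (6)
    h_hat = np.linalg.matrix_power(Lam, n) @ G @ P.conj().T @ y_P
    mse = np.mean(np.abs(h[n] - h_hat) ** 2)
    print(f"block {n + 1}: empirical per-entry error {mse:.3f}")
```

As expected, the empirical error grows with the block index, mirroring the error covariance growth in (7) caused by channel aging.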
Denote \(\mathbf{x}_{t}[n]\in\mathbb{C}^{K\times 1}\) as the transmitted vector at the \(t\)-th time slot of the \(n\)-th block, each entry of which is drawn from a quadrature amplitude modulation (QAM) constellation set \(\mathcal{X}\). Then, the received signal is given by \[\begin{array}{c}\mathbf{y}_{t}[n]=\mathbf{H}[n]\mathbf{x}_{t}[n]+\mathbf{z}_{t}[n],\\ t=1,2,\cdots,L,\quad n=1,2,\cdots,N+1,\end{array} \tag{8}\] where the noise vector \(\mathbf{z}_{t}[n]\sim\mathcal{CN}\left(\mathbf{0},\sigma^{2}\mathbf{I}_{M}\right)\). For notational simplicity, the indices \(t\) and \(n\) will be omitted if there is no confusion. With the imperfect CSI in (6) available, one can recover the transmit signals by directly using the channel estimate as if it were perfect, which corresponds to the mismatched ML criterion: \[\mathbf{\hat{x}}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathcal{X}^{K}}\Big{\|}\mathbf{y}-\mathbf{\hat{H}}\mathbf{x}\Big{\|}_{2}^{2}, \tag{9}\] where \(\mathbf{\hat{H}}\) is the matrix form of \(\mathbf{\hat{h}}\). However, this is not the optimal criterion since the CSI uncertainty is neglected. To fill this gap, we first recast (8) as \[\mathbf{y}=\mathbf{X}\mathbf{h}+\mathbf{z}, \tag{10}\] where \(\mathbf{X}=\mathbf{x}^{T}\otimes\mathbf{I}_{M}\). Then, by regarding \(\mathbf{h}\) as a conditional Gaussian vector with conditional mean \(\mathbf{\hat{h}}\) and covariance \(\mathbf{\Sigma_{h}}\) and performing some mathematical manipulations as in [18], the robust likelihood function can be derived as \[p\left(\mathbf{y}\left|\mathbf{X},\mathbf{\hat{h}}\right.\right)=C\det\left(\mathbf{R}_{\mathbf{X}}^{-1}\right)\exp\left(\mathbf{q}_{\mathbf{X}}^{H}\mathbf{R}_{\mathbf{X}}^{-1}\mathbf{q}_{\mathbf{X}}\right), \tag{11}\] where \(C\) is a constant, \(\mathbf{q}_{\mathbf{X}}=\frac{1}{\sigma^{2}}\mathbf{X}^{H}\mathbf{y}+\mathbf{\Sigma}_{\mathbf{h}}^{-1}\mathbf{\hat{h}}\), \(\mathbf{R}_{\mathbf{X}}=\frac{1}{\sigma^{2}}\mathbf{X}^{H}\mathbf{X}+\mathbf{\Sigma}_{\mathbf{h}}^{-1}\), and \(\mathbf{\Sigma_{h}}\) is given in (7). Furthermore, maximizing (11) yields the following robust ML detection criterion: \[\mathbf{\hat{x}}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathcal{X}^{K}}\ln\det\left(\mathbf{R}_{\mathbf{X}}\right)-\mathbf{q}_{\mathbf{X}}^{H}\mathbf{R}_{\mathbf{X}}^{-1}\mathbf{q}_{\mathbf{X}}. \tag{12}\] Generally, due to the \(K\)-dimensional discrete constraint \(\mathbf{x}\in\mathcal{X}^{K}\), the globally optimal solution of the above problem can only be acquired via an exhaustive search, which requires an unacceptable exponential complexity. In the following section, we provide a neural network based solution to the problem with excellent performance and much reduced complexity. ## III Robust ADMM Detector Based Network Design In this section, we first devise an ADMM-based algorithm to address problem (12). Based on this, we then provide the design of the proposed robust detection network RADMMNet. ### _Robust ADMM Detector_ In order to develop an efficient robust detector, we first reformulate problem (12) into a tractable form. It is known from [38] that the real and the imaginary parts of a \(4^{Q}\)-QAM signal can be decomposed into a weighted sum of \(Q\) binary variables, respectively.
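As a quick illustration of this decomposition, the sketch below enumerates the \(Q\) binary variables for \(Q=2\) (16-QAM) and checks that the resulting constellation has unit average power; the normalization \(\alpha_{Q}=\sqrt{2(4^{Q}-1)/3}\) is the standard choice for unit-energy QAM and is stated here as an assumption.

```python
import itertools
import numpy as np

# Each real dimension of a 4^Q-QAM symbol is a weighted sum of Q binary (+/-1)
# variables, as used in (13) below.
Q = 2                                          # 16-QAM
alpha_Q = np.sqrt(2 * (4**Q - 1) / 3)          # = sqrt(10) for Q = 2 (assumed convention)

levels = sorted(sum(2**(q - 1) * b for q, b in zip(range(1, Q + 1), bits))
                for bits in itertools.product([-1, 1], repeat=Q))
print(levels)                                  # [-3, -1, 1, 3]

const = np.array([(a + 1j * b) / alpha_Q for a in levels for b in levels])
print(np.isclose(np.mean(np.abs(const) ** 2), 1.0))   # True: unit average power
```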
Using this representation, problem (12) can be equivalently expressed by \[\begin{split}&\min_{\mathbf{x},\{\mathbf{v}_{q}\}_{q=1}^{Q}}\ \ln\det\left(\mathbf{R}_{\mathbf{X}}\right)-\mathbf{q}_{\mathbf{X}}^{H}\mathbf{R}_{\mathbf{X}}^{-1}\mathbf{q}_{\mathbf{X}}\\ &\text{s.t.}\quad\mathbf{x}-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{v}_{q}=\mathbf{0},\\ &\quad\mathrm{Re}\{\mathbf{v}_{q}\},\mathrm{Im}\{\mathbf{v}_{q}\}\in\{-1,1\}^{K},q=1,\cdots,Q,\end{split} \tag{13}\] where \(\alpha_{Q}\) is the power normalization factor of QAM signals and \(\{-1,1\}^{K}\) denotes the binary set of \(K\times 1\) vectors with each entry being \(1\) or \(-1\). Furthermore, we relax the discrete binary constraints by the boxed constraints and then impose a sum of quadratic penalty terms to the objective, yielding the following problem: \[\begin{split}&\min_{\mathbf{x},\{\mathbf{v}_{q}\}_{q=1}^{Q}}\ \ln\det\left(\mathbf{R}_{\mathbf{X}}\right)-\mathbf{q}_{\mathbf{X}}^{H}\mathbf{R}_{\mathbf{X}}^{-1}\mathbf{q}_{\mathbf{X}}-\sum\limits_{q=1}^{Q}\beta_{q}\left\|\mathbf{v}_{q}\right\|_{2}^{2}\\ &\mathrm{s.t.}\quad\mathbf{x}-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{v}_{q}=\mathbf{0},\\ &\quad\mathrm{Re}\{\mathbf{v}_{q}\},\mathrm{Im}\{\mathbf{v}_{q}\}\in[-1,1]^{K},q=1,\cdots,Q,\end{split} \tag{14}\] where \(\beta_{q}>0,q=1,\cdots,Q\) are the penalty parameters, and \([-1,1]^{K}\) denotes the set of \(K\times 1\) vectors with each entry lying in the interval \([-1,1]\). Note that the added penalty term \(-\sum\limits_{q=1}^{Q}\beta_{q}\left\|\mathbf{v}_{q}\right\|_{2}^{2}\) makes the integer solutions more preferable and therefore tightens the boxed constraints [33]. Based on the reformulation, we apply the ADMM algorithm to tackle the problem as follows [8]. To facilitate the application of the ADMM framework, we construct the scaled augmented Lagrangian function of problem (14) as \[\begin{split}& L_{\mu}\left(\mathbf{x},\{\mathbf{v}_{q}\}_{q=1}^{Q},\boldsymbol{\lambda}\right)=\ln\det(\mathbf{R}_{\mathbf{X}})-\mathbf{q}_{\mathbf{X}}^{H}\mathbf{R}_{\mathbf{X}}^{-1}\mathbf{q}_{\mathbf{X}}\\ &-\sum\limits_{q=1}^{Q}\beta_{q}\left\|\mathbf{v}_{q}\right\|_{2}^{2}+\frac{\mu}{2}\left\|\mathbf{x}-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{v}_{q}+\boldsymbol{\lambda}\right\|_{2}^{2},\end{split} \tag{15}\] where \(\boldsymbol{\lambda}\) and \(\mu>0\) denote the scaled dual variables and the corresponding penalty parameter, respectively. Thus, the ADMM procedure can be described as follows: \[\{\mathbf{v}_{q}^{i}\}_{q=1}^{Q}=\operatorname*{arg\,min}_{\mathbf{v}_{q}\in\mathbb{B}^{K}}L_{\mu}\left(\mathbf{x}^{i-1},\{\mathbf{v}_{q}\}_{q=1}^{Q},\boldsymbol{\lambda}^{i-1}\right), \tag{16a}\] \[\mathbf{x}^{i}=\operatorname*{arg\,min}_{\mathbf{x}}L_{\mu}\left(\mathbf{x},\{\mathbf{v}_{q}^{i}\}_{q=1}^{Q},\boldsymbol{\lambda}^{i-1}\right),\] (16b) \[\boldsymbol{\lambda}^{i}=\boldsymbol{\lambda}^{i-1}+\mathbf{x}^{i}-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{v}_{q}^{i}, \tag{16c}\] where the superscript \(i\) denotes the index of ADMM iterations and \(\mathbb{B}^{K}\) denotes the set of \(K\times 1\) vectors, both the real and the imaginary parts of whose entries belong to the interval \([-1,1]\). For subproblem (16a), the difficulty of obtaining the optimal solution lies in the fact that \(\{\mathbf{v}_{q}\}_{q=1}^{Q}\) are coupled with each other.
Nevertheless, we find that \(L_{\mu}(\mathbf{x}^{i-1},\{\mathbf{v}_{q}\}_{q=1}^{Q},\boldsymbol{\lambda}^{i-1})\) is convex with respect to (w.r.t.) each \(\mathbf{v}_{q}\) when the condition \(\mu 4^{q-1}-2\alpha_{Q}^{2}\beta_{q}>0\) is satisfied, which inspires us to apply the block coordinate descent (BCD) method [42]. Consequently, by taking the derivative of (15) w.r.t. each \(\mathbf{v}_{q}\) and setting it to zero, a closed-form solution to \(\mathbf{v}_{q}\) can be obtained as \[\mathbf{v}_{q}^{i}=\mathcal{P}_{\mathbb{B}^{K}}\left\{\frac{2^{q-1}\mu\boldsymbol{\eta}_{q}^{i}}{\mu 4^{q-1}-2\alpha_{Q}^{2}\beta_{q}}\right\},\quad q=1,2,\cdots,Q, \tag{17}\] where \(\boldsymbol{\eta}_{q}^{i}=\alpha_{Q}\left(\mathbf{x}^{i-1}+\boldsymbol{\lambda}^{i-1}\right)-\sum\limits_{p<q}2^{p-1}\mathbf{v}_{p}^{i}-\sum\limits_{p>q}2^{p-1}\mathbf{v}_{p}^{i-1}\) and \(\mathcal{P}_{\mathbb{B}^{K}}\{\cdot\}\) means projecting the real and the imaginary parts of each entry of the input vector onto the interval \([-1,1]\). The remaining subproblem (16b) is more challenging due to its complicated non-convex objective. To make it tractable, we replace the non-trivial function \(\ln\det\left(\mathbf{R}_{\mathbf{X}}\right)-\mathbf{q}_{\mathbf{X}}^{H}\mathbf{R}_{\mathbf{X}}^{-1}\mathbf{q}_{\mathbf{X}}\) with an appropriate upper-bound surrogate function in each ADMM iteration. In this way, the original minimization problem is reduced to minimizing the surrogate function that is easier to handle. Concretely, we first establish an upper bound on the function \(\ln\det\left(\mathbf{R}_{\mathbf{X}}\right)\). Since the function is concave in \(\mathbf{R}_{\mathbf{X}}\), it is majorized by its first-order Taylor expansion as \[\begin{split}\ln\det\left(\mathbf{R}_{\mathbf{X}}\right)&\leq\ln\det\left(\mathbf{R}_{\mathbf{X}^{i-1}}\right)+\mathrm{tr}\left(\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\left(\mathbf{R}_{\mathbf{X}}-\mathbf{R}_{\mathbf{X}^{i-1}}\right)\right)\\ &\stackrel{{(a)}}{{=}}\frac{1}{\sigma^{2}}\mathrm{tr}\left(\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\mathbf{X}^{H}\mathbf{X}\right)+\mathrm{constant}\\ &\stackrel{{(b)}}{{=}}\mathbf{x}^{H}\mathbf{C}^{i-1}\mathbf{x}+\mathrm{constant},\end{split} \tag{18}\] where (a) uses the definition of \(\mathbf{R}_{\mathbf{X}}=\frac{1}{\sigma^{2}}\mathbf{X}^{H}\mathbf{X}+\mathbf{\Sigma}_{\mathbf{h}}^{-1}\), and (b) is acquired based on \(\mathbf{X}=\mathbf{x}^{T}\otimes\mathbf{I}_{M}\) with \(\mathbf{C}^{i-1}\) defined by \[\mathbf{C}^{i-1}=\frac{1}{\sigma^{2}}\big{(}\mathbf{E}^{H}\left(\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\odot\left(\mathbf{1}_{K\times K}\otimes\mathbf{I}_{M}\right)\right)\mathbf{E}\big{)}^{T}, \tag{19}\] where \(\mathbf{E}=\mathbf{I}_{K}\otimes\mathbf{1}_{M\times 1}\), \(\mathbf{A}\otimes\mathbf{B}\) and \(\mathbf{A}\odot\mathbf{B}\) represent the Kronecker product and the Hadamard product of \(\mathbf{A}\) and \(\mathbf{B}\), respectively, \(\mathbf{I}_{k}\) denotes the identity matrix of size \(k\times k\), and \(\mathbf{1}_{k_{1}\times k_{2}}\) denotes the all-ones vector or matrix of size \(k_{1}\times k_{2}\).
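The majorization in (18) rests on the concavity of \(\ln\det(\cdot)\) over positive definite matrices, so its first-order Taylor expansion at any point is a global upper bound. The following short numerical check illustrates this on random positive definite matrices (sizes are arbitrary):

```python
import numpy as np

# Check: ln det(R) <= ln det(R0) + tr(R0^{-1} (R - R0)) for any PD pair (R, R0),
# which is the bound used in (18).
rng = np.random.default_rng(2)
def rand_pd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)          # well-conditioned PD matrix

R, R0 = rand_pd(6), rand_pd(6)
lhs = np.linalg.slogdet(R)[1]
rhs = np.linalg.slogdet(R0)[1] + np.trace(np.linalg.inv(R0) @ (R - R0))
print(lhs <= rhs + 1e-9)                    # True for any PD pair
```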
On the other hand, concerning the term \(\mathbf{q}_{\mathbf{X}}^{H}\mathbf{R}_{\mathbf{X}}^{-1}\mathbf{q}_{\mathbf{X}}\) which is jointly convex in \(\mathbf{q}_{\mathbf{X}}\) and \(\mathbf{R}_{\mathbf{X}}\), we obtain its lower bound via the first-order Taylor expansion as \[\begin{split}\mathbf{q}_{\mathbf{X}}^{H}\mathbf{R}_{\mathbf{X}}^{-1}\mathbf{q}_{\mathbf{X}}&\geq-\mathrm{tr}\left(\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\mathbf{q}_{\mathbf{X}^{i-1}}\mathbf{q}_{\mathbf{X}^{i-1}}^{H}\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\mathbf{R}_{\mathbf{X}}\right)+2\mathrm{Re}\left\{\mathbf{q}_{\mathbf{X}^{i-1}}^{H}\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\mathbf{q}_{\mathbf{X}}\right\}\\ &=2\mathrm{Re}\{\mathbf{x}^{H}\mathbf{d}^{i-1}\}-\mathbf{x}^{H}\mathbf{F}^{i-1}\mathbf{x}+\mathrm{constant},\end{split} \tag{20}\] where the last equality is achieved by using \(\mathbf{X}=\mathbf{x}^{T}\otimes\mathbf{I}_{M}\). In addition, \(\mathbf{A}^{i-1}=\mathbf{y}\mathbf{q}_{\mathbf{X}^{i-1}}^{H}\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\), \(\mathbf{B}^{i-1}=\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\mathbf{q}_{\mathbf{X}^{i-1}}\mathbf{q}_{\mathbf{X}^{i-1}}^{H}\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\), and \(\mathbf{d}^{i-1}\) and \(\mathbf{F}^{i-1}\) respectively take the forms: \[\mathbf{d}^{i-1}=\frac{1}{\sigma^{2}}\left(\mathbf{1}_{1\times M}\left(\mathbf{A}^{i-1}\odot(\mathbf{1}_{1\times K}\otimes\mathbf{I}_{M})\right)\mathbf{E}\right)^{T}, \tag{21}\] and \[\mathbf{F}^{i-1}=\frac{1}{\sigma^{2}}\big{(}\mathbf{E}^{H}\left(\mathbf{B}^{i-1}\ \odot(\mathbf{1}_{K\times K}\otimes\mathbf{I}_{M})\right)\mathbf{E}\big{)}^{T}. \tag{22}\] Combining (18) and (20), problem (16b) boils down to minimizing the surrogate upper bound function as follows: \[\mathbf{x}^{i}=\operatorname*{arg\,min}_{\mathbf{x}}\mathbf{x}^{H}\big{(}\mathbf{C}^{i-1}+\mathbf{F}^{i-1}\big{)}\mathbf{x}-2\mathrm{Re}\{\mathbf{x}^{H}\mathbf{d}^{i-1}\}+\frac{\mu}{2}\left\|\mathbf{x}+\boldsymbol{\psi}^{i-1}\right\|_{2}^{2}, \tag{23}\] where \(\boldsymbol{\psi}^{i-1}=-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{v}_{q}^{i}+\boldsymbol{\lambda}^{i-1}\). The objective function of problem (23) is convex quadratic w.r.t. \(\mathbf{x}\) since \(\mathbf{C}^{i-1}\) and \(\mathbf{F}^{i-1}\) are both positive semidefinite and \(\mu>0\). Thus, the update of \(\mathbf{x}\) can be easily obtained by \[\mathbf{x}^{i}=\left(2\left(\mathbf{C}^{i-1}+\mathbf{F}^{i-1}\right)+\mu\mathbf{I}_{K}\right)^{-1}\left(-\mu\boldsymbol{\psi}^{i-1}+2\mathbf{d}^{i-1}\right). \tag{24}\] At this step, we are able to address problem (14) by iteratively calculating the closed-form expressions in (17), (24) and (16c) until convergence is reached, which is also summarized in Algorithm 1. Note that the convergence of ADMM for a general nonconvex problem remains an open issue, which is even harder to establish with inexact optimization of the involved subproblems [8]. In this work, we only obtain inexact solutions to subproblems (16a) and (16b). Nonetheless, our ADMM algorithm can still converge according to the empirical results. In particular, the above ADMM algorithm will be used for developing a neural network based detector via the deep unfolding technique in the next subsection. ``` 0:\(\mathbf{R}\) and \(\mathbf{q}\). 1: Initialize: \(\mathbf{s}_{0}\), \(\mathbf{r}_{0}=\mathbf{q}-\mathbf{R}\mathbf{s}_{0}\), \(\mathbf{p}_{0}=\mathbf{r}_{0}\), \(\xi=0\). 2:repeat 3:\(\xi\leftarrow\xi+1\). 4:\(\tau_{\xi}=\mathbf{r}_{\xi-1}^{H}\mathbf{r}_{\xi-1}/\mathbf{p}_{\xi-1}^{H}\mathbf{R}\mathbf{p}_{\xi-1}\). 5:\(\mathbf{s}_{\xi}=\mathbf{s}_{\xi-1}+\tau_{\xi}\mathbf{p}_{\xi-1}\). 6:\(\mathbf{r}_{\xi}=\mathbf{r}_{\xi-1}-\tau_{\xi}\mathbf{R}\mathbf{p}_{\xi-1}\). 7:\(v_{\xi}=\mathbf{r}_{\xi}^{H}\mathbf{r}_{\xi}/\mathbf{r}_{\xi-1}^{H}\mathbf{r}_{\xi-1}\). 8:\(\mathbf{p}_{\xi}=\mathbf{r}_{\xi}+v_{\xi}\mathbf{p}_{\xi-1}\).
9:until convergence. 10:\(\hat{\mathbf{s}}=\mathbf{s}_{\xi}\). ``` **Algorithm 2** CG Algorithm for Calculating \(\mathbf{R}^{-1}\mathbf{q}\) ### _Proposed RADMMNet_ In order to facilitate the proposed network design, we first perform some simplifications for the above ADMM algorithm. Specifically, the \(MK\)-dimensional matrix inversion \(\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\) involved in the calculation of \(\mathbf{C}^{i-1}\) in (19) requires a high complexity up to \(\mathcal{O}\left(M^{3}K^{3}\right)\), which can lead to a huge computational cost for both offline training and online calculation. To handle this, we further relax the upper bound in (18) by \[\ln\det\left(\mathbf{R}_{\mathbf{X}}\right)\leq\mathbf{x}^{H}\mathbf{C}^{i-1}\mathbf{x}+\mathrm{constant}\leq\varepsilon^{i-1}\|\mathbf{x}\|_{2}^{2}+\mathrm{constant}, \tag{25}\] where \(\mathbf{C}^{i-1}\) is given in (19) and \(\varepsilon^{i-1}\) is set to satisfy \(\varepsilon^{i-1}\mathbf{I}_{K}\succeq\mathbf{C}^{i-1}\). We note that we regard \(\varepsilon^{i-1}\) as a trainable parameter in the proposed network, whose value is determined after the network training is completed. In addition, when computing \(\mathbf{x}^{i}\) using (24), we need to calculate \(\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\mathbf{q}_{\mathbf{X}^{i-1}}\) to obtain \(\mathbf{d}^{i-1}\) and \(\mathbf{F}^{i-1}\) (see (21) and (22)), which will also introduce a very high complexity of \(\mathcal{O}\left(M^{3}K^{3}+M^{2}K^{2}\right)\). To avoid this, we adopt the conjugate gradient (CG) method to approximate \(\hat{\mathbf{s}}=\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\mathbf{q}_{\mathbf{X}^{i-1}}\) by iteratively minimizing the quadratic function \(f(\mathbf{s})=\mathbf{s}^{H}\mathbf{R}_{\mathbf{X}^{i-1}}\mathbf{s}-\mathbf{q}_{\mathbf{X}^{i-1}}^{H}\mathbf{s}\) [43]. The detailed procedure of the CG method is described in Algorithm 2, where the subscript \(\mathbf{X}^{i-1}\) is temporarily omitted for the brevity of the notations. Specifically, line 5, line 6, and line 8 represent the update of the solution \(\mathbf{s}_{\xi}\), the residual \(\mathbf{r}_{\xi}\), and the conjugate direction \(\mathbf{p}_{\xi}\) in the \(\xi\)-th CG iteration, respectively, while line 4 and line 7 give the expressions of the corresponding step sizes \(\tau_{\xi}\) and \(v_{\xi}\). By resorting to the CG method, the complexity of computing \(\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\mathbf{q}_{\mathbf{X}^{i-1}}\) is reduced to \(\mathcal{O}\left(I_{\mathrm{CG}}M^{2}K^{2}\right)\), where \(I_{\mathrm{CG}}\) denotes the number of CG iterations. Moreover, the iterative nature of the CG method is also friendly to the neural network design¹. On the other hand, it is noteworthy that a good initialization of \(\mathbf{s}_{0}\) can improve the convergence of the CG method, which is generally set to \(\mathbf{0}\) if there is no _a priori_ knowledge. Fortunately, we find that the value of \(\mathbf{R}_{\mathbf{X}^{i}}^{-1}\mathbf{q}_{\mathbf{X}^{i}}\) does not change dramatically as the iteration index \(i\) increases. Therefore, when using the CG method to approximate \(\mathbf{R}_{\mathbf{X}^{i-1}}^{-1}\mathbf{q}_{\mathbf{X}^{i-1}}\) in the \(i\)-th ADMM iteration (\(i>1\)), \(\mathbf{s}_{0}\) can be initialized to \(\mathbf{R}_{\mathbf{X}^{i-2}}^{-1}\mathbf{q}_{\mathbf{X}^{i-2}}\) obtained in the \((i-1)\)-th ADMM iteration, which can efficiently reduce the required number of CG iterations down to \(1\).
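For concreteness, a direct Python transcription of Algorithm 2 is given below, with a fixed iteration budget as used in the unfolded network (the paper fixes \(I_{\mathrm{CG}}=15\) in the first layer); the helper name and test sizes are arbitrary.

```python
import numpy as np

def cg_solve(R, q, s0=None, iters=15):
    """Algorithm 2: approximate s = R^{-1} q for Hermitian PD R by CG.
    A warm start s0 (e.g., the previous ADMM iteration's solution) lets the
    unfolded network get away with a single CG step per layer."""
    s = np.zeros_like(q) if s0 is None else s0.copy()
    r = q - R @ s                                  # residual (line 1)
    p = r.copy()                                   # conjugate direction
    for _ in range(iters):
        Rp = R @ p
        tau = np.vdot(r, r) / np.vdot(p, Rp)       # step size (line 4)
        s = s + tau * p                            # solution update (line 5)
        r_new = r - tau * Rp                       # residual update (line 6)
        nu = np.vdot(r_new, r_new) / np.vdot(r, r) # (line 7)
        p = r_new + nu * p                         # direction update (line 8)
        r = r_new
    return s

# quick check against a direct solve (CG is exact after n steps in theory)
rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = A @ A.conj().T + n * np.eye(n)
q = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.allclose(cg_solve(R, q, iters=n), np.linalg.solve(R, q)))  # True
```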
This processing can also be regarded as transferring the intermediate variables to strengthen the connection between the network layers. Footnote 1: Although there exist other iterative methods for the same purpose, such as the Newton method and the steepest gradient descent method, they suffer from a high complexity and a low convergence rate, respectively [32]. As for the CG method, it only involves matrix multiplications per iteration with a much lower complexity. Meanwhile, the solution is updated along an independent conjugate direction with an exactly calculated step size in each CG iteration, which means that each direction is fully exploited once and no direction is repeated in the search procedure, leading to a faster convergence rate. Therefore, the CG method is preferable. After performing the above modifications for the proposed ADMM algorithm, we are ready to develop a neural network solution to the robust ML detection problem in (12). We employ the deep unfolding technique to represent the ADMM iterations by \(I_{\text{max}}\) layers. Note that the involved penalty parameters \(\mu\) and \(\{\beta_{q}\}_{q=1}^{Q}\), as well as the parameter \(\varepsilon\) in (25), can significantly affect the performance and convergence of the proposed ADMM algorithm. In the proposed network, we set them to be trainable parameters which can be optimized via offline training, thus effectively avoiding a cumbersome numerical search. Furthermore, we adopt layer-wise parameters, i.e., \(\{\mu^{i},\{\beta_{q}^{i}\}_{q=1}^{Q},\varepsilon^{i}\}_{i=1}^{I_{\text{max}}}\), which can provide more degrees of freedom and potentially accelerate the convergence and improve the performance. Note that (21) and (22) can be conveniently implemented in TensorFlow by convolutional layers with an identity matrix as the fixed filter. Instead of strictly following the derived expressions, we improve the learning ability of our network by replacing the fixed filter with a trainable one, so that the resulting deep unfolding network can also take advantage of the introduced data-driven structure. This yields: \[\begin{split}\mathbf{d}^{i-1}&=\frac{1}{\sigma^{2}}\left(\mathbf{1}_{1\times M}\left(\mathbf{A}^{i-1}\odot(\mathbf{1}_{1\times K}\otimes\mathbf{W})\right)\mathbf{E}\right)^{T}\\ &=\mathrm{conv}\left(\frac{1}{\sigma^{2}}\left(\mathbf{A}^{i-1}\right)^{\mathrm{T}};\mathbf{W}\right),\end{split} \tag{26}\] and \[\begin{split}\mathbf{F}^{i-1}&=\frac{1}{\sigma^{2}}\big{(}\mathbf{E}^{H}\left(\mathbf{B}^{i-1}\odot(\mathbf{1}_{K\times K}\otimes\mathbf{W})\right)\mathbf{E}\big{)}^{T}\\ &=\mathrm{conv}\left(\frac{1}{\sigma^{2}}\left(\mathbf{B}^{i-1}\right)^{\mathrm{T}};\mathbf{W}\right),\end{split} \tag{27}\] where \(\mathbf{E}=\mathbf{I}_{K}\otimes\mathbf{1}_{M\times 1}\), and \(\mathrm{conv}\left(\cdot;\mathbf{W}\right)\) means the convolutional operation with \(\mathbf{W}\in\mathbb{R}^{M\times M}\) being the trainable filter. To guarantee the consistency of the output dimensions, the stride and the depth of the convolutional layers are set to \(M\) and \(1\), respectively. Unlike other layer-wise parameters, we share the same \(\mathbf{W}\) across network layers to limit the number of trainable parameters, which can effectively alleviate the training difficulty and avoid the overfitting problem.
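The convolutional reinterpretation in (26) and (27) can be sanity-checked numerically: with the identity filter, (21) simply collects the trace of each \(M\times M\) diagonal block of its argument, which is exactly what a stride-\(M\) convolution computes. A minimal NumPy check (with \(\sigma^{2}=1\) for brevity):

```python
import numpy as np

# Check that (21) with W = I_M reduces to per-block traces of A^{i-1}.
M, K = 3, 4
rng = np.random.default_rng(3)
A = rng.standard_normal((M, M * K)) + 1j * rng.standard_normal((M, M * K))

mask = np.kron(np.ones((1, K)), np.eye(M))     # 1_{1xK} kron I_M
E = np.kron(np.eye(K), np.ones((M, 1)))        # I_K kron 1_{Mx1}
d = (np.ones((1, M)) @ (A * mask) @ E).T       # eq. (21), sigma^2 = 1

d_blocks = np.array([[np.trace(A[:, k * M:(k + 1) * M])] for k in range(K)])
print(np.allclose(d, d_blocks))                # True
```

Replacing the identity filter by a trainable \(\mathbf{W}\) then generalizes this block-trace to a learned weighted block-sum, which is the data-driven ingredient of RADMMNet.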
To summarize, the \(i\)-th layer of the proposed network can be constructed as follows: \[\mathbf{v}_{q}^{i}=\mathcal{P}_{\mathbb{B}^{K}}\left\{\frac{2^{q-1}\mu^{i}\boldsymbol{\eta}_{q}^{i}}{\mu^{i}4^{q-1}-2\alpha_{Q}^{2}\beta_{q}^{i}}\right\},\quad q=1,2,\cdots,Q, \tag{28a}\] \[\mathbf{x}^{i}=\left(2\mathbf{F}^{i-1}+\left(\varepsilon^{i}+\mu^{i}\right)\mathbf{I}_{K}\right)^{-1}\left(-\mu^{i}\boldsymbol{\psi}^{i-1}+2\mathbf{d}^{i-1}\right),\] (28b) \[\boldsymbol{\lambda}^{i}=\boldsymbol{\lambda}^{i-1}+\mathbf{x}^{i}-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{v}_{q}^{i}, \tag{28c}\] where \(\boldsymbol{\eta}_{q}^{i}=\alpha_{Q}\left(\mathbf{x}^{i-1}+\boldsymbol{\lambda}^{i-1}\right)-\sum\limits_{p<q}2^{p-1}\mathbf{v}_{p}^{i}-\sum\limits_{p>q}2^{p-1}\mathbf{v}_{p}^{i-1}\), and \(\boldsymbol{\psi}^{i-1}=-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{v}_{q}^{i}+\boldsymbol{\lambda}^{i-1}\). Furthermore, in the expressions of \(\boldsymbol{\psi}^{i-1}\) and (28c), we additionally introduce a trainable parameter \(\varsigma^{i}\) by replacing the original term \(-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{v}_{q}^{i}\) with the term \(-\frac{\varsigma^{i}}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{v}_{q}^{i}-\left(1-\varsigma^{i}\right)\mathbf{x}^{i-1}\). This corresponds to the over-relaxation scheme as suggested by [8, 44], which is expected to improve the convergence of ADMM iterations. Finally, the block diagram of the proposed RADMMNet is illustrated in Fig. 2. Fig. 2: Block diagram of RADMMNet. ## IV Low-Complexity Robust ADMM Detector Based Network Design Although we have bypassed the high-dimensional matrix inversion, RADMMNet can still be computationally intensive for the system with a large number of antennas, which imposes huge burdens on both offline and online processes. In this section, we derive a low-complexity robust ADMM detector and further provide the corresponding neural network design. ### _Low-Complexity Robust ADMM Detector_ To begin with, we rewrite (8) as \[\mathbf{y}=\hat{\mathbf{H}}\mathbf{x}+\boldsymbol{\Delta}\mathbf{H}\mathbf{x}+\mathbf{z}, \tag{29}\] where \(\hat{\mathbf{H}}\) is the matrix form of \(\hat{\mathbf{h}}\) and \(\boldsymbol{\Delta}\mathbf{H}=\mathbf{H}-\hat{\mathbf{H}}\) is the corresponding zero-mean Gaussian distributed channel error. The row covariance matrix of \(\boldsymbol{\Delta}\mathbf{H}\), i.e., \(\boldsymbol{\Sigma}_{\mathbf{H}}\triangleq\mathbb{E}\left\{\left(\boldsymbol{\Delta}\mathbf{H}\right)\left(\boldsymbol{\Delta}\mathbf{H}\right)^{H}\right\}\), can be derived from \(\boldsymbol{\Sigma}_{\mathbf{h}}\) in (7) by \[\boldsymbol{\Sigma}_{\mathbf{H}}=\left(\mathbf{1}_{1\times K}\otimes\mathbf{I}_{M}\right)\left(\boldsymbol{\Sigma}_{\mathbf{h}}\odot\left(\mathbf{I}_{K}\otimes\mathbf{1}_{M\times M}\right)\right)\left(\mathbf{1}_{K\times 1}\otimes\mathbf{I}_{M}\right). \tag{30}\] Based on (29), the key idea behind the low-complexity design is to impose a Gaussian approximation on the residual term \(\tilde{\mathbf{r}}=\boldsymbol{\Delta}\mathbf{H}\mathbf{x}+\mathbf{z}\). That is, we assume that \(\tilde{\mathbf{r}}\) follows a Gaussian distribution with zero mean and the covariance \(\mathbf{C}_{\tilde{\mathbf{r}}}\triangleq\mathbb{E}\left\{\left(\boldsymbol{\Delta}\mathbf{H}\mathbf{x}+\mathbf{z}\right)\left(\boldsymbol{\Delta}\mathbf{H}\mathbf{x}+\mathbf{z}\right)^{H}\right\}=\boldsymbol{\Sigma}_{\mathbf{H}}+\sigma^{2}\mathbf{I}_{M}\), where the expectation is taken w.r.t.
\(\mathbf{\Delta}\mathbf{H}\), \(\mathbf{z}\), and \(\mathbf{x}\). Thus, the robust ML criterion can be simplified to \[\mathbf{\hat{x}}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathcal{X}^{K}}\left(\mathbf{y}-\hat{\mathbf{H}}\mathbf{x}\right)^{H}\mathbf{C}_{\tilde{\mathbf{r}}}^{-1}\left(\mathbf{y}-\hat{\mathbf{H}}\mathbf{x}\right). \tag{31}\] We also resort to the ADMM framework to address problem (31), where the details are analogous to those in Section III-A. Specifically, following the reformulation of problem (14), we recast problem (31) by \[\min_{\mathbf{x},\{\mathbf{u}_{q}\}_{q=1}^{Q}}\left(\mathbf{y}-\hat{\mathbf{H}}\mathbf{x}\right)^{H}\mathbf{C}_{\tilde{\mathbf{r}}}^{-1}\left(\mathbf{y}-\hat{\mathbf{H}}\mathbf{x}\right)-\sum\limits_{q=1}^{Q}\kappa_{q}\left\|\mathbf{u}_{q}\right\|_{2}^{2} \tag{32}\] \[\mathrm{s.t.}\quad\mathbf{x}-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{u}_{q}=\mathbf{0},\] \[\mathrm{Re}\{\mathbf{u}_{q}\},\mathrm{Im}\{\mathbf{u}_{q}\}\in[-1,1]^{K},q=1,\cdots,Q,\] where \(\kappa_{q}>0,q=1,\cdots,Q\) are the penalty parameters. Then, the corresponding augmented Lagrangian function can be expressed as \[L_{\delta}\left(\mathbf{x},\{\mathbf{u}_{q}\}_{q=1}^{Q},\mathbf{\theta}\right)=\left(\mathbf{y}-\hat{\mathbf{H}}\mathbf{x}\right)^{H}\mathbf{C}_{\tilde{\mathbf{r}}}^{-1}\left(\mathbf{y}-\hat{\mathbf{H}}\mathbf{x}\right) \tag{33}\] \[-\sum\limits_{q=1}^{Q}\kappa_{q}\left\|\mathbf{u}_{q}\right\|_{2}^{2}+\frac{\delta}{2}\left\|\mathbf{x}-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{u}_{q}+\mathbf{\theta}\right\|_{2}^{2},\] where \(\mathbf{\theta}\) and \(\delta>0\) denote the scaled dual variables and the corresponding penalty parameter, respectively. Therefore, the \(i\)-th ADMM iteration can be given by \[\{\mathbf{u}_{q}^{i}\}_{q=1}^{Q}=\operatorname*{arg\,min}_{\mathbf{u}_{q}\in\mathbb{B}^{K}}L_{\delta}\left(\mathbf{x}^{i-1},\{\mathbf{u}_{q}\}_{q=1}^{Q},\mathbf{\theta}^{i-1}\right), \tag{34a}\] \[\mathbf{x}^{i}=\operatorname*{arg\,min}_{\mathbf{x}}L_{\delta}\left(\mathbf{x},\{\mathbf{u}_{q}^{i}\}_{q=1}^{Q},\mathbf{\theta}^{i-1}\right),\] (34b) \[\mathbf{\theta}^{i}=\mathbf{\theta}^{i-1}+\mathbf{x}^{i}-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{u}_{q}^{i}. \tag{34c}\] Note that subproblem (34a) is almost the same as subproblem (16a), whose solution can be similarly achieved by \[\mathbf{u}_{q}^{i}=\mathcal{P}_{\mathbb{B}^{K}}\left\{\frac{2^{q-1}\delta\mathbf{\omega}_{q}^{i}}{\delta 4^{q-1}-2\alpha_{Q}^{2}\kappa_{q}}\right\},\quad q=1,2,\cdots,Q, \tag{35}\] where \(\mathbf{\omega}_{q}^{i}=\alpha_{Q}\left(\mathbf{x}^{i-1}+\mathbf{\theta}^{i-1}\right)-\sum\limits_{p<q}2^{p-1}\mathbf{u}_{p}^{i}-\sum\limits_{p>q}2^{p-1}\mathbf{u}_{p}^{i-1}\). Moreover, by dropping the terms independent of \(\mathbf{x}\), we equivalently rewrite subproblem (34b) as \[\mathbf{x}^{i}=\operatorname*{arg\,min}_{\mathbf{x}}\mathbf{x}^{H}\mathbf{\Phi}\mathbf{x}-2\mathrm{Re}\{\mathbf{x}^{H}\mathbf{\gamma}^{i-1}\}, \tag{36}\] where \(\mathbf{\Phi}=\hat{\mathbf{H}}^{H}\mathbf{C}_{\tilde{\mathbf{r}}}^{-1}\hat{\mathbf{H}}+\frac{\delta}{2}\mathbf{I}_{K}\) and \(\mathbf{\gamma}^{i-1}=\hat{\mathbf{H}}^{H}\mathbf{C}_{\tilde{\mathbf{r}}}^{-1}\mathbf{y}+\frac{\delta}{2\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{u}_{q}^{i}-\frac{\delta}{2}\mathbf{\theta}^{i-1}\). Since \(\mathbf{\Phi}\) is positive definite, the optimal solution of \(\mathbf{x}\) in each ADMM update is \[\mathbf{x}^{i}=\mathbf{\Phi}^{-1}\mathbf{\gamma}^{i-1}.
\tag{37}\] The above procedure is summarized in Algorithm 3. Different from Algorithm 1 for problem (14), the update of \(\mathbf{x}\) in Algorithm 3 is obtained in an exact closed form without the need to construct a surrogate function. This simplification can effectively reduce the computational complexity (see Section VI-C for details). ``` 0:\(\mathbf{y}\), \(\hat{\mathbf{H}}\), and \(\mathbf{\Sigma}_{\mathbf{H}}\). 1:Initialize: \(\mathbf{x}^{0}\), \(\{\mathbf{u}_{q}^{0}\}_{q=1}^{Q}\), \(\mathbf{\theta}^{0}\), \(i=0\). 2:repeat 3:\(i\gets i+1\). 4: Sequentially update \(\mathbf{u}_{q}^{i}\) for \(q=1,2,\cdots,Q\) via (35). 5: Update \(\mathbf{x}^{i}\) via (37). 6: Update \(\mathbf{\theta}^{i}\) via (34c). 7:until convergence. 8:\(\hat{\mathbf{x}}=\mathbf{x}^{i}\). ``` **Algorithm 3** Low-Complexity Robust ADMM Detection Algorithm ### _Proposed LCRADMMNet_ Similar to RADMMNet, it is straightforward to build a model-driven network, referred to as LCRADMMNet, by unfolding the iterations of the derived low-complexity robust ADMM detection algorithm. Note that \(\mathbf{\Phi}^{-1}=\left(\hat{\mathbf{H}}^{H}\mathbf{C}_{\tilde{\mathbf{r}}}^{-1}\hat{\mathbf{H}}+\frac{\delta}{2}\mathbf{I}_{K}\right)^{-1}\) in (37) involves an \(M\)-dimensional matrix inversion (\(\mathbf{C}_{\tilde{\mathbf{r}}}^{-1}\)) and a \(K\)-dimensional matrix inversion (\(\mathbf{\Phi}^{-1}\)), which, fortunately, only needs to be calculated once during the ADMM iterations as long as \(\delta\) remains constant. To inherit the complexity advantage, we set \(\delta\) to be trainable and share it across layers. Besides, a set of layer-wise trainable parameters, denoted by \(\{\{\kappa_{q}^{i}\}_{q=1}^{Q}\}_{i=1}^{I_{\text{max}}}\), are used to take the role of the penalty parameters \(\{\kappa_{q}\}_{q=1}^{Q}\) involved in Algorithm 3. Therefore, the \(i\)-th layer of the proposed network can be expressed as: \[\mathbf{u}_{q}^{i}=\mathcal{P}_{\mathbb{B}^{K}}\left\{\frac{2^{q-1}\delta\mathbf{\omega}_{q}^{i}}{\delta 4^{q-1}-2\alpha_{Q}^{2}\kappa_{q}^{i}}\right\},\quad q=1,2,\cdots,Q, \tag{38a}\] \[\mathbf{x}^{i}=\mathbf{\Phi}^{-1}\mathbf{\gamma}^{i-1},\] (38b) \[\mathbf{\theta}^{i}=\mathbf{\theta}^{i-1}+\mathbf{x}^{i}-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{u}_{q}^{i}, \tag{38c}\] where \(\mathbf{\omega}_{q}^{i}=\alpha_{Q}\left(\mathbf{x}^{i-1}+\mathbf{\theta}^{i-1}\right)-\sum\limits_{p<q}2^{p-1}\mathbf{u}_{p}^{i}-\sum\limits_{p>q}2^{p-1}\mathbf{u}_{p}^{i-1}\), \(\mathbf{\Phi}=\hat{\mathbf{H}}^{H}\mathbf{C}_{\tilde{\mathbf{r}}}^{-1}\hat{\mathbf{H}}+\frac{\delta}{2}\mathbf{I}_{K}\), and \(\mathbf{\gamma}^{i-1}=\hat{\mathbf{H}}^{H}\mathbf{C}_{\tilde{\mathbf{r}}}^{-1}\mathbf{y}+\frac{\delta}{2\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{u}_{q}^{i}-\frac{\delta}{2}\mathbf{\theta}^{i-1}\). Here we can also apply the over-relaxation scheme by substituting \(-\frac{1}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{u}_{q}^{i}\) involved in the expressions of \(\mathbf{\gamma}^{i-1}\) and (38c) with \(-\frac{v^{i}}{\alpha_{Q}}\sum\limits_{q=1}^{Q}2^{q-1}\mathbf{u}_{q}^{i}-\left(1-v^{i}\right)\mathbf{x}^{i-1}\), where \(v^{i}\) is the trainable relaxation parameter. We illustrate the proposed LCRADMMNet in Fig. 3. ## V RDAKF-Based Channel Tracking We have developed two robust detection networks to suppress the performance deterioration caused by the imperfect CSI.
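To make the low-complexity detector concrete, the sketch below implements Algorithm 3 in NumPy, together with the block-sum form of (30). The penalty constants \(\delta\) and \(\kappa\) are hand-picked purely for illustration (LCRADMMNet instead learns layer-wise \(\kappa_{q}^{i}\)), and must respect \(\delta 4^{q-1}-2\alpha_{Q}^{2}\kappa_{q}>0\); the toy channel, error level, and SNR in the usage lines are assumptions.

```python
import numpy as np

def row_cov(Sigma_h, M, K):
    """Eq. (30): the M x M row covariance Sigma_H equals the sum of the K
    diagonal M x M blocks of the MK x MK covariance Sigma_h."""
    return sum(Sigma_h[k*M:(k+1)*M, k*M:(k+1)*M] for k in range(K))

def proj_box(v):
    """The projection P_B: clip Re/Im of each entry onto [-1, 1]."""
    return np.clip(v.real, -1.0, 1.0) + 1j * np.clip(v.imag, -1.0, 1.0)

def lcradmm_detect(y, H_hat, Sigma_H, sigma2, Q=1, iters=20, delta=2.0, kappa=0.01):
    """Sketch of Algorithm 3 (low-complexity robust ADMM detection)."""
    M, K = H_hat.shape
    alpha = np.sqrt(2 * (4**Q - 1) / 3)                  # QAM normalization (assumed)
    Cr_inv = np.linalg.inv(Sigma_H + sigma2 * np.eye(M)) # Gaussian-approx. covariance
    Phi_inv = np.linalg.inv(H_hat.conj().T @ Cr_inv @ H_hat
                            + 0.5 * delta * np.eye(K))   # inverted only once
    b = H_hat.conj().T @ Cr_inv @ y                      # fixed part of gamma
    x = np.zeros(K, complex); theta = np.zeros(K, complex)
    u = [np.zeros(K, complex) for _ in range(Q)]
    for _ in range(iters):
        for q in range(Q):             # eq. (35); u[p] already holds the new value
            omega = alpha * (x + theta) - sum(2**p * u[p] for p in range(Q) if p != q)
            u[q] = proj_box(2**q * delta * omega / (delta * 4**q - 2 * alpha**2 * kappa))
        s = sum(2**q * u[q] for q in range(Q))
        x = Phi_inv @ (b + delta / (2 * alpha) * s - 0.5 * delta * theta)  # eq. (37)
        theta = theta + x - s / alpha                    # eq. (34c)
    return x   # relaxed estimate; slice to the constellation for hard decisions

# toy usage: QPSK, random channel estimate with a small CSI error of variance eps
rng = np.random.default_rng(0)
M, K, sigma2, eps = 8, 4, 0.05, 0.05
crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
H_hat = crandn(M, K)
x_true = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)
y = (H_hat + np.sqrt(eps) * crandn(M, K)) @ x_true + np.sqrt(sigma2) * crandn(M)
x_est = lcradmm_detect(y, H_hat, eps * K * np.eye(M), sigma2)
print(np.allclose(np.sign(x_est.real), np.sign(x_true.real)))  # typically True here
```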
However, the channel estimation errors accumulate over the subsequent data blocks without pilots, leading to degraded detection performance. To mitigate the error propagation, we now present a novel robust data-aided Kalman filter (RDAKF)-based channel tracking method, which incorporates not only the channel estimate acquired from the previous block as prior information but also the robust data estimates of the current block for further updating. The proposed channel tracking method can be described from the perspective of a Kalman filter, which consists of two key stages as detailed in the following. ### _One-step prediction stage_ First, the state equation is utilized for one-step prediction. In this work, the channel variation model (1) can be regarded as the state equation. Then, the channel vector of the \(n\)-th block can be predicted as \[\hat{\mathbf{h}}[n|n-1]=\mathbf{\Lambda}\hat{\mathbf{h}}[n-1|n-1],\quad n=2,3,\cdots,N+1, \tag{39}\] and the corresponding channel error covariance is given by \[\mathbf{\Sigma_{h}}[n|n-1]=\mathbf{\Lambda}\mathbf{\Sigma_{h}}[n-1|n-1]\mathbf{\Lambda}+\bar{\mathbf{\Lambda}}\mathbf{C_{h}}\bar{\mathbf{\Lambda}},\quad n=2,3,\cdots,N+1, \tag{40}\] where \(\hat{\mathbf{h}}[n|n-1]\) and \(\mathbf{\Sigma_{h}}[n|n-1]\) represent the _a priori_ channel estimate of the \(n\)-th block based on the previous estimate of the \((n-1)\)-th block and the corresponding _a priori_ error covariance, respectively, while \(\hat{\mathbf{h}}[n-1|n-1]\) and \(\mathbf{\Sigma_{h}}[n-1|n-1]\) represent the _a posteriori_ channel estimate of the \((n-1)\)-th block obtained by utilizing the observation of the \((n-1)\)-th block and the corresponding _a posteriori_ error covariance, respectively. To guarantee the accuracy of the prediction, \(\hat{\mathbf{h}}[1]\) and \(\mathbf{\Sigma_{h}}[1]\) acquired via (6) and (7) are used for the initialization of \(\hat{\mathbf{h}}[1|1]\) and \(\mathbf{\Sigma_{h}}[1|1]\). ### _Updating stage_ We now refine the one-step prediction with the Kalman updating stage according to the observation equation in (10). Note that the measurement matrix in (10), i.e., \(\mathbf{X}\), is unknown. To remedy this, we propose to reconstruct a measurement matrix using a robust MMSE detector [10], which yields the data estimates as \[\hat{\mathbf{x}}_{t}[n|n-1]=\hat{\mathbf{H}}^{H}[n|n-1]\Big{(}\hat{\mathbf{H}}[n|n-1]\hat{\mathbf{H}}^{H}[n|n-1]+\mathbf{\Sigma_{H}}[n|n-1]+\sigma^{2}\mathbf{I}_{M}\Big{)}^{-1}\mathbf{y}_{t}[n], \tag{41}\] where \(\hat{\mathbf{H}}[n|n-1]\) is the matrix form of \(\hat{\mathbf{h}}[n|n-1]\) and \(\mathbf{\Sigma_{H}}[n|n-1]\) is the row covariance of the channel error \(\mathbf{\Delta H}[n|n-1]=\mathbf{H}[n]-\hat{\mathbf{H}}[n|n-1]\), which can be derived from \(\mathbf{\Sigma_{h}}[n|n-1]\) by (30). Based on (41), the covariance of the data estimation error \(\Delta\mathbf{x}_{t}[n|n-1]=\mathbf{x}_{t}[n]-\hat{\mathbf{x}}_{t}[n|n-1]\) can be expressed as \[\mathbf{\Sigma_{x_{t}}}[n|n-1]=\mathbf{I}_{K}-\hat{\mathbf{H}}^{H}[n|n-1]\Big{(}\hat{\mathbf{H}}[n|n-1]\hat{\mathbf{H}}^{H}[n|n-1]
+\mathbf{\Sigma_{H}}[n|n-1]+\sigma^{2}\mathbf{I}_{M}\Big{)}^{-1}\hat{\mathbf{H}}[n|n-1], \tag{42}\] where \(\mathbf{\Sigma_{x_{t}}}[n|n-1]\) is independent of the index \(t\) and thus can be simply denoted by \(\mathbf{\Sigma_{x}}[n|n-1]\). Using the estimate \(\hat{\mathbf{x}}_{t}[n|n-1]\), the observation equation in (10) can be rewritten as \[\mathbf{y}_{t}[n]=\hat{\mathbf{X}}_{t}[n|n-1]\mathbf{h}[n]+\tilde{\mathbf{z}}_{t}[n], \tag{43}\] where the measurement matrix \(\hat{\mathbf{X}}_{t}[n|n-1]=\hat{\mathbf{x}}_{t}^{T}[n|n-1]\otimes\mathbf{I}_{M}\), and the equivalent noise \(\tilde{\mathbf{z}}_{t}[n]=\Delta\mathbf{X}_{t}^{T}[n|n-1]\mathbf{h}[n]+\mathbf{z}_{t}[n]=\left(\Delta\mathbf{x}_{t}^{T}[n|n-1]\otimes\mathbf{I}_{M}\right)\mathbf{h}[n]+\mathbf{z}_{t}[n]\) is assumed to be zero-mean Gaussian distributed with the covariance given by \[\begin{split}\mathbf{C}_{\tilde{\mathbf{z}}_{t}}[n]&=\mathbb{E}\left\{\tilde{\mathbf{z}}_{t}[n]\tilde{\mathbf{z}}_{t}^{H}[n]\right\}\\ &\overset{(a)}{=}\mathbb{E}\left\{\Delta\mathbf{X}_{t}^{T}[n|n-1]\mathbf{C_{h}}\left(\Delta\mathbf{X}_{t}^{T}[n|n-1]\right)^{H}\right\}+\sigma^{2}\mathbf{I}_{M}\\ &\triangleq\mathbf{\Xi_{t}}[n|n-1]+\sigma^{2}\mathbf{I}_{M},\end{split} \tag{44}\] where (a) is obtained by taking the expectation w.r.t. \(\mathbf{h}[n]\) and \(\mathbf{z}_{t}[n]\) simultaneously. Based on the definition of \(\mathbf{\Sigma_{x}}[n|n-1]=\mathbb{E}\left\{\Delta\mathbf{x}_{t}[n|n-1]\Delta\mathbf{x}_{t}^{H}[n|n-1]\right\}\) and the property of the Kronecker product, the \((i,j)\)-th entry of \(\mathbf{\Xi_{t}}[n|n-1]\) can be derived as (45) at the top of the next page, where \(\mathbf{0}_{m_{1}\times m_{2}}^{i,j}\) denotes the all-zeros matrix of size \(m_{1}\times m_{2}\) except the \((i,j)\)-th entry being \(1\). Similar to \(\mathbf{\Sigma_{x}}[n|n-1]\), the index \(t\) of \(\mathbf{C}_{\tilde{\mathbf{z}}_{t}}[n]\) and \(\mathbf{\Xi_{t}}[n|n-1]\) can also be omitted since (44) and (45) are both independent of \(t\). Fig. 3: Block diagram of LCRADMMNet. Traditional Kalman updating can be readily performed by stacking the \(L\) observation equations over a block into one \(ML\)-dimensional equation. However, the computational complexity to calculate the Kalman gain is up to \(\mathcal{O}\left(M^{3}L^{3}\right)\) per data block. Note that the variables at different time slots can be considered uncorrelated with each other. Therefore, we can update the estimates recursively for \(t=1,2,\cdots,L\) based on the sequential filter method [41] via (46)-(48), where \(\mathbf{K}\) is the Kalman gain matrix and \(\left(\cdot\right)^{(t)}\) stands for the value of the input variable updated at time slot \(t\). \(\hat{\mathbf{h}}^{(0)}[n|n]\) and \(\mathbf{\Sigma}_{\mathbf{h}}^{(0)}[n|n]\) are set to the one-step prediction \(\hat{\mathbf{h}}[n|n-1]\) and \(\mathbf{\Sigma_{h}}[n|n-1]\), respectively.
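The update equations (46)-(48) are referenced above but not reproduced in this extraction; the sketch below therefore assumes they take the standard sequential Kalman form (gain, mean, covariance) for the measurement model (43), and it drops the data-estimation-error term \(\mathbf{\Xi}[n|n-1]\) from (44) for brevity, so it is an approximation of the RDAKF updating stage rather than a faithful implementation.

```python
import numpy as np

def sequential_kalman_block(h, S, Y, X_hats, sigma2):
    """One updating stage of the tracker. Inputs: the one-step prediction
    h = h[n|n-1], S = Sigma_h[n|n-1] from (39)-(40); Y is the M x L received
    block; X_hats[t] = x_hat_t^T kron I_M is the reconstructed measurement
    matrix of (43), built from the robust MMSE data estimates (41)."""
    M = Y.shape[0]; MK = h.size
    for t in range(Y.shape[1]):
        X = X_hats[t]                                   # M x MK
        C_z = sigma2 * np.eye(M)                        # assumed noise covariance
        G = S @ X.conj().T @ np.linalg.inv(X @ S @ X.conj().T + C_z)  # Kalman gain
        h = h + G @ (Y[:, t] - X @ h)                   # a posteriori mean
        S = (np.eye(MK) - G @ X) @ S                    # a posteriori covariance
    return h, S      # -> h[n|n], Sigma_h[n|n] in the notation of Algorithm 4
```

Each time slot inverts only an \(M\times M\) matrix, which reflects how the sequential filter avoids the \(\mathcal{O}(M^{3}L^{3})\) cost of the stacked update.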
After completing the updating process along the \(L\) time slots over a block, we obtain \(\hat{\mathbf{h}}^{(L)}[n|n]\) and \(\mathbf{\Sigma}_{\mathbf{h}}^{(L)}[n|n]\), which are then regarded as the _a posteriori_ channel estimate and the corresponding error covariance, i.e., \(\hat{\mathbf{h}}[n|n]\) and \(\mathbf{\Sigma_{h}}[n|n]\). It can be seen that the high-dimensional Kalman updating operation has been equivalently realized in a sequential manner, resulting in a lower complexity of \(\mathcal{O}\left(M^{3}L\right)\) per data block. The overall procedure of the proposed channel tracking method is summarized in Algorithm 4. Based on this, we acquire the refined channel estimate and the corresponding error covariance, which can be readily utilized by RADMMNet and LCRADMMNet to improve the robust detection performance. The block diagram of the RDAKF-based receiver for the \(n\)-th block is shown in Fig. 4. Fig. 4: Block diagram of the proposed RDAKF-based receiver. ``` 0:\(\mathbf{Y}_{P}\), \(\mathbf{S}_{P}\), \(\mathbf{\Lambda}\), \(\bar{\mathbf{\Lambda}}\), \(\mathbf{C_{h}}\), and \(\mathbf{y}_{t}[n]\) for \(t=1,2,\cdots,L\), \(n=2,\cdots,N+1\). 0:\(\{\hat{\mathbf{h}}[n|n]\}_{n=1}^{N+1}\) and \(\{\mathbf{\Sigma_{h}}[n|n]\}_{n=1}^{N+1}\). 1: Initialize \(\hat{\mathbf{h}}[1|1]\) and \(\mathbf{\Sigma_{h}}[1|1]\) via (6) and (7), respectively, \(n=1\). 2:for\(n=2,\cdots,N+1\)do 3: Predict \(\hat{\mathbf{h}}[n|n-1]\) and \(\mathbf{\Sigma_{h}}[n|n-1]\) via (39) and (40), respectively. 4: Calculate \(\hat{\mathbf{x}}_{t}[n|n-1]\) for \(t=1,2,\cdots,L\) via (41). 5: Calculate \(\mathbf{C}_{\tilde{\mathbf{z}}}[n]\) via (44). 6:for\(t=1,2,\cdots,L\)do 7: Update \(\mathbf{K}^{(t)}[n]\), \(\hat{\mathbf{h}}^{(t)}[n|n]\) and \(\mathbf{\Sigma}_{\mathbf{h}}^{(t)}[n|n]\) via (46)-(48), respectively. 8:endfor 9:\(\hat{\mathbf{h}}[n|n]\leftarrow\hat{\mathbf{h}}^{(L)}[n|n]\), \(\mathbf{\Sigma_{h}}[n|n]\leftarrow\mathbf{\Sigma}_{\mathbf{h}}^{(L)}[n|n]\). 10:endfor 11: Return \(\{\hat{\mathbf{h}}[n|n]\}_{n=1}^{N+1}\) and \(\{\mathbf{\Sigma_{h}}[n|n]\}_{n=1}^{N+1}\). ``` **Algorithm 4** RDAKF-Based Channel Tracking Algorithm ## VI Numerical Results An uplink multiuser MIMO system is considered, where the number of receive antennas at the base station and the number of served users are set to \(M=8\) and \(K=4\), respectively, unless otherwise specified. The spatially correlated channels between the users and the base station are described by the Kronecker model [45], i.e., \(\mathbf{H}=\mathbf{R}_{r}^{1/2}\mathbf{H}_{\mathrm{i.i.d}}\mathbf{R}_{t}^{1/2}\), where \(\mathbf{H}_{\mathrm{i.i.d}}\) is the i.i.d. Rayleigh fading channel matrix, each entry of which follows a Gaussian distribution with zero mean and unit variance, and \(\mathbf{R}_{t}\) and \(\mathbf{R}_{r}\) are the exponential correlation matrices of the users and the receiver, which can be characterized by the correlation coefficients \(\rho_{t}\) and \(\rho_{r}\), respectively. In the simulation, we set \(\rho_{t}=0.3\), \(\rho_{r}=0.7\), and the length of a coherence block \(L=10\) with \(L_{P}=K\). Without loss of generality, the channel temporal correlation coefficients of different users are assumed to be the same, i.e., \(\rho_{1}=\rho_{2}=\cdots=\rho_{K}=\rho\), where a larger \(\rho\) means a slower channel variation. The signal-to-noise ratio (SNR) of the system is defined as \(\mathrm{SNR}=\frac{K}{\sigma^{2}}\).
Regarding the network training, both RADMMNet and LCRADMMNet are trained offline in an end-to-end manner with 10000 training samples and 2000 validation samples for each data block under each SNR point. We propose to employ a two-step training strategy, which includes a pre-training step and a fine-tuning step. The Adam optimizer is applied for the parameter learning, with initial learning rates of 0.01 for pre-training and 0.001 for fine-tuning. The batch sizes for the two steps are set to 500 and 200, respectively, and the number of epochs is set to 200 for both. To prevent overfitting and save training time, we adopt an early-stopping mechanism, which is triggered if the loss on the validation dataset does not decrease for 5 successive epochs. In order to take advantage of supervised learning, we choose the MSE as the loss function for both networks, which is given by

\[\mathcal{L}=\frac{1}{|\mathcal{S}|}\sum_{(\mathbf{x}^{I_{\mathrm{max}}},\mathbf{x})\in\mathcal{S}}\left\|\mathbf{x}^{I_{\mathrm{max}}}-\mathbf{x}\right\|_{2}^{2}, \tag{49}\]

where \(\mathbf{x}^{I_{\mathrm{max}}}\) and \(\mathbf{x}\) denote the network output and the true transmitted vector (also known as the label), respectively, and \(\mathcal{S}\) is the training dataset.

Fig. 4: Block diagram of the proposed RDAKF-based receiver.
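A minimal sketch of this two-step training procedure is given below; the network is a stand-in for RADMMNet or LCRADMMNet, and the data loaders are assumed to yield (received signal, channel estimate, label) tuples, so the names and shapes here are illustrative rather than taken from the paper.

```python
import torch

def train_stage(net, train_loader, val_loader, lr, epochs=200, patience=5):
    """One training stage: pre-training uses lr=0.01, fine-tuning uses lr=0.001."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    mse = torch.nn.MSELoss()                    # supervised loss of Eq. (49)
    best_val, stall = float("inf"), 0
    for _ in range(epochs):
        net.train()
        for y, h_hat, x_true in train_loader:
            opt.zero_grad()
            loss = mse(net(y, h_hat), x_true)
            loss.backward()
            opt.step()
        net.eval()
        with torch.no_grad():
            val = sum(mse(net(y, h), x).item()
                      for y, h, x in val_loader) / len(val_loader)
        if val < best_val:
            best_val, stall = val, 0
        else:
            stall += 1
            if stall >= patience:               # early stopping after 5 stalled epochs
                break
    return net

# net = train_stage(net, pretrain_loader, val_loader, lr=0.01)   # batch size 500
# net = train_stage(net, finetune_loader, val_loader, lr=0.001)  # batch size 200
```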
By minimizing the loss function, the deep unfolding networks are prompted to learn from the data in a supervised fashion, while maintaining the inherent mechanism of the original iterative algorithm. During the online simulation, we generate random testing samples continuously until 1000 symbol errors are collected so as to obtain more reliable SER results. On the other hand, the values of \(L\), \(N\), and other parameters used in the testing stage are set to be identical to those in the training stage, which is reasonable since system parameters such as the frame structure and the number of antennas are generally fixed in a given scenario. Apart from the proposed networks RADMMNet and LCRADMMNet, we also present the performances of the following baselines for comparison: 1) the linear MMSE detector that directly uses imperfect CSI (Mismatched MMSE), 2) the ML detector that directly uses imperfect CSI in (9) (Mismatched ML), 3) the robust linear MMSE detector proposed in [10] (Robust MMSE), 4) the ADMM-based detection network proposed in [34] (ADMM-PSNet), 5) the inexact ADMM-based detection network proposed in [31] (ADMMNet), 6) the orthogonal approximate message passing-based deep unfolding detection network proposed in [16] (OAMPNet), 7) the robust orthogonal approximate message passing algorithm (ROAMP) and its corresponding deep unfolding detection network (ROAMPNet) [15]2, and 8) the robust ML detector in (12) (Robust ML).

Footnote 2: Note that ROAMP and ROAMPNet refer to OAMP and OAMPNet2 in [15], respectively. We rename them to emphasize that they belong to the robust designs and clearly distinguish them from the original algorithms OAMP and OAMPNet [16], which are designed under the perfect CSI assumption.

**Remark 1**: _Note that ADMM-PSNet, ADMMNet, and OAMPNet are all designed under the perfect CSI assumption but trained with imperfect CSI, while the proposed RADMMNet and LCRADMMNet are specially designed for the imperfect CSI case by explicitly incorporating the statistics of the imperfect CSI into the network structure. Consequently, both proposed networks can exhibit stronger robustness in the presence of CSI errors. As for the robust detection network ROAMPNet, it regards the sum of the CSI error and the receiver noise \(\tilde{\mathbf{r}}=\mathbf{\Delta H}\mathbf{x}+\mathbf{z}\) as an equivalent colored Gaussian noise, which is similar to the design principle of LCRADMMNet. However, LCRADMMNet requires a much lower complexity than ROAMPNet while achieving a comparable or even better performance. On the other hand, different from the above two robust detection networks that are developed based on a suboptimal robust ML metric using the Gaussian approximation, RADMMNet is established according to the optimal robust ML metric, which exploits not only the model-driven deep unfolding technique but also the introduced data-driven structure. Therefore, it can achieve better performance than LCRADMMNet and ROAMPNet. The advantages of the proposed networks over the existing DL structures will be validated by the following numerical results._

### _Convergence Property_

Different from iterative algorithms, where the number of iterations can be adjusted adaptively, the number of layers of deep unfolding-based networks needs to be pre-determined. For this purpose, we first investigate the convergence property of the proposed RADMMNet and LCRADMMNet. Fig. 5 plots the symbol error rate (SER) performances of the proposed networks versus the number of layers under different modulation orders, where the corresponding two ADMM-based algorithms, denoted by "RADMM" and "LCRADMM", are also provided for comparison. Note that in the model considered in our work, the channel variation becomes more serious as the data index increases or \(\rho\) gets smaller. Meanwhile, it is intuitive that higher-order modulated signals are more vulnerable to the channel variation, which means that a milder condition is required in the 16QAM case to guarantee an acceptable performance. Therefore, for the QPSK case, we take the fifth data block with SNR = 15 dB and \(\rho=0.99\) for illustration, while for the 16QAM case, we consider the third data block with SNR = 30 dB and \(\rho=0.995\).

Fig. 5: SER performances versus the number of layers under different modulation orders.

It can be observed that both RADMMNet and LCRADMMNet can considerably outperform the corresponding ADMM-based algorithms with faster convergence rates, i.e., within 10 layers, indicating the effectiveness of the designed network and the advantage of the parameters learned via offline training. Meanwhile, we would like to note that, for the same number of layers/iterations, LCRADMMNet requires nearly the same level of complexity as LCRADMM, while RADMMNet can efficiently reduce the complexity of RADMM due to the introduced simplifications. Consequently, in the following simulation, we set the number of layers of both RADMMNet and LCRADMMNet to 10 and do not present the results of RADMM and LCRADMM, to keep the curves uncluttered. We also mention that the number of CG iterations involved in the first layer of RADMMNet, i.e., \(I_{\text{CG}}\), is fixed to 15 empirically, which can reach almost the same SER performance as that with an exact matrix inversion.

### _Performance Evaluation_

In this subsection, we compare the performances of different detectors using the LMMSE channel estimation presented in Section II and the RDAKF-based channel tracking presented in Section IV, respectively. Then, the performances with larger MIMO systems are also provided.

#### VI-B1 LMMSE channel estimation

We first consider the case that the imperfect CSI is acquired by the LMMSE channel estimation.
Fig. 6(a) shows the SER performances of different detection algorithms versus the index of data blocks for SNR = 15 dB and \(\rho=0.99\) under QPSK modulation. Since the channel estimates of all data blocks are obtained using the pilot in the first block, the channel errors accumulate due to the time variation of the channels. As a result, the performances of all the detection algorithms degrade as the index increases, among which even the mismatched ML detector suffers from an unbearable performance loss. In contrast, the robust detectors can effectively compensate for the deterioration caused by the channel aging and achieve satisfactory performance, validating the necessity of the robust design. In particular, the proposed RADMMNet yields a relatively low SER and approaches the optimal robust ML detector. LCRADMMNet performs a little worse than RADMMNet but with lower complexity (see Section VI-C). With the significant performance advantage of RADMMNet and LCRADMMNet over the non-robust detectors, more data blocks can be transmitted with only one pilot block, which can improve the spectrum efficiency. The SER performances versus SNR for the fifth data block are illustrated in Fig. 6(b). As can be seen, the detectors that neglect the imperfection of CSI suffer from high error floors in the high-SNR region. We notice that both ADMM-PSNet and ADMMNet show slightly better performances than the mismatched ML detector. The reason is that they are trained to approach the true transmitted vectors using the imperfect CSI via supervised learning. In this way, the embedded statistical information of the CSI errors can be implicitly utilized, which consequently provides the networks partial robustness against imperfect CSI. On the other hand, by fully incorporating the statistical information of the imperfect CSI into the network design, the proposed RADMMNet exhibits a much better performance. In addition, the lower-complexity LCRADMMNet can also considerably outperform the other baselines, including the existing non-robust detectors and the robust ones.

Fig. 6: SER performances with LMMSE channel estimation for the scenario of (a)(b) \(M=8\), \(K=4\), and \(\rho=0.99\) under QPSK modulation, (c)(d) \(M=8\), \(K=4\), and \(\rho=0.995\) under 16QAM modulation, and (e)(f) \(M=8\), \(K=8\), and \(\rho=0.995\) under QPSK modulation.

Next, we present the SER performances under the 16QAM constellation with \(\rho=0.995\), which corresponds to a lower-mobility scenario. Figs. 6(c) and 6(d) present the SER performances versus the index of data blocks for SNR = 30 dB and those versus SNR for the third data block, respectively. It can be observed that, even with such a slower channel variation, the increasing index leads to a serious performance degradation for each detection algorithm. Hence, in the 16QAM case, the pilot block can only be followed by a few data blocks in a frame to guarantee the performance. Nevertheless, the proposed robust detection networks still perform much better than the non-robust detectors. Besides, as shown in Fig. 6(d), compared with the robust MMSE detector, noticeable performance gains up to 12 dB and 15 dB in the high-SNR region can be achieved by RADMMNet and LCRADMMNet, respectively. Although the performances of LCRADMMNet and ROAMPNet are close, the complexity of LCRADMMNet is much lower than that of ROAMPNet, which will be analyzed in detail in Section VI-C.
To verify the advantages of our proposed networks in the challenging scenario where the ratio of the number of receive antennas to that of transmit antennas equals \(1\), we conduct simulations by setting \(M=K=8\) and \(\rho=0.995\) under QPSK modulation. The SER performances versus the index of data blocks for SNR = 20 dB and those versus SNR for the third data block are plotted in Figs. 6(e) and 6(f), respectively, where the mismatched ML detector and the robust ML detector are not included due to their high complexities. Compared with the previous results with \(M=8\) and \(K=4\) in Figs. 6(a) and 6(b), the performance gains of the proposed RADMMNet and LCRADMMNet over the other baselines, especially the mismatched MMSE detector and the robust MMSE detector, are more distinct. We also note that, in the high-SNR region, where the CSI error is the dominant factor deteriorating the performance, the non-robust detectors and detection networks suffer from an error floor or even an abnormally rising SER, while the proposed robust detection networks can still yield satisfactory performances.

#### VI-B2 RDAKF-based channel tracking

As an improved alternative to the LMMSE channel estimation, the RDAKF-based channel tracking method is evaluated in terms of the NMSE performance of the channel estimates and the SER performance, where the NMSE is defined as \(\frac{\left\|\hat{\mathbf{H}}-\mathbf{H}\right\|_{\text{F}}^{2}}{\left\|\mathbf{H}\right\|_{\text{F}}^{2}}\). Figs. 7(a) and 7(b) compare the NMSE performances of the two CSI acquisition methods for different indices of data blocks under different modulation orders. As expected, the RDAKF-based method can effectively track the channel variation with the aid of the pre-estimated data symbols in both the QPSK and 16QAM cases. For larger indices of data blocks, the estimation performance gap between the RDAKF and the LMMSE-based methods becomes more evident. This is due to the fact that the error propagation caused by the coarse prediction of the LMMSE method gets more severe, which can be suppressed in the updating stage of the RDAKF method. We also notice from Fig. 7(a) that, in the low-SNR region, the inaccurate data estimates may deteriorate the accuracy of the channel estimates, leading to an even poorer NMSE performance of the RDAKF method than that of the LMMSE-based method.

Fig. 7: (a)(b) NMSE performances for different indices of data blocks for the scenario of \(M=8\) and \(K=4\) under different modulation orders, and SER performances with RDAKF channel tracking for the scenario of (c)(d) \(M=8\), \(K=4\), and \(\rho=0.99\) under QPSK modulation and (e)(f) \(M=8\), \(K=4\), and \(\rho=0.995\) under 16QAM modulation.
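For reference, the NMSE used here is straightforward to evaluate numerically; a minimal sketch with placeholder channel matrices:

```python
import numpy as np

def nmse(H_hat, H):
    """NMSE = ||H_hat - H||_F^2 / ||H||_F^2."""
    return np.linalg.norm(H_hat - H, "fro") ** 2 / np.linalg.norm(H, "fro") ** 2

# Example with an artificial 10%-error estimate (illustrative values only):
H = np.random.default_rng(1).standard_normal((8, 4))
H_hat = H + 0.1 * np.random.default_rng(2).standard_normal((8, 4))
print(10 * np.log10(nmse(H_hat, H)))  # NMSE in dB
```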
The SER performances versus the index of data blocks are plotted in Fig. 7(c) with the same simulation settings as in Fig. 6(a). Based on the refined CSI, all the detection algorithms enjoy a performance improvement compared with the results in Fig. 6(a). Therefore, a frame structure containing more data blocks can be supported with the RDAKF-based channel tracking method. However, there is almost no advantage of the robust MMSE-based detector over the mismatched MMSE-based one, indicating the limitation of a linear robust design when the channel estimate is relatively accurate. Besides, the performance gaps between the non-robust detection networks and LCRADMMNet are narrowed with the reduced channel error. Fig. 7(d) shows the performances versus SNR for the fifth data block. High error floors can be observed for the non-robust detectors as well as the robust MMSE detector, since they are unable to make good use of the relatively small CSI uncertainty. In contrast, RADMMNet still shows a comparable performance to the robust ML detector by fully exploiting the statistics of the CSI errors. Similar trends can be found in the 16QAM case, as shown in Figs. 7(e) and 7(f), where the simulation settings are the same as in Figs. 6(c) and 6(d), respectively. The proposed RADMMNet maintains its superiority over the other baselines, demonstrating its stronger robustness against imperfect CSI. However, the SER performance gains obtained by utilizing the RDAKF-based channel tracking method are less obvious than those in the QPSK case, albeit with a similar level of improvement in the NMSE performance, as illustrated in Fig. 7(b). This is because the detection of 16QAM signals requires more accurate channel estimates. In addition, the performance gap from the robust ML detector is more evident than in the QPSK case, since 16QAM signals are more susceptible to CSI errors. Hence, we conclude that the proposed RDAKF-based method is more applicable to QPSK modulation. The results for the more challenging scenario of \(M=K=8\), corresponding to the simulation settings in Figs. 6(e) and 6(f), also exhibit a similar behavior with improved performances benefiting from the RDAKF method, which are not presented here due to the space limitation.

#### VI-B3 Large-scale MIMO

We now investigate the SER performances for MIMO systems of larger sizes. Since a larger array can effectively make up for the performance loss caused by the imperfect CSI, it can be inferred that a faster channel variation can be well handled with more receive antennas. To validate this, we test the detection algorithms under higher-speed scenarios by setting smaller \(\rho\)'s for larger-scale MIMO systems, i.e., \(\rho=0.98\) for \(M=32\) and \(K=8\), and \(\rho=0.95\) for \(M=128\) and \(K=16\) in Figs. 8(a) and 8(b), respectively. The results of the mismatched ML detector, the robust ML detector, and RADMMNet are not presented due to their unaffordable complexities. It can be observed that the proposed LCRADMMNet clearly outperforms most benchmark schemes, and even the MMSE detector with larger \(\rho\)'s. Additionally, LCRADMMNet can yield a slightly better performance than ROAMPNet while with a much lower complexity, which will be illustrated in the next subsection. On the other hand, the performance gains brought by the RDAKF-based channel tracking are conspicuous, especially for the scenario of \(M=128\) and \(K=16\), thanks to the large antenna array gain that results in a high probability of obtaining correct pre-estimated symbols.

Fig. 8: SER performances versus SNR for the fifth data block under QPSK modulation in large-scale MIMO scenarios.

### _Complexity Analysis_

The computational complexities of different detectors are analyzed in Table II. For the proposed RADMMNet, the major computations lie in the CG iterations and the matrix inversion3, which leads to a complexity of \(\mathcal{O}\left(\left(I_{\mathrm{CG}}+I_{\mathrm{max}}-1\right)M^{2}K^{2}+I_{\mathrm{max}}K^{3}\right)\), while the other proposed network LCRADMMNet only needs to calculate one \(M\)-dimensional matrix inversion, one \(K\)-dimensional matrix inversion, and a few matrix multiplications, with a total complexity of \(\mathcal{O}\left(M^{3}+K^{3}+I_{\mathrm{max}}K^{2}\right)\). The robust detection network ROAMPNet, as one of the baselines, requires a higher complexity of \(\mathcal{O}\left(I_{\mathrm{max}}M^{3}\right)\). On the other hand, the
complexities of the mismatched ML detector and the robust ML detector are \(\mathcal{O}\left(4^{QK}K^{3}\right)\) and \(\mathcal{O}\left(4^{QK}M^{3}K^{3}\right)\), respectively, which can be prohibitively high for a high modulation order or a large number of antennas. The average CPU time requirements of different detectors under different scenarios are also provided for a more intuitive comparison, and they show a trend consistent with our analysis. In particular, the low-complexity design LCRADMMNet requires nearly an order of magnitude less CPU time than RADMMNet. Concerning the SER performances presented before, although the non-robust detection networks may require less complexity than the proposed networks, they suffer from serious performance degradation. Meanwhile, it can be found that RADMMNet can achieve comparable performance to the optimal robust ML detection within only a few layers, while LCRADMMNet can always outperform ROAMPNet with much less complexity. Hence, we conclude that both proposed networks achieve attractive tradeoffs between performance and complexity. Regarding the two considered channel acquisition methods, the complexity of the LMMSE channel estimation is \(\mathcal{O}\left(M^{3}K^{3}\right)\), while the proposed RDAKF-based channel tracking method, with much better NMSE performance, additionally requires a complexity of \(\mathcal{O}\left(M^{3}L\right)\) per data block, due to the matrix inversion involved in the calculation of the Kalman gain.

## VII Conclusion

This paper investigated the statistically robust detector design for MIMO systems by taking into account both channel estimation error and channel variation. We first derived an ADMM-based robust detection algorithm, which admits closed-form expressions in each iteration. Then, by deep unfolding the ADMM iterations and introducing some data-driven structures, we advocated a robust detection network, RADMMNet, with trainable parameters. Furthermore, by adopting a Gaussian approximation for the CSI error, a low-complexity robust MIMO detector was further developed, along with the corresponding deep unfolding network LCRADMMNet. In addition, as a complementary way to combat the channel variation and enhance the detection performance, we also presented a Kalman-filter-based channel tracking method that fully exploits the pre-estimated data symbols. Simulation results confirmed that the two proposed networks can considerably outperform the non-robust detectors and even approach the optimal robust ML detector with much lower complexities, under both the LMMSE channel estimation and the proposed RDAKF-based channel tracking method.
2305.04208
Segmentation and Vascular Vectorization for Coronary Artery by Geometry-based Cascaded Neural Network
Segmentation of the coronary artery is an important task for the quantitative analysis of coronary computed tomography angiography (CCTA) images and is being stimulated by the field of deep learning. However, the complex structures with tiny and narrow branches of the coronary artery bring it a great challenge. Coupled with the medical image limitations of low resolution and poor contrast, fragmentations of segmented vessels frequently occur in the prediction. Therefore, a geometry-based cascaded segmentation method is proposed for the coronary artery, which has the following innovations: 1) Integrating geometric deformation networks, we design a cascaded network for segmenting the coronary artery and vectorizing results. The generated meshes of the coronary artery are continuous and accurate for twisted and sophisticated coronary artery structures, without fragmentations. 2) Different from mesh annotations generated by the traditional marching cube method from voxel-based labels, a finer vectorized mesh of the coronary artery is reconstructed with the regularized morphology. The novel mesh annotation benefits the geometry-based segmentation network, avoiding bifurcation adhesion and point cloud dispersion in intricate branches. 3) A dataset named CCA-200 is collected, consisting of 200 CCTA images with coronary artery disease. The ground truths of 200 cases are coronary internal diameter annotations by professional radiologists. Extensive experiments verify our method on our collected dataset CCA-200 and public ASOCA dataset, with a Dice of 0.778 on CCA-200 and 0.895 on ASOCA, showing superior results. Especially, our geometry-based model generates an accurate, intact and smooth coronary artery, devoid of any fragmentations of segmented vessels.
Xiaoyu Yang, Lijian Xu, Simon Yu, Qing Xia, Hongsheng Li, Shaoting Zhang
2023-05-07T07:26:41Z
http://arxiv.org/abs/2305.04208v1
Segmentation and Vascular Vectorization for Coronary Artery by Geometry-based Cascaded Neural Network

###### Abstract

Segmentation of the coronary artery is an important task for the quantitative analysis of coronary computed tomography angiography (CCTA) images and is being stimulated by the field of deep learning. However, the complex structures with tiny and narrow branches of the coronary artery bring it a great challenge. Coupled with the medical image limitations of low resolution and poor contrast, fragmentations of segmented vessels frequently occur in the prediction. Therefore, a geometry-based cascaded segmentation method is proposed for the coronary artery, which has the following innovations: 1) Integrating geometric deformation networks, we design a cascaded network for segmenting the coronary artery and vectorizing results. The generated meshes of the coronary artery are continuous and accurate for twisted and sophisticated coronary artery structures, without fragmentations. 2) Different from mesh annotations generated by the traditional marching cube method from voxel-based labels, a finer vectorized mesh of the coronary artery is reconstructed with the regularized morphology. The novel mesh annotation benefits the geometry-based segmentation network, avoiding bifurcation adhesion and point cloud dispersion in intricate branches. 3) A dataset named CCA-200 is collected, consisting of 200 CCTA images with coronary artery disease. The ground truths of 200 cases are coronary internal diameter annotations by professional radiologists. Extensive experiments verify our method on our collected dataset CCA-200 and public ASOCA dataset, with a Dice of 0.778 on CCA-200 and 0.895 on ASOCA, showing superior results. Especially, our geometry-based model generates an accurate, intact and smooth coronary artery, devoid of any fragmentations of segmented vessels.

Keywords: Segmentation, Coronary Artery, Geometry-based, Mesh Annotation

## 1 Introduction

Knowledge of the coronary artery anatomy is a prerequisite for many clinical applications. The segmentation and vascular vectorization in coronary computed tomography angiography (CCTA) images can be very valuable for the analysis of the anatomy and functions of the coronary artery. With the modeling of the coronary artery, doctors can quickly and accurately locate, assess and diagnose plaques and stenoses in the blood vessels. Beyond diagnosis, coronary segmentation can also inform the navigation and planning of cardiac interventions by determining the optimal catheter path, stent location and size, among other information, which can improve the safety and efficiency of the procedure. In this context, the automatic segmentation of the coronary artery is of great importance in clinics.

However, automating the segmentation of the coronary artery remains arduous. The coronary artery has a unique tree structure with thin and narrow branches that vary greatly. Distal branches are too slender to be segmented precisely, especially when other blood vessels interfere. Moreover, the sparsity and anisotropy of CCTA images result in most segmentation methods being voxel-based. The reconstructed mesh from the voxel-based segmentation mask is rough, with a noticeable lattice shape. Furthermore, CCTA images have limitations such as low resolution and poor contrast, which make the coronary artery segmentation more difficult.
Currently, deep learning methods have been widely employed in coronary artery segmentation [1, 2, 3, 4, 5, 6, 7, 8, 9], mainly generating voxel-based masks based on the Unet architecture. Nonetheless, automatic segmentation that preserves the integrity and continuity of the coronary artery remains challenging due to the common fragmentation of the segmented vessels. Meanwhile, mesh-deformation-based methods have been increasingly drawing the attention of the community. Nevertheless, they only focus on large and regular organs, such as the liver and hippocampus. For the coronary artery, with its intricate structures and narrow branches, it is hard to go directly from voxel-based segmentation results to a mesh. To tackle the aforementioned problems, a new workflow is first designed to produce realistic vectorized mesh annotations from voxel-based coronary artery labels. Subsequently, the generated vectorized meshes are utilized as training annotations in the following neural networks, providing a more accurate, intact and smooth morphology of the coronary artery, especially at the stenosis regions. Furthermore, a novel cascaded mesh segmentation network is presented, whose generated vectorized coronary artery mesh is more integrated compared to voxel-based segmentation results. The resulting coronary artery meshes are smoother, with plentiful details, particularly in tiny and narrow branches. The point cloud from the coronary artery mesh can be directly used in diagnosis, skipping the step of reconstruction from voxel-based segmentation results. Finally, extensive experiments demonstrate the robustness and feasibility of our method.

## 2 Related Work

### Coronary Artery Segmentation

Traditional methods for coronary artery segmentation are mainly divided into two categories: region growing [10, 11, 12, 13, 14, 15, 16] and partitioning methods [17, 18]. Region growing iteratively adds similar neighboring voxels so that each final region encompasses a single class. It mainly includes level set methods [10, 12], snake models [11] and tracking methods [13, 14]. However, region growing relies on several flexible parameters, which are difficult to determine in specific cases. Partitioning methods group regions with similar properties together, including preserving the coronary artery as a separate region. The main method used for partitioning is clustering, where the Hessian matrix is usually used to enhance the image. However, the segmentation results of the coronary artery are not precise, lacking smoothness and details of the shape.

Recently, deep learning has shown its feasibility for coronary artery segmentation with excellent performance, surpassing traditional algorithms in terms of accuracy. Meanwhile, most of the current methods [1, 2, 3, 4, 5, 6, 7, 8, 9] perform voxel-based segmentation and achieve improvements based on the Unet. 3D-FFR-Unet [1] proposes integrating dense convolutional blocks to achieve effective feature extraction and fusion, improving the segmentation accuracy of the coronary artery. TETRIS [2] proposes a template transformer network to improve the segmentation performance of the coronary artery, where a shape template is deformed to match the underlying structure of interest through a trained spatial transformer network. FFNet [3] fuses spatio-temporal features, which are extracted by the Unet, to improve the segmentation results.
PDS [4] achieves coronary artery segmentation by leveraging contextual anatomical information and vascular topologies through their proposed SAD module and HTL module. TreeConvGRU [19] designs a tree-structured convolutional gated recurrent unit (ConvGRU) model to learn the anatomical structure of the coronary artery. Besides, the centerline is a crucial facilitator for the segmentation of the coronary artery. Along centerlines, a GCN predicts the radii to obtain the coronary artery mesh [20, 21]. Similarly, WHD [22] uses the centerline to separately segment the inner lumen and outer vessel wall with a contour-regularized weighted Hausdorff distance loss. TreeConvGRU [19] traverses the entire coronary artery tree through the centerline.

### Mesh Segmentation Network

Instead of traditional voxel-based segmentation, more studies are concentrating on integrating mesh deformation neural networks into segmentation tasks. SAN [23] explicitly incorporates 3D geometry into classical 3D FCNs for better liver segmentation. The 3D point cloud is projected from voxel-based extracted image features and deformed via a GCN-based shape-aware network for segmentation. Similarly, Voxel2Mesh [24] extends Pixel2Mesh [25] to 3D images for segmentation tasks of the liver, synaptic junction, and hippocampus. MSMR [26] applies mesh segmentation to the lumen of aortic dissection (AD), which has an explicit tubular structure. The AD morphology constrains the initial mesh and guides the deformation, which improves the efficiency of the deep network and avoids down-sampling. GMB [27] exploits PointNet to refine voxel-based coronary artery segmentation results by removing irrelevant vessels, where the point cloud and voxel-based segmentation results are converted into each other. However, current methods integrating mesh deformation networks are limited to big organs with regular shapes, such as the liver. The coronary artery has an explicit tree structure with tiny and narrow branches, on which current graph neural networks can hardly perform such complex mesh deformations. It is a great challenge to achieve vectorized segmentation of the coronary artery.

## 3 Methodology

In this section, a new method of generating elaborate mesh annotation is first introduced for geometrical regularization of the coronary artery segmentation. Then, we concentrate on the proposed geometry-based cascaded segmentation for the coronary artery.

### Fine Mesh Annotation for Geometrical Regularization

```
Input:  Voxel annotation of the coronary artery \(L\).
Output: Mesh annotation of the coronary artery \(M\).
1:  Obtain key points \(P\) of the coronary artery by skeletonizing the voxel annotation \(L\);
2:  Acquire key points \(P_{K}\) of each branch \(K\) by splitting the coronary artery;
3:  for each coronary artery branch \(K\) do
4:    Simulate the centerline of the coronary artery branch through key points \(P_{K}\) by B-spline;
5:    for each \(P_{Ki}\) do
6:      Compute the tangential direction of the key point \(P_{Ki}\);
7:      Sample rays at the cross-section of the key point \(P_{Ki}\);
8:      Calculate the intersection between each ray and the voxel annotation \(L\) at the coronary artery boundary;
9:      Smooth the radii from the key point \(P_{Ki}\) to the intersections;
10:     Generate the cross-sectional boundary \(M_{Ki}\) of the vectorized mesh of the coronary artery;
11:   end for
12:   Derive the vectorized mesh of each branch \(M_{K}\) by connecting adjacent cross-sectional boundaries \(M_{Ki}\);
13:   Smooth the vectorized mesh of each branch \(M_{K}\) along the centerline \(P_{K}\);
14: end for
15: Generate the complete vectorized mesh annotation \(M\) of the coronary artery by merging each branch \(M_{K}\).
```
**Algorithm 1** Framework of Generating Elaborate Mesh Annotation for Coronary Artery.

The framework of our method for generating the fine mesh annotation of the coronary artery is shown in Algorithm 1, consisting of three main processes: skeletonization, reconstruction and integration. Through skeletonization, key points of the coronary artery tree are extracted and split into individual branches. Using these key points and the coronary artery annotation, each branch is reconstructed with a smooth surface. When dealing with the intricate multi-forks of the coronary artery, the individual branches are integrated to form a more realistic vessel shape. The above steps are described in detail below.

In terms of skeletonization, the Deep Reinforced Tree-Traversal Agent (DRT) [28], which is our preliminary work, is employed to extract the key points of the coronary artery and establish the tree structure. Given the key points of the coronary artery tree, the line connecting the head point and each branch endpoint of the coronary artery is considered a centerline of the coronary artery branch. The centerline of each branch is interpolated with a cubic B-spline curve, and key points are sampled every 0.2 mm. Then, reconstruction is applied with the key points of each coronary artery branch. The tangent at each key point is calculated and serves as the normal vector to form the cross-section of the coronary artery. At each cross-section, rays are sampled every \(15^{\circ}\) in a counterclockwise direction from the key point and intersect with the voxel annotation to form a mesh layer of the coronary artery boundary. However, owing to the sparsity of the voxel annotation, which consists of discrete voxels, the sampled boundary of the coronary artery in the cross-section is rough rather than smooth. In order to restore the original morphology of the coronary artery as faithfully as possible, a 1-D Gaussian filter is applied to smooth the radii from the key point \(P_{Ki}\) to every boundary point \(A^{j}_{Ki}\), where \(j\) denotes the angle of the ray. Following the formation of smooth coronary artery boundaries in each cross-section, the boundary points of two adjacent cross-sections form triangular patches to compose the coronary artery mesh. Besides the smoothness of the cross-sectional boundary, the vectorized mesh of the coronary artery needs to be flattened along the centerline, ensuring contextual smoothness. Finally, with the coronary artery mesh of each branch, a mesh boolean union operation is implemented to merge the branches and obtain the complete coronary artery mesh. Unlike prior methods, we reconstruct each branch mesh and merge them separately, rather than generating the entire coronary artery mesh at once. This avoids the intricate modeling and massive computation of the coronary artery forks, particularly at trifurcations. Furthermore, since each branch of a bifurcation contains the same trunk, the transition is smoother and closer to the real coronary artery vessel.
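To make the reconstruction step concrete, the sketch below samples one smoothed cross-sectional boundary as described above: 24 rays (one every \(15^{\circ}\)) are cast in the plane orthogonal to the centerline tangent, intersected with the binary voxel annotation, and the resulting radii are smoothed with a 1-D Gaussian filter. The frame construction and the ray-marching step size are our own assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def cross_section_boundary(label, p, tangent, n_rays=24, r_max=10.0, step=0.1, sigma=1.0):
    """Sample one smoothed boundary ring at key point p (all lengths in voxel units)."""
    t = tangent / np.linalg.norm(tangent)
    # Build an orthonormal basis (u, v) of the cross-sectional plane.
    a = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(t, a); u /= np.linalg.norm(u)
    v = np.cross(t, u)
    angles = np.arange(n_rays) * (2.0 * np.pi / n_rays)   # one ray every 15 degrees
    radii = []
    for ang in angles:
        d = np.cos(ang) * u + np.sin(ang) * v
        r = 0.0
        while r < r_max:                                  # march until leaving the mask
            q = np.round(p + (r + step) * d).astype(int)
            if np.any(q < 0) or np.any(q >= label.shape) or not label[tuple(q)]:
                break
            r += step
        radii.append(r)
    # 1-D Gaussian smoothing of the radii; "wrap" keeps the ring circularly smooth.
    radii = gaussian_filter1d(np.asarray(radii), sigma=sigma, mode="wrap")
    return np.stack([p + r * (np.cos(ang) * u + np.sin(ang) * v)
                     for r, ang in zip(radii, angles)])
```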
Generated vectorized mesh annotations are shown in Fig. 1. The left shows results reconstructed from voxel-based segmentation labels using the marching cube method, and the right shows the mesh masks generated by our algorithm. Our approach is capable of reconstructing a smooth coronary artery surface with abundant details of tiny and narrow branches. Moreover, it generalizes to various complex coronary artery structures, such as trifurcations and even four-forks. The transition at the junction of the multi-forks is natural and realistic. For vessels that are compressed at plaques, our reconstructions are also closer to reality, conserving the tubular morphology of the coronary artery.

Figure 1: Results of generated vectorized mesh annotation.

### Geometry-based Cascaded Segmentation Network

Aiming at generating the vectorized mesh of the coronary artery directly, a geometry-based cascaded neural network is presented as shown in Fig. 2, consisting of two steps: mesh deformation and refinement. At stage I, given a cropped 3D patch of the CCTA images \(\mathbf{X}\in\mathbb{R}^{L\times H\times W}\), a classical U-shape neural network is trained to extract image features of the coronary artery under the framework of voxel-based segmentation. Guided by the projected image features of the U-shape network, a graph convolutional network (GCN) is applied to deform the mesh, achieving the vectorization of the segmentation results. The U-shape network and the GCN are trained together. At stage II, the previous U-shape network is fixed and applied to extract image features without training. The coarse mesh of the coronary artery is input into a new GCN without unpooling, cascading the two steps and generating the fine mesh of the coronary artery. The details are as follows.

**Graph Convolutional Network**: A sphere mesh \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) with 162 vertices and 480 edges is initialized as the input of the GCN, where \(\mathcal{V}\) denotes the set of vertices and \(\mathcal{E}\) represents the set of edges. The mesh with \(N\) vertices \(v_{i}\in\mathcal{V}\) in the GCN has its adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\) and diagonal degree matrix \(\mathbf{\hat{D}}\) with \(\mathbf{\hat{D}}_{ii}=\sum_{j}\mathbf{\hat{A}}_{ij}\), where \(\mathbf{\hat{A}}=\mathbf{A}+\mathbf{I}\). The graph convolution is executed as Eq. 1:

\[\mathbf{V}^{\prime}=\mathbf{\hat{D}}^{-1/2}\mathbf{\hat{A}}\mathbf{\hat{D}}^{-1/2}\mathbf{V}\mathbf{\Theta} \tag{1}\]

where \(\mathbf{\Theta}\) represents the parameters of the neural network and \(\mathbf{V}\in\mathbb{R}^{N\times C}\) symbolizes the \(C\)-dimensional feature vectors of the nodes \(v_{i}\). In addition, a residual block is applied to predict the deformation of the mesh instead of predicting the vertex locations of the target mesh directly, which reduces the difficulty of training. Furthermore, the initial sphere is easily deformed but lacks enough details of the coronary artery. Graph unpooling is implemented in our GCN at stage I, dividing one triangular face into four parts along the midpoints of its sides and assigning the mean feature vector of an edge to the node at its midpoint. It supplements more vertices and edges, retouching the mesh of the coronary artery. The LNS [24] strategy is performed to project the extracted image features into the mesh space.
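A minimal PyTorch sketch of the graph convolution in Eq. (1) is given below, operating on a single (unbatched) mesh; the residual wrapper mirrors the residual deformation block mentioned above, and the layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """V' = D^{-1/2} (A + I) D^{-1/2} V Theta, as in Eq. (1)."""
    def __init__(self, adj, c_in, c_out):
        super().__init__()
        A_hat = adj + torch.eye(adj.size(0))          # add self-loops
        d = A_hat.sum(dim=1)
        D_inv_sqrt = torch.diag(d.pow(-0.5))
        self.register_buffer("A_norm", D_inv_sqrt @ A_hat @ D_inv_sqrt)
        self.theta = nn.Linear(c_in, c_out, bias=False)

    def forward(self, V):                             # V: (N, C) node features
        return self.A_norm @ self.theta(V)

class ResGCNBlock(nn.Module):
    """Residual block predicting a feature update rather than absolute values."""
    def __init__(self, adj, c):
        super().__init__()
        self.conv = GraphConv(adj, c, c)

    def forward(self, V):
        return V + torch.relu(self.conv(V))
```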
**Optimization of Segmentation Network**: To jointly train the U-shape neural network and the GCN, various loss functions are adopted to optimize them. First, an image loss mainly drives the U-shape network under the voxel-based segmentation framework, consisting of a SoftDice loss and a cross-entropy loss. Second, a mesh loss optimizes the GCN, including a chamfer distance loss, Laplacian smoothing, a normal consistency loss and an edge loss. The chamfer distance dominates the optimization of the GCN; it measures the distance between the predicted and ground-truth point clouds as Eq. 2, guiding the deformation of the mesh.

\[\begin{split}\mathcal{L}_{CD}\left(\mathcal{V}_{1},\mathcal{V}_{2}\right)&=\frac{1}{|\mathcal{V}_{1}|}\sum_{x\in\mathcal{V}_{1}}\min_{y\in\mathcal{V}_{2}}\|x-y\|_{2}^{2}\\ &+\frac{1}{|\mathcal{V}_{2}|}\sum_{y\in\mathcal{V}_{2}}\min_{x\in\mathcal{V}_{1}}\|x-y\|_{2}^{2}\end{split} \tag{2}\]

Laplacian smoothing (Lap) and the normal consistency loss (NC) are utilized to regularize the smoothness of the mesh. Laplacian smoothing \(\mathcal{L}_{Lap}\) computes the uniform weights of all edges connected at a vertex. The normal consistency loss computes the angle between the normals \(n_{0}\) and \(n_{1}\) of each pair of neighboring faces, as Eq. 3.

\[\mathcal{L}_{NC}=\sum_{e\in\mathcal{E}}1-\cos\left(n_{0},n_{1}\right) \tag{3}\]

Besides, the edge loss \(\mathcal{L}_{EG}\) penalizes the length of each edge, avoiding outlier vertices. In summary, the total loss of the GCN is shown in Eq. 4, where \(\lambda_{1-4}\) represents the weight of each loss.

\[\mathcal{L}_{GCN}=\lambda_{1}\mathcal{L}_{CD}+\lambda_{2}\mathcal{L}_{Lap}+\lambda_{3}\mathcal{L}_{NC}+\lambda_{4}\mathcal{L}_{EG} \tag{4}\]

Figure 2: Our geometry-based cascaded segmentation network for generating the mesh of the coronary artery.
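One possible assembly of the total mesh loss in Eq. (4) with PyTorch3D is sketched below; the weights \(\lambda_{1-4}\) are placeholders, since their values are not stated here, and the use of PyTorch3D rather than the authors' own implementation is our assumption.

```python
import torch
from pytorch3d.loss import (chamfer_distance, mesh_edge_loss,
                            mesh_laplacian_smoothing, mesh_normal_consistency)

def gcn_loss(pred_meshes, gt_points, lambdas=(1.0, 0.1, 0.1, 0.1)):
    """Total GCN loss of Eq. (4) on a batch of predicted Meshes."""
    l1, l2, l3, l4 = lambdas
    pred_points = pred_meshes.verts_padded()            # (B, N, 3) predicted vertices
    l_cd, _ = chamfer_distance(pred_points, gt_points)  # Eq. (2)
    l_lap = mesh_laplacian_smoothing(pred_meshes, method="uniform")
    l_nc = mesh_normal_consistency(pred_meshes)         # Eq. (3)
    l_eg = mesh_edge_loss(pred_meshes)
    return l1 * l_cd + l2 * l_lap + l3 * l_nc + l4 * l_eg
```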
**Regularization of the GCN Training**: The intricate structure of the coronary artery presents a great challenge for the neural network, and it is hard for the GCN to learn such complicated morphology. Therefore, each cropped coronary artery mesh is classified into one of two categories: tube and bifurcation. Compared with twisted, irregular and multi-fork coronary artery meshes, tubes and bifurcations have simpler morphology, which is more straightforward for the neural network to learn. Hence, morphological regularization is presented to regularize each cropped mesh into a tube or a bifurcation. Through morphological regularization, the geometry-based neural networks can learn the geometrical features of the coronary artery more precisely.

## 4 Experiments

In this section, the datasets and evaluation metrics are first introduced. Then, the improvement brought by the vectorized mesh annotation is validated through ablation experiments. Finally, two datasets are used to extensively demonstrate the robustness and feasibility of the coronary artery segmentation results generated by our model.

### Implementation Details

Our proposed method is evaluated on a public coronary artery dataset, ASOCA, and a collected dataset, CCA-200. **1. ASOCA** [29, 30]: The ASOCA dataset contains 40 training cases and 20 testing cases, and 30 of these patients report having coronary artery disease. The collected images have an anisotropic resolution, with an in-plane resolution of 0.3-0.4 mm and an out-of-plane resolution of 0.625 mm. **2. CCA-200**: 200 cases with coronary artery disease are collected into a dataset named CCA-200. To demonstrate the robustness of our model on small-scale data, comparative experiments are designed: 20 cases are used for training, and 180 cases for testing. The collected images are acquired with an isotropic resolution of 0.5 mm. The ground truths of the 200 cases are coronary artery internal diameter annotations labeled by four radiologists.

The evaluation consists of various metrics, including Dice, Hausdorff distance (HD), average symmetric surface distance (ASSD), chamfer distance (CD), Smooth and our proposed Number of Segments (NoS). Dice assesses the overlap between the predicted results and the ground truth. HD, ASSD and CD measure the geometrical morphology of the generated results. Smooth is determined by calculating the normal consistency of adjacent faces in the reconstructed mesh, revealing the smoothness and flatness of the results. Furthermore, to highlight the fragmentation problem encountered by voxel/pixel-based methods in the segmentation of the coronary artery, the metric Number of Segments (NoS) is proposed to count the number of connected vessel components, assessing the integrity and continuity of the coronary artery. We run all the experiments on an NVIDIA A100 (80GB) GPU with PyTorch 2.0. The Adam optimizer is used to optimize the network with an initial learning rate of 0.001.
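A minimal sketch of the NoS metric on a binary 3-D prediction is given below; the use of 26-connectivity is our assumption, as the connectivity rule is not stated here.

```python
import numpy as np
from scipy.ndimage import label

def number_of_segments(pred_mask):
    """NoS: number of connected vessel components in a binary 3-D mask."""
    structure = np.ones((3, 3, 3), dtype=int)     # 26-connectivity
    _, n_components = label(pred_mask, structure=structure)
    return n_components

# An intact prediction of the left and right coronary trees should give NoS == 2.
```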
### Ablation Experiments

Ablation experiments are performed on our collected dataset to demonstrate the feasibility of the generated vectorized mesh annotation and the improvement it brings to the geometry-based coronary artery segmentation network. For comparison, the traditional marching cube method, which usually appears in other geometry-based segmentation methods such as liver segmentation, is utilized to produce mesh annotations. The generated coronary artery point clouds are exhibited in Fig. 3, clearly presenting the internal structure of the predicted mesh. Using our refined centerline-based annotation, the geometry-based segmentation network is able to outline the boundary of the coronary artery without dispersion of the point cloud. As shown at 1, the trifurcation of the coronary artery in Fig. 3, a clear and natural intersection is formed, devoid of the diffusion of points. As for the tiny and narrow branch ends at 2, the geometry-based segmentation network trained by our vectorized mesh annotation does not induce point dispersion as the marching cube annotations do. Besides, the adhesion effect is particularly pronounced at closely adjacent branches at 2 when using marching cube mesh annotations, whereas segmentation results trained with our centerline-based annotations clearly maintain the morphology of each branch. In CCTA images with low image resolution and poor contrast, our centerline-based approach can still generate a refined mesh with compact and clear coronary artery boundaries, whereas the marching cube synthesizes blurred and sticky mesh annotations at multi-forks and tiny branches, especially in complicated coronary artery structures.

Moreover, quantitative evaluation verifies the effect of our centerline-based mesh annotation on improving geometry-based coronary artery segmentation. Points in the generated coronary artery point cloud less than 0.5 mm from the voxel-based coronary artery annotation are considered hits, and the remaining points are considered misclassified. By counting the number of hits, the point cloud hit ratio is calculated: the higher it is, the more accurate the segmented coronary artery point cloud. Compared with the traditional marching cube (MC) method for generating coarse mesh annotations, our model achieves a precision of 0.96, a recall of 0.85, an F1 of 0.88 and an accuracy of 0.85, surpassing the MC method with a precision of 0.92, a recall of 0.85, an F1 of 0.8 and an accuracy of 0.76. This shows that our fine centerline-based mesh annotations can significantly improve the segmentation results for the complicated coronary artery.

Figure 3: Comparison Results of Ablation Experiments.

### Overall Evaluation

In this part, three main representative types of coronary artery segmentation methods are compared in comprehensive experiments on our collected CCA-200 dataset and the ASOCA dataset: 2D pixel-based, 3D voxel-based and geometry-based segmentation methods. Intuitively, Fig. 4 presents the coronary artery segmentation results of the different methods on our collected CCA-200 dataset and the public ASOCA dataset. It can be seen in Fig. 4 that fragmentations of segmented vessels frequently occur in voxel-based segmentation, especially for coronary arteries with complicated and twisted structures, such as those in our collected CCA-200. Conversely, our geometry-based method preserves the complete and elaborate coronary artery, elegantly avoiding fragmentation owing to the geometry-based segmentation network. Moreover, the tiny and narrow branches of the coronary artery are delineated more accurately and precisely, overcoming the limitations of sparsity and the low resolution of CCTA images. Besides, with the vectorization, the overall segmentation results of the coronary artery conserve the smoothness of the vessel, yielding a more realistic morphology.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & Types & Dice & HD & NoS & Smooth & CD \\ \hline ResUnet [31] & 2D Pixel & 0.579 & 3.79 & 110.6 & 0.76 & 105.88 \\ H-DenseUnet [32] & based & 0.586 & 6.08 & 117.3 & 0.79 & 195.47 \\ \hline Unet3D [33] & & 0.641 & 3.39 & 61.8 & 0.63 & 68.11 \\ nnUnet [34] & 3D Voxel & 0.753 & 1.83 & 12.9 & 0.79 & 34.90 \\ FFNet [3] & based & 0.685 & 3.26 & 100.0 & 0.74 & 59.80 \\ 3D-FFR-Unet [1] & & 0.758 & 0.84 & 161.6 & 0.81 & 7.08 \\ \hline Voxel2Mesh [24] & Geometry & 0.191 & 28.86 & 2.0 & 0.06 & 519.61 \\ **Ours** & based & **0.778** & **0.31** & **2.0** & **0.05** & **2.57** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative Evaluation Results of the Coronary Artery Segmentation for Different Methods on CCA-200 Dataset.

Quantitatively, the evaluation results of the coronary artery segmentation for the different methods on the CCA-200 dataset are presented in Table 1. The complicated and elaborate structures of the coronary artery in our CCA-200 dataset allow for a comprehensive assessment of each method's performance in sophisticated and realistic scenarios. In terms of overlap with the ground truth, our method achieves a Dice of 0.778, surpassing the other methods. In terms of geometrical morphology, the HD of 0.31 and the chamfer distance of 2.57 are smaller than those of the others, indicating a morphology more similar to the realistic coronary artery. In particular, voxel/pixel-based methods inevitably produce fragmentations of the segmented vessels in their predicted results, exhibiting a high NoS. In contrast, the mesh deformation of our geometry-based segmentation network guarantees the continuous integrity of the coronary artery results with an NoS of 2: the left and right coronary arteries are produced completely. Voxel2Mesh also generates only 2 parts of the coronary artery, but it cannot cope with the mesh deformation from one initial sphere into multiple complex branches of the coronary artery, resulting in a particularly low Dice.
In addition, benefiting from the vectorization of our geometry-based segmentation network, the coronary artery results of our model have a smoother and flatter surface, with a Smooth of 0.05, overcoming the limitations of sparsity and low resolution of the CCTA images. To further verify the generalizability and robustness of our model, comparison experiments are carried out on the public ASOCA dataset, where the structures of the coronary artery are simple and clear. We follow the baseline provided by GCB-Net [36]. The results are shown in Table 2, evidencing the feasibility and robustness of our method with a Dice of 0.895, an HD of 0.193, an ASSD of 0.38, a Smooth of 0.054 and an NoS of 2.

Figure 4: Comparison Results of Comprehensive Experiments on our collected CCA-200 dataset and public ASOCA dataset with current mainstream methods. Compared with ASOCA, the coronary artery in our collected CCA-200 has more complicated and elaborate structures, including more multi-forks, and more twisted and narrower branches, which raises a higher demand for the segmentation of the coronary artery.

Moreover, the error map between the generated coronary artery mesh and the ground truth is calculated to further illustrate our algorithm, and the detailed results of our geometry-based segmentation network and the radiologists' annotations in CCTA images are exhibited in Fig. 5. In the CCTA images, red denotes the radiologists' annotations and green represents the coronary artery mesh predicted by our model. As shown in the error map, the overall difference between the predicted mesh and the ground truth is particularly small, from -0.5 mm to 1.5 mm. The morphology of the coronary artery mesh generated by our geometry-based segmentation network is closely similar to the radiologists' annotations. From the details of the coronary artery shown in the CCTA images, the generated coronary artery mesh has a naturally continuous transition at the multi-forks. Besides, the branches of the coronary artery mesh have a smooth, rounded and tubular structure, particularly at the ends with only a few discrete voxels, such as the regions marked in Fig. 5.

## 5 Conclusion

In this paper, aiming at the complicated structures of the coronary artery with tiny and narrow branches, we propose a novel geometry-based segmentation network. With the assistance of the regularized mesh annotation, our model is competent at generating complete, smooth and elaborate results of the coronary artery, without fragmentations of vessels. Extensive experiments on our collected dataset CCA-200 and on ASOCA verify our model, showing excellent quantitative results.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline & Dice & HD & ASSD & Smooth & NoS \\ \hline ResUnet [31] & 0.780 & 0.723 & 1.13 & 0.605 & 49.2 \\ H-DenseUnet [32] & 0.853 & 0.388 & 0.73 & 0.785 & 36.7 \\ Unet3D [33] & 0.846 & 0.395 & 0.72 & 0.633 & 36.5 \\ nnUnet [34] & 0.859 & 0.614 & 1.06 & 0.764 & 16.9 \\ FFNet [3] & 0.775 & 2.529 & 3.14 & 0.785 & 22.3 \\ 3D-FFR-Unet [1] & 0.859 & 0.262 & 0.53 & 0.785 & 39.7 \\ PSP-Net* [35] & 0.841 & - & 0.59 & - & - \\ GCB-Net* [36] & 0.899 & - & 0.34 & - & - \\ HMSA* [37] & 0.862 & - & 0.56 & - & - \\ DVS* [38] & 0.873 & - & 0.58 & - & - \\ DDT* [39] & 0.882 & - & 0.57 & - & - \\ \hline **Ours** & **0.895** & **0.193** & **0.38** & **0.054** & **2** \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative Evaluation Results of the Coronary Artery Segmentation for Different Methods on ASOCA Dataset.
* denotes that the results are quoted without the source code and more detailed metrics.

Figure 5: **Left.** Error maps between our results and the ground truth. **Right.** Segmentation details of our method. Red denotes the annotation labelled by radiologists, and green represents our coronary artery segmented mesh results.
2303.08964
CS-TGN: Community Search via Temporal Graph Neural Networks
Searching for local communities is an important research challenge that allows for personalized community discovery and supports advanced data analysis in various complex networks, such as the World Wide Web, social networks, and brain networks. The evolution of these networks over time has motivated several recent studies to identify local communities in temporal networks. Given any query nodes, Community Search aims to find a densely connected subgraph containing query nodes. However, existing community search approaches in temporal networks have two main limitations: (1) they adopt pre-defined subgraph patterns to model communities, which cannot find communities that do not conform to these patterns in real-world networks, and (2) they only use the aggregation of disjoint structural information to measure quality, missing the dynamic of connections and temporal properties. In this paper, we propose a query-driven Temporal Graph Convolutional Network (CS-TGN) that can capture flexible community structures by learning from the ground-truth communities in a data-driven manner. CS-TGN first combines the local query-dependent structure and the global graph embedding in each snapshot of the network and then uses a GRU cell with contextual attention to learn the dynamics of interactions and update node embeddings over time. We demonstrate how this model can be used for interactive community search in an online setting, allowing users to evaluate the found communities and provide feedback. Experiments on real-world temporal graphs with ground-truth communities validate the superior quality of the solutions obtained and the efficiency of our model in both temporal and interactive static settings.
Farnoosh Hashemi, Ali Behrouz, Milad Rezaei Hajidehi
2023-03-15T22:23:32Z
http://arxiv.org/abs/2303.08964v1
# CS-TGN: Community Search via Temporal Graph Neural Networks ###### Abstract. Searching for local communities is an important research challenge that allows for personalized community discovery and supports advanced data analysis in various complex networks, such as the World Wide Web, social networks, and brain networks. The evolution of these networks over time has motivated several recent studies to identify local communities in temporal networks. Given any query nodes, Community Search aims to find a densely connected subgraph containing query nodes. However, existing community search approaches in temporal networks have two main limitations: (1) they adopt pre-defined subgraph patterns to model communities, which cannot find communities that do not conform to these patterns in real-world networks, and (2) they only use the aggregation of disjoint structural information to measure quality, missing the dynamic of connections and temporal properties. In this paper, we propose a query-driven Temporal Graph Convolutional Network (CS-TGN) that can capture flexible community structures by learning from the ground-truth communities in a data-driven manner. CS-TGN first combines the local query-dependent structure and the global graph embedding in each snapshot of the network and then uses a GRU cell with contextual attention to learn the dynamics of interactions and update node embeddings over time. We demonstrate how this model can be used for interactive community search in an online setting, allowing users to evaluate the found communities and provide feedback. Experiments on real-world temporal graphs with ground-truth communities validate the superior quality of the solutions obtained and the efficiency of our model in both temporal and interactive static settings. 
community search, temporal networks, graph neural networks + Footnote †: _The ACM Web Conference 2023 (WWW '23 Companion), April 30-May 4, 2023, Austin, TX, USA_
candidate set of nodes based on user input and lacks the ability to learn the dynamics of user queries to adjust the results accordingly. To mitigate the aforementioned limitations, we design a query-driven temporal graph convolutional neural network, called CS-TGN. To address structure inflexibility, we design two encoders: **(1)**_Query Encoder_ to encode the structural information from query nodes and capture the local topology around the queries in each snapshot of the network, and **(2)**_Snapshot Encoder_ to learn the query-independent node embeddings by combining the global structure and attributes of vertices in each snapshot of the network.
To take advantage of node attributes, structural properties, and their dynamics in different snapshots of the network, CS-TGN employs Gated Recurrent Unit (GRU) cells (Gat et al., 2017) and extends the idea of hierarchical node states (Song et al., 2019) by using a contextual attention mechanism that can combine the hidden states for long-term patterns and the window information containing the short-term patterns. Next, we showcase the applicability of CS-TGN in interactive community search and propose a meta-learning framework. Within this framework, each query of the user is treated as an input of the _Query Encoder_ in a snapshot of the underlying network. We then consider the identification of communities based on user queries as distinct tasks. This design enables the model to **(1)** learn the dynamics of the network and better adapt to different user queries, and **(2)** learn the dynamics of user queries, resulting in better performance. To summarize, we make the following contributions:

* We present CS-TGN, a query-driven temporal graph convolutional neural network that integrates the local query-dependent structure and global node embeddings at each timestamp. CS-TGN represents the local query-dependent and global node embeddings as hierarchical node states at different GCN layers and uses attention-based GRU cells to capture the network dynamics and update query-dependent and global embeddings over time.
* We demonstrate how CS-TGN can be utilized for interactive community search. Specifically, we approach interactive community search as a meta-learning problem over temporal networks, treating the network in each user-model interaction as one snapshot in the CS-TGN framework. We then consider identifying communities based on queries from different users as distinct tasks.
* By conducting extensive experiments on real-world temporal and static networks with ground-truth communities, we demonstrate that our method is capable of efficiently and effectively discovering communities in both online and interactive settings.

## 2. Related Work

**Community Search.** The concept of community search, which aims to find query-dependent communities in a graph, was first introduced by Sozio and Gionis (Sozio and Gionis, 2018). Since then, various community models have been proposed based on different pre-defined dense subgraphs (Song et al., 2019), including \(k\)-core (Song et al., 2019; Song et al., 2019), \(k\)-truss (Song et al., 2019; Song et al., 2019), quasi-clique (Song et al., 2019), \(k\)-plex (Song et al., 2019), and densest subgraph (Song et al., 2019). Recently, community search has also been explored in directed (Song et al., 2019; Song et al., 2019), weighted (Song et al., 2019), geo-social (Song et al., 2019; Song et al., 2019), multilayer (Song et al., 2019; Song et al., 2019; Song et al., 2019), multi-valued (Song et al., 2019), and labeled (Song et al., 2019) graphs. Inspired by the success of Graph Neural Networks (GNNs), several GNN-based approaches have recently been proposed for community search, such as (Song et al., 2019; Song et al., 2019; Song et al., 2019). However, these models only focus on static networks and do not capture the dynamics of interactions and temporal properties.

**Community Search in Dynamic Networks.** CS in temporal networks has recently attracted attention and several community models have been proposed (Song et al., 2019; Song et al., 2019; Song et al., 2019; Song et al., 2019; Song et al., 2019). Li et al.
(Li et al., 2019) have defined the persistent community search as the maximal \(k\)-core, where each vertex's cumulative degree satisfies the \(k\)-core requirement within a specified time interval. Qin et al. (Qi et al., 2019) have proposed stable communities by first selecting centroid vertices with a certain number of neighbors having the desired similarity and a star-shaped structure of the centroid vertex and its neighbors existing frequently over a period of time, followed by clustering network vertices into stable groups based on the selected centroids. Similarly, Lin et al. (Lin et al., 2019) have defined frequency-based dense subgraphs that satisfy the quasi-clique structure with at least \(\theta\) vertices, where each vertex has a degree greater than a given threshold. Finally, Tang et al. (Tang et al., 2019) have introduced the Reliable CS problem based on the \(k\)-core structure and the duration of interactions. These methods differ from our approach, as they are based on pre-defined patterns.

**Temporal Graph Learning.** Many approaches have been proposed in the literature to address the problem of learning from temporal networks (Song et al., 2019; Song et al., 2019; Song et al., 2019; Song et al., 2019; Song et al., 2019; Song et al., 2019). One group of methods uses Recurrent Neural Networks (RNNs) and replaces the linear layer with a graph convolution layer (Song et al., 2019; Song et al., 2019). Another group deploys a GNN as a feature encoder and a sequence model on top of the GNN to capture temporal properties (Song et al., 2019; Song et al., 2019; Song et al., 2019). However, these models have limitations in both their design and training strategies (Song et al., 2019). To overcome these limitations, recent frameworks such as ROLAND and its variants (Song et al., 2019; Song et al., 2019) have been proposed to re-purpose static GNNs for dynamic graphs. However, these approaches only incorporate node embeddings from the previous snapshot in the GRU cell, failing to capture short-term patterns. In contrast, our proposed framework extends ROLAND by incorporating both long-term and short-term patterns using a contextual attention mechanism. Notably, our framework is designed for the CS problem and can capture query-dependent structural properties, which is not addressed by previous approaches.

## 3. Problem Formulation

We first precisely define temporal networks, and then we formalize the problem of community search in temporal graphs. Let \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{X})=\{\mathcal{G}^{t}\}_{t=1}^{T}\) denote a temporal network, where \(\mathcal{G}^{t}=(\mathcal{V}^{t},\mathcal{E}^{t},\mathcal{X}^{t})\) represents the \(t\)-th snapshot of the network, \(\mathcal{V}=\bigcup_{t=1}^{T}\mathcal{V}^{t}\) is the set of nodes, \(\mathcal{E}=\bigcup_{t=1}^{T}\mathcal{E}^{t}\) is the set of edges, and \(\mathcal{X}=\mathcal{X}^{T}\in\mathbb{R}^{|\mathcal{V}|\times f}\) is a matrix that encodes node attribute information for nodes in \(\mathcal{V}\). Given the attribute matrix \(\mathcal{X}\), \(\mathcal{X}^{t}_{v}\) represents the attribute set of vertex \(v\in\mathcal{V}^{t}\) at timestamp \(t\). We denote the set of vertices in the neighborhood of \(u\in\mathcal{V}\) in the \(t\)-th snapshot as \(\mathcal{N}^{t}(u)\). Each edge \(e=(u,v)\in\mathcal{E}\) is associated with a timestamp \(\tau_{e}\), and each node \(v\in\mathcal{V}\) is associated with a timestamp \(\tau_{v}\).
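To make these definitions concrete, the following minimal sketch (assuming PyTorch and torch-geometric, which the paper's implementation uses) shows one way to materialize a temporal network as a list of snapshots together with a one-hot query vector; the variable names and sizes are illustrative, not taken from the paper's code.

```python
import torch
from torch_geometric.data import Data

# A temporal network G = {G^1, ..., G^T} stored as a list of snapshots.
# Node attributes X are shared across snapshots (X = X^T in the paper).
num_nodes, feat_dim = 1005, 32          # sizes loosely based on the email network
X = torch.randn(num_nodes, feat_dim)    # node attribute matrix

snapshots = []
for t in range(6):                      # T = 6 snapshots
    # edge_index is a [2, |E^t|] tensor of (source, target) pairs
    edge_index = torch.randint(0, num_nodes, (2, 4000))
    snapshots.append(Data(x=X, edge_index=edge_index))

# A query vertex set V_q is encoded as a one-hot vector c_q over V
query_nodes = torch.tensor([3, 17, 42])
c_q = torch.zeros(num_nodes)
c_q[query_nodes] = 1.0
```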
Problem 1 (Community Search in Temporal Networks). _Given a temporal network \(\mathcal{G}=\left\{\mathcal{G}^{1},\dots,\mathcal{G}^{t}\right\}=(\mathcal{V},\mathcal{E},\mathcal{X})\), and a vertex query set \(\mathcal{V}_{q}\subseteq\mathcal{V}\), the problem of Community Search in Temporal Networks (CST) is to find the query-dependent community \(\mathcal{C}_{q}\subseteq\mathcal{V}\) that is connected and has a cohesive structure._

In this paper, we formulate the problem as a binary classification task. Given a set of query vertices \(\mathcal{V}_{q}\subseteq\mathcal{V}\), we classify the nodes in \(\mathcal{V}\) into: **1**: being a part of the community \(\mathcal{C}_{q}\), **0**: not being a part of it. We use a one-hot vector \(\mathbf{c}_{q}^{\text{out}}\in\{0,1\}^{|\mathcal{V}|}\) to represent the output community \(\mathcal{C}_{q}\) produced by model \(\mathcal{M}\). Accordingly, if \(\mathbf{c}_{q_{v}}^{\text{out}}=1\), vertex \(v\) is a part of the community \(\mathcal{C}_{q}\) predicted by \(\mathcal{M}\).

## 4. CS-TGN Framework

This section presents our proposed framework for Community Search (CS) in temporal networks. The framework consists of two main stages: offline training and online query, as illustrated in Figure 1. In the offline training stage, we train a model denoted as \(\mathcal{M}\) to predict the membership of each vertex to the corresponding community of query vertices. Specifically, \(\mathcal{M}\) learns to capture the flexible community structures and the dynamic temporal properties of the network from ground-truth communities. In the online query stage, given a query vertex set \(\mathcal{V}_{q}\), we utilize the trained model \(\mathcal{M}\) to identify the corresponding community of \(\mathcal{V}_{q}\).

### Architecture

In our framework, given a timestamp \(t\), we begin by utilizing GCN layers to encode both the graph structure and the features of the nodes present in the recent snapshot \(\mathcal{G}^{t}=(\mathcal{V}^{t},\mathcal{E}^{t},\mathcal{X}^{t})\). We then use GCN layers and the query vector \(\mathbf{c}_{q}^{t}\) as the features of nodes to better capture the local query structure information. This part, called _Query Encoder_, propagates from the query vertices to their surrounding nodes, allowing for query-centered structural propagation. We then use a contextual attention-based model with GRU cells to capture both short-term and long-term patterns of nodes and update the node embeddings over time. Accordingly, we incorporate the historical and temporal properties of the network. Finally, a feedforward neural network (FNN) is employed for classification. The architectures of _Snapshot Encoder_ and _Query Encoder_ are illustrated in Figure 1(a). Next, we explain each part in detail.

**Snapshot Encoder.** Given a timestamp \(t\) and a snapshot of the network \(\mathcal{G}^{t}=(\mathcal{V}^{t},\mathcal{E}^{t},\mathcal{X}^{t})\), _Snapshot Encoder_ captures the structural properties as well as the attributes of vertices at timestamp \(t\).
To this end, it employs the layer-wise forward propagation of GCN with self-feature modeling as:

\[h_{u}^{t\,(\ell+1)}=\text{Dr}\left\{\sigma\left(h_{u}^{t\,(\ell)}\text{W}_{s}^{(\ell+1)}+\sum_{v\in\mathcal{N}^{t}(u)}\frac{h_{v}^{t\,(\ell)}}{\sqrt{p_{u}^{t}\,p_{v}^{t}}}\text{W}^{(\ell+1)}+\text{b}^{(\ell+1)}\right)\right\}, \tag{1}\]

where at timestamp \(t\), \(h_{u}^{t\,(\ell+1)}\in\mathbb{R}^{d^{(\ell+1)}}\) is the learned new feature of node \(u\) in the \((\ell+1)\)-th GCN layer, \(h_{u}^{t\,(\ell)}\in\mathbb{R}^{d^{(\ell)}}\) is the hidden feature of \(u\) in the \(\ell\)-th GCN layer, and \(\text{W}_{s}^{(\ell+1)},\text{W}^{(\ell+1)}\in\mathbb{R}^{d^{(\ell)}\times d^{(\ell+1)}}\) and \(\text{b}^{(\ell+1)}\in\mathbb{R}^{d^{(\ell+1)}}\) are trainable weights. \(\sigma(.)\) is a nonlinearity, e.g., ReLU, and \(\text{Dr}(.)\) is the dropout method (Krizhevsky et al., 2015) used to avoid overfitting. Given a vertex \(u\in\mathcal{V}\), \(p_{u}^{t}\) denotes the degree of node \(u\) plus one at timestamp \(t\), i.e., \(p_{u}^{t}=|\mathcal{N}^{t}(u)|+1\). The input feature of node \(u\) in the first layer, \(h_{u}^{t\,(0)}\in\mathbb{R}^{d}\), is the normalized feature vector \(\mathcal{X}_{u}^{t}\).

**Query Encoder.** At time \(t\), we transform a set of query vertices, denoted as \(\mathcal{V}_{q}^{t}\), into a one-hot vector representation, called \(\mathbf{c}_{q}^{t}\). That is, if a vertex \(u\) is in the query set, the value of \(\mathbf{c}_{q_{u}}^{t}\) is set to \(1\); otherwise it is \(0\). Next, we apply the propagation function defined in Equation 1 to each vertex \(u\in\mathcal{V}^{t}\). We denote the query-dependent hidden features of vertex \(u\) at the \(\ell\)-th layer as \(h_{Q_{u}}^{t\,(\ell)}\), and use trainable query-dependent weights denoted by \(\text{W}_{Q_{s}}^{(\ell+1)}\), \(\text{W}_{Q}^{(\ell+1)}\), and \(\text{b}_{Q}^{(\ell+1)}\). Unlike _Snapshot Encoder_, where the input feature for vertex \(u\) in the first layer is the node feature vector, in _Query Encoder_ we use the one-hot query vector \(\mathbf{c}_{q_{u}}^{t}\) as the input for the first GCN layer. As shown in Figure 1(a), we combine the output of each layer in both _Snapshot Encoder_ and _Query Encoder_ and use it as the input for the next layer of _Query Encoder_. This provides stable and reliable knowledge of the graph's structure and node features that is independent of the query.

Figure 1. The design of CS-TGN framework.
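The following is a minimal PyTorch sketch of the propagation rule in Eq. (1); the class and parameter names are ours, and a dense adjacency matrix is used for readability rather than the sparse message passing a production implementation would use.

```python
import torch
import torch.nn as nn

class SelfFeatureGCNLayer(nn.Module):
    """Dense-adjacency sketch of Eq. (1): a self-feature term plus a
    degree-normalized neighbor aggregation, then nonlinearity and dropout."""
    def __init__(self, d_in, d_out, p_drop=0.5):
        super().__init__()
        self.W_s = nn.Linear(d_in, d_out, bias=False)  # self-feature weights W_s
        self.W = nn.Linear(d_in, d_out)                # neighbor weights W and bias b
        self.drop = nn.Dropout(p_drop)

    def forward(self, h, adj):
        # p_u = |N^t(u)| + 1, used in the symmetric normalization
        p = adj.sum(dim=1) + 1.0
        norm = adj / torch.sqrt(p.unsqueeze(1) * p.unsqueeze(0))
        agg = norm @ h                 # sum_v h_v / sqrt(p_u p_v)
        return self.drop(torch.relu(self.W_s(h) + self.W(agg)))

# The Query Encoder would reuse the same rule with the one-hot query vector
# c_q as its layer-0 input, combining (e.g., summing) the Snapshot Encoder
# output of the corresponding layer before each propagation step (Figure 1(a)).
```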
**Attention-based Update Module.** In order to capture the short-term patterns of nodes, inspired by (Krizhevsky et al., 2015), we employ a contextual attention-based mechanism proposed by (Krizhevsky et al., 2015). Specifically, for a given local window size \(w\), we construct the short state of the window as follows:

\[C_{u}^{t\,(\ell)}=\left[h_{u}^{t-w\,(\ell)},\dots,h_{u}^{t-1\,(\ell)}\right], \tag{3}\]

\[E_{u}^{t\,(\ell)}=\operatorname{softmax}\left(\mathbf{r}^{t\,(\ell)}\tanh\left(\mathbf{Q}^{t\,(\ell)}\left(C_{u}^{t\,(\ell)}\right)^{T}\right)\right), \tag{4}\]

\[\operatorname{short}_{u}^{t\,(\ell)}=\left(E_{u}^{t\,(\ell)}C_{u}^{t\,(\ell)}\right)^{T}, \tag{5}\]

where \(h_{u}^{t^{\prime}\,(\ell)}\) is the node embedding of vertex \(u\) at time \(t^{\prime}\) after the \(\ell\)-th GCN layer, and \(\mathbf{r}^{t\,(\ell)}\) and \(\mathbf{Q}^{t\,(\ell)}\) are trainable weights. Next, we use a GRU cell (Chen et al., 2017) to update the node embeddings:

\[\zeta_{u}^{t\,(\ell)}=\operatorname{GRU}\left(h_{u}^{t\,(\ell)},\operatorname{short}_{u}^{t\,(\ell)}\right). \tag{6}\]

Note that the formulation of the _Update Module_ for _Query Encoder_ is the same as above, where we replace \(h_{u}^{t\,(\ell)}\) with \(h_{Q_{u}}^{t\,(\ell)}\). In this case, we denote the output as \(\zeta_{Q_{u}}^{t\,(\ell)}\). This process is illustrated in Figure 1(b).

**FNN Layer.** Finally, a feedforward neural network is used to classify nodes based on the concatenation of the embeddings obtained in the previous part:

\[\psi_{u}^{t\,(L)}=\operatorname{FNN}\left(\zeta_{u}^{t\,(L)}\parallel\zeta_{Q_{u}}^{t\,(L)}\right), \tag{7}\]

where \(L\) is the number of GCN layers.

**Loss Function.** As discussed, the CS problem is treated as a binary classification task, where the output of model \(\mathcal{M}\) is \(\psi_{q}^{t}\in\mathbb{R}^{|\mathcal{V}^{t}|}\), representing the probability of each vertex \(u\in\mathcal{V}^{t}\) being a member of the community \(\mathcal{C}_{q}^{t}\). Ground-truth labels \(y_{q_{u}}^{t}\) are defined for each vertex \(u\) based on the community structure of the query set \(\mathcal{V}_{q}^{t}\), and Binary Cross Entropy (BCE) is used as the loss function. The query-dependent loss function \(\mathcal{L}_{q}^{t}\) is defined as:

\[\mathcal{L}_{q}^{t}=\frac{1}{|\mathcal{V}^{t}|}\sum_{u\in\mathcal{V}^{t}}-\left(y_{q_{u}}^{t}\log(\psi_{q_{u}}^{t})+(1-y_{q_{u}}^{t})\log(1-\psi_{q_{u}}^{t})\right). \tag{8}\]

By minimizing \(\mathcal{L}_{q}^{t}\), the model learns to detect the community structure of the network by classifying vertices into their respective communities.

Remark 1. _Communities in real-world networks are known to be dynamic, with nodes joining or leaving communities over time. The proposed formulation allows for the adaptation of the model in scenarios where the ground-truth communities evolve over time. Consequently, during the training phase, the model can learn to capture the dynamics of the evolving ground-truth communities._

### Training and Online Query

**Training.** We begin with a given set of query vertex sets denoted as \(\mathcal{Q}_{\text{train}}=\{q_{1},q_{2},\dots,q_{n}\}\) along with their respective ground-truth communities \(\mathcal{C}_{\text{train}}=\{C_{1},\dots,C_{n}\}\). First, we encode all query inputs as one-hot vectors. Then, at each time \(t\), we repeatedly feed a query \(q\) from \(\mathcal{Q}_{\text{train}}\) into the model \(\mathcal{M}\). The output \(\psi_{q}^{t}\) from \(\mathcal{M}\) is used to compute the loss and the gradients of the model parameters. The updated parameters are then used for the next iteration, where \(\mathcal{M}\) receives another query \(q^{\prime}\in\mathcal{Q}_{\text{train}}\) as input. This process continues until all queries in \(\mathcal{Q}_{\text{train}}\) have been processed. The overall loss function is the sum of all query-dependent loss functions:

\[\mathcal{L}^{t}=\sum_{q\in\mathcal{Q}_{\text{train}}}\mathcal{L}_{q}^{t}. \tag{9}\]
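A hedged sketch of this training procedure is given below; `model(snapshot, c_q)` is an assumed interface that returns the membership probabilities \(\psi_{q}^{t}\) (e.g., after a sigmoid), and the optimizer settings mirror the ones reported in Section 5.

```python
import torch
import torch.nn.functional as F

def train_on_queries(model, snapshots, queries, lr=0.01, epochs=100):
    """Training sketch for Eqs. (8)-(9): at each timestamp, every training
    query is fed through the model and a BCE loss is computed against the
    ground-truth community labels; parameters are updated per query."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for snapshot in snapshots:                 # t = 1, ..., T
            for c_q, y_q in queries:               # one-hot query, float labels
                psi_q = model(snapshot, c_q)       # probabilities in [0, 1]
                loss = F.binary_cross_entropy(psi_q, y_q)   # Eq. (8)
                opt.zero_grad()
                loss.backward()
                opt.step()                         # update before the next query
```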
**Online Query.** Next, we describe the process of utilizing the pre-trained model \(\mathcal{M}\) at time \(t\) to identify the community \(\mathcal{C}_{q}^{t}\) of an online query \(\mathcal{V}_{q}^{t}\) without the need for re-training \(\mathcal{M}\). First, we construct a representative one-hot vector for \(\mathcal{V}_{q}^{t}\) and feed it to \(\mathcal{M}\). The output of \(\mathcal{M}\) is denoted as \(\psi_{q}^{t}\in\mathbb{R}^{|\mathcal{V}^{t}|}\), where \(\psi_{q_{u}}^{t}\in[0,1]\) represents the probability of vertex \(u\) belonging to \(\mathcal{C}_{q}^{t}\) at time \(t\). The online query stage is presented in Algorithm 1, where a threshold \(\eta\in[0,1]\) is utilized to identify the vertices in \(\mathcal{C}_{q}^{t}\) as those with \(\psi_{q_{u}}^{t}\geq\eta\). Since communities are known to be connected, we use Breadth-First Search (BFS) traversal starting from the query vertices to ensure connectivity.

**Time Complexity.** In Algorithm 1, the time complexity of GCN inference (line 2) depends on the number of layers and the architecture of the GCNs. The BFS traversal (lines 3-7) takes \(\mathcal{O}(|\mathcal{E}^{t}|)\) time. Accordingly, the time complexity is dominated by GCN inference (line 2).

```
Input: A snapshot of a temporal network \(\mathcal{G}^{t}=(\mathcal{V}^{t},\mathcal{E}^{t},\mathcal{X}^{t})\), a query vertex set \(\mathcal{V}_{q}\), a trained model \(\mathcal{M}\), and a threshold \(\eta\)
Output: A temporal community \(\mathcal{C}_{q}\)
1: \(\mathcal{Q},\mathcal{C}_{q}\leftarrow\mathcal{V}_{q}\);
2: \(\psi_{q}\leftarrow\) feed \(\mathcal{V}_{q}\) into \(\mathcal{M}\);
3: while \(\mathcal{Q}\) is not empty do
4:   pick and remove a vertex \(u\) from \(\mathcal{Q}\);
5:   for \(v\in\mathcal{N}^{t}(u)\) with \(\psi_{q_{v}}\geq\eta\) do
6:     \(\mathcal{Q}\leftarrow\mathcal{Q}\cup\{v\}\);
7:     \(\mathcal{C}_{q}\leftarrow\mathcal{C}_{q}\cup\{v\}\);
8: return \(\mathcal{C}_{q}\)
```
**Algorithm 1** Temporal Community Identification
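The core of Algorithm 1 can be sketched in a few lines of Python; `neighbors` is an assumed adjacency-list representation of the snapshot, and the thresholded BFS mirrors lines 3-7 of the listing above.

```python
from collections import deque

def temporal_community(neighbors, psi_q, query_nodes, eta=0.5):
    """Algorithm 1 sketch: keep vertices with psi_q[v] >= eta, using a BFS
    from the query nodes so the returned community is connected.
    `neighbors[u]` is the adjacency list of u in snapshot G^t."""
    community = set(query_nodes)
    queue = deque(query_nodes)
    while queue:
        u = queue.popleft()
        for v in neighbors[u]:
            if v not in community and psi_q[v] >= eta:
                community.add(v)
                queue.append(v)
    return community
```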
### Interactive Community Search

The majority of community search methods in static and dynamic graphs utilize a progressive approach to identify the community (Han et al., 2017). Nonetheless, to obtain a high-quality outcome that meets the user's expectations, the user may need to adjust their query, choose representative features, and modify the size of the community based on each presented result for a given set of queries. In this section, we propose a meta-learning framework, where each query submitted by the user is considered an input to the _Query Encoder_ in a snapshot of the underlying network. We view the identification of communities based on user queries as separate tasks. In this work, we consider a scenario where each user query is treated as a new snapshot of the underlying network, which can be either temporal or static. We aim to improve the performance of the model and user satisfaction by allowing the user to provide feedback on the output community, label data, and query new nodes. To update the model based on the user's feedback, we propose a meta-learning framework that uses a meta-model \(\mathcal{M}^{(meta)}\) as a good initialization for deriving specialized models for future unseen user queries. To achieve this, we draw inspiration from previous work (Zhu et al., 2019) and adopt the Reptile algorithm (Rapid et al., 2019). Specifically, for each user, we first initialize the model \(\mathcal{M}\) using \(\mathcal{M}^{(meta)}\) and fine-tune it using back-propagation. We then update the meta-model by computing the moving average of the trained model:

\[\mathcal{M}^{(meta)}=(1-\alpha)\mathcal{M}^{(meta)}+\alpha\mathcal{M},\]

where \(\alpha\in[0,1]\) is the smoothing factor. It is worth noting that always fine-tuning the previous model may not be optimal, as the dynamics of each user's query can be different. Therefore, our proposal of finding a meta-model \(\mathcal{M}^{(meta)}\) as a good initialization can lead to better specialized models. The process is presented in Algorithm 2 and is illustrated in Figure 1(c).

```
Input: A (temporal) network \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{X})\), #epochs, a meta-model \(\mathcal{M}^{(meta)}\), a smoothing factor \(\alpha\), and a threshold \(\eta\)
Output: A temporal community \(\mathcal{C}_{q}\) and an updated meta-model \(\mathcal{M}^{(meta)}\)
1: \(\mathcal{M}\leftarrow\mathcal{M}^{(meta)}\);
2: while user query vertex set \(\mathcal{V}_{q}\) is not empty do
3:   \(\mathcal{C}_{q}\leftarrow\) feed \(\mathcal{V}_{q}\) into \(\mathcal{M}\) and find a community; \(\triangleright\) Algorithm 1
4:   \(y_{q}\leftarrow\) user provides feedback on \(\mathcal{C}_{q}\) (labels data);
5:   for epoch = 1, ..., #epochs do
6:     update \(\mathcal{M}\) via back-propagation based on \(\mathcal{V}_{q}\), \(y_{q}\), and \(\mathcal{G}^{\prime}\);
7:   user terminates or inputs a new query vertex set \(\mathcal{V}_{q}\leftarrow\mathcal{V}_{q}^{(new)}\);
8:   \(\mathcal{M}^{(meta)}\leftarrow(1-\alpha)\mathcal{M}^{(meta)}+\alpha\mathcal{M}\);
9: return \(\mathcal{M}^{(meta)}\)
```
**Algorithm 2** Interactive Community Identification

## 5. Experiments

**Datasets.** We evaluate the performance of our proposed model using five real-world networks with ground-truth communities [1; 4; 20; 22; 50], from diverse domains, including social, communication, co-authorship, and brain networks. The statistics of these networks can be found in Table 1. Notably, the ground-truth communities in the _brain_ dataset correspond to functional systems in the human brain, making it a valuable demonstration of the effectiveness of our approach in identifying such systems. We conduct experiments using 100 query-community pairs for the smaller _football_ and _brain_ networks, and 350 query-community pairs for the other networks. We divide the queries into training, validation, and test sets, with a ratio of 40% for training, 30% for validation, and 30% for testing. We use these sets to train the model, perform early stopping and hyperparameter tuning, and evaluate the end-to-end performance of our model, respectively.

**Setup.** Our models are implemented in Python 3.7 with the _PyTorch_ and _torch-geometric_ libraries. The encoders consist of two GCN layers and a GRU unit with \(h\) neurons. We fine-tuned the optimal value of \(h\in\{32,64,128,256\}\). In the ablation study, the effect of varying \(h\) on the quality is evaluated. The reported times for training and inference are based on experiments on a Linux machine with an _NVIDIA A4000_ GPU with 16 GB of memory. We train the model for 100 epochs with a learning rate of 0.01. We used ReLU as the activation function and a dropout rate of 0.5. When dealing with large datasets, we follow the approach outlined in [3; 18; 25] and choose nodes within 2 hops of query nodes as possible subgraphs for each query. The model is then trained on these subgraphs to predict communities. Our code repository (https://github.com/joint-em/CS-TGN) contains more information about the implementation of our framework.

**Baselines.** We compare our CS-TGN with state-of-the-art temporal and static community search (CS) methods. Reliable [44] is a \(k\)-core-based approach in temporal networks that finds a weighted \(k\)-core containing query nodes. Truss [49] is an approach based on the \(k\)-truss structure in temporal networks.
MLGCN [3] is a learning-based CS method in multiplex networks that leverages an attention mechanism to incorporate the information of different relation types. FTCS [4] finds the connected FirmTruss containing query nodes with the minimum diameter. In multiplex methods, each snapshot of the network is treated as one layer or view. Finally, we compare our approach with a state-of-the-art learning-based method in static graphs, QD-GNN [25]. In the interactive setting, we compare our method with ICS-GNN [18], a GNN-based interactive community search model that requires re-training for each query.

**Queries and Evaluation Metrics.** The performance of all algorithms is evaluated using varying numbers of query nodes. We generated the queries randomly with sizes between 1 and 10. The quality of the found communities \(C\) is assessed through their F1-score, which measures the alignment with the ground truth \(\tilde{C}\). The F1-score is defined as \(F1(C,\tilde{C})=\frac{2pre(C,\tilde{C})rec(C,\tilde{C})}{pre(C,\tilde{C})+rec(C,\tilde{C})}\), where \(pre(C,\tilde{C})=\frac{|C\cap\tilde{C}|}{|C|}\) and \(rec(C,\tilde{C})=\frac{|C\cap\tilde{C}|}{|\tilde{C}|}\).

**Quality Evaluation.** The average F1-scores of our CS-TGN and the baselines on datasets with ground-truth communities are presented in Figure 2. Our CS-TGN outperforms all baselines on all networks, with an average improvement of 12.6%. This can be attributed to two factors. Firstly, the flexible nature of community structures is not well-captured by approaches that rely on pre-defined subgraph patterns, such as Reliable, Truss, \(k\)-core, and FTCS, resulting in inaccuracies for communities lacking such patterns. Secondly, while other learning-based methods neglect the temporal and historical properties of datasets, our approach can learn the community structure from the data and employs GRU cells with an attention mechanism to capture both short-term and long-term patterns in the network, thereby utilizing all the structural, temporal, and historical properties of the data.

**Query Efficiency.** The efficiency of query processing for CS-TGN is evaluated on the test set. Figure 3 presents the average query processing time of all methods for test queries. Our approach, CS-TGN, demonstrates a query time that is comparable to that of QD-GNN and surpasses the other baselines in terms of efficiency. Notably, this is achieved by our method while also exhibiting superior performance compared to the baselines in the quality evaluation.

\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Dataset & _football_ & _brain_ & _email_ & _dblp_ & _youtube_ \\ \hline \hline \(|\mathcal{V}|\) & 114 & 190 & 1005 & 472K & 1.1M \\ \(|\mathcal{E}|\) & 613 & 720 & 25.5K & 1M & 2.9M \\ \# Snapshots & 5 & 4 & 6 & 8 & 4 \\ \# Communities & 12 & 15 & 42 & 5000 & 5000 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Network Statistics

**Effect of Temporal Learning.** We conduct the following experiments to test our main hypothesis, that learning community structure over time is beneficial. First, we report the performance of the same architecture but without the GRU unit on the same networks and set of queries to quantify the quality difference and overhead caused by GRU units. Following that, we demonstrate how the number of snapshots affects the quality of results produced by the CS-TGN architecture. Table 2 presents a comparative analysis of the performance of CS-TGN against two of its variants that share the same architecture, albeit without the GRU cells.
The first variant, referred to as the GRU-less version, is no longer temporal and is trained on the last snapshot of the network. The second variant, denoted as SUM-CS-TGN, employs a simple summation method to update node embeddings over time instead of using the GRU cell. In order to ensure a fair comparison, all test queries and answers are based on the last snapshot. Our experimental results demonstrate that CS-TGN outperforms both of its variants on all datasets in terms of F1-score, suggesting that capturing the community structure over time is beneficial for achieving superior performance. To investigate the impact of the number of snapshots on the performance of CS-TGN, we conduct experiments by varying the number of snapshots on different versions of the DBLP dataset. As shown in Figure 4, increasing the number of snapshots leads to a significant improvement in performance. This result demonstrates the effectiveness of the attention-based update mechanism and the architecture of CS-TGN.

**Effects of Hyperparameters.** We evaluate the effect of two hyperparameters on the quality of our proposed model: **(1)** the number of neurons in the hidden layer (the hidden dimension of the GCN and GRU units), and **(2)** the threshold \(\eta\). We demonstrate the effect of these hyperparameters for three datasets (_email_, _youtube_, and _brain_) in Figure 4. Grid search techniques can be used to tune these hyperparameters according to the dataset. Based on these results, the optimal threshold value \(\eta\) is between 0.4 and 0.6, and the optimal hidden dimension is correlated with network size.

**Interactive Community Search on Brain Networks.** Detecting and monitoring functional systems in the human brain is an important and fundamental task in neuroscience (Beng et al., 2017; Wang et al., 2018). A brain network is represented as a graph, where nodes correspond to brain regions and edges depict co-activation between regions. To identify the functional system of each brain region, a community search method can be used. However, health-related domains are sensitive and require expert supervision. In this experiment, we apply two interactive community search methods on the _brain_ dataset while varying the size of the query vertex set. The results are presented in Table 3. Our proposed CS-TGN outperforms the ICS-GNN model at any stage and number of interactions, thereby highlighting its effectiveness for interactive community search.

## 6. Conclusion

In this paper, we propose a novel approach for community search in temporal networks called CS-TGN. Our approach is a query-driven temporal graph convolutional neural network that takes a data-driven approach to capture flexible community structures and incorporates dynamic temporal properties from ground-truth communities. To achieve this, CS-TGN employs two encoders to encode the local query-dependent structure and the global query-independent graph structure. To capture both short-term and long-term patterns and update node embeddings over time, CS-TGN uses a contextual attention mechanism and GRU cells. Additionally, we show how CS-TGN can be used for interactive community search and formulate the problem as a meta-learning approach over temporal networks. We evaluate our model on several real-world datasets with ground-truth communities and demonstrate its superior performance compared to existing state-of-the-art methods in terms of accuracy and efficiency.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Dataset & CS-TGN & CS-TGN w/o GRU & SUM-CS-TGN \\ \hline \hline _brain_ & **0.51** & 0.39 & 0.44 \\ _football_ & **0.93** & 0.84 & 0.89 \\ _email_ & **0.63** & 0.50 & 0.51 \\ _youtube_ & **0.74** & 0.47 & 0.57 \\ _dblp_ & **0.94** & 0.66 & 0.83 \\ \hline \hline \end{tabular}
\end{table}
Table 2. The effect of GRU cells on F1-score

\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline \#interactions & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \hline CS-TGN & **0.33** & **0.35** & **0.39** & **0.41** & **0.38** & **0.47** & **0.55** & **0.56** \\ ICS-GNN & 0.24 & 0.27 & 0.29 & 0.35 & 0.31 & 0.36 & 0.40 & 0.42 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Interactive community search on brain networks

Figure 2. Quality evaluation.
Figure 3. Query Efficiency.
Figure 4. Effects of #Snapshots (DBLP dataset) and Hyperparameters on F1-score.
2304.03896
Spiking Neural Networks for Detecting Satellite-Based Internet-of-Things Signals
With the rapid growth of IoT networks, ubiquitous coverage is becoming increasingly necessary. Low Earth Orbit (LEO) satellite constellations for IoT have been proposed to provide coverage to regions where terrestrial systems cannot. However, LEO constellations for uplink communications are severely limited by the high density of user devices, which causes a high level of co-channel interference. This research presents a novel framework that utilizes spiking neural networks (SNNs) to detect IoT signals in the presence of uplink interference. The key advantage of SNNs is the extremely low power consumption relative to traditional deep learning (DL) networks. The performance of the spiking-based neural network detectors is compared against state-of-the-art DL networks and the conventional matched filter detector. Results indicate that both DL and SNN-based receivers surpass the matched filter detector in interference-heavy scenarios, owing to their capacity to effectively distinguish target signals amidst co-channel interference. Moreover, our work highlights the ultra-low power consumption of SNNs compared to other DL methods for signal detection. The strong detection performance and low power consumption of SNNs make them particularly suitable for onboard signal detection in IoT LEO satellites, especially in high interference conditions.
Kosta Dakic, Bassel Al Homssi, Sumeet Walia, Akram Al-Hourani
2023-04-08T03:13:18Z
http://arxiv.org/abs/2304.03896v1
# Spiking Neural Networks for Detecting Satellite-Based Internet-of-Things Signals

###### Abstract

With the rapid growth of IoT networks, ubiquitous coverage is becoming increasingly necessary. Low Earth Orbit (LEO) satellite constellations for IoT have been proposed to provide coverage to regions where terrestrial systems cannot. However, LEO constellations for uplink communications are severely limited by the high density of user devices, which causes a high level of co-channel interference. This research presents a novel framework that utilizes spiking neural networks (SNNs) to detect IoT signals in the presence of uplink interference. The key advantage of SNNs is the extremely low power consumption relative to traditional deep learning (DL) networks. The performance of the spiking-based neural network detectors is compared against state-of-the-art DL networks and the conventional matched filter detector. Results indicate that both DL and SNN-based receivers surpass the matched filter detector in interference-heavy scenarios, owing to their capacity to effectively distinguish target signals amidst co-channel interference. Moreover, our work highlights the ultra-low power consumption of SNNs compared to other DL methods for signal detection. The strong detection performance and low power consumption of SNNs make them particularly suitable for onboard signal detection in IoT LEO satellites, especially in high interference conditions.

LEO constellation, Internet-of-Things, chirp waveform, deep learning, spiking neural networks, satellite communication, matched filter, signal detection, interference.

## I Introduction

IoT use cases are growing at an unprecedented rate, where terrestrial networks are unable to supply coverage for applications like smart farming and parcel tracking in rural, remote areas, and areas with extreme environments [1]. To accommodate the growing need for global coverage, low earth orbit (LEO) satellite constellations have been of great interest to the research community and industry alike [2, 3, 4], where many large companies such as OneWeb, SpaceX, and Amazon [5, 6, 7] are currently deploying mega satellite constellations. As an alternative method to provide global coverage, geosynchronous orbit (GEO) satellites could also be used. However, LEO satellite links have a lower overall propagation delay and a lower propagation loss relative to GEO links [4]. Moreover, GEO satellites require a higher deployment cost, and a typical GEO link requires large transmit/receive antennas and stronger transmit power, which makes GEO links inadequate for networks constrained by low cost and low energy, such as IoT networks. However, LEO satellite networks suffer from high Doppler due to the satellite motion relative to the ground user, as well as a high level of co-channel interference. The high level of interference is caused by other devices operating in the same time-frequency resources due to the wide coverage area of the satellite. High interference in multiple-access LEO networks is caused by a large number of co-channel transmissions, particularly in shared bands such as the industrial, scientific, and medical (ISM) band [8], which is open to the general public for transmission and does not require paid licensing. The devices in these bands lack standardization and coordination, which exacerbates interference [2]. In these interference-limited scenarios, typical detection (demodulation) methods result in a high symbol error rate (SER).
To mitigate co-channel interference, methods such as frequency reuse, dynamic spectrum allocation, and beam multiplexing are typically used. Nevertheless, the interference levels remain large. To address the high SER brought about by co-channel interference, deep learning (DL) signal detection has been proposed in [9, 10, 11] to efficiently detect signals in interference-limited scenarios. Generic DL methods such as the simple artificial neural network (ANN) and the convolutional neural network (CNN) have shown strong performance in various applications ranging from natural language processing (NLP) [12] to image classification [13]. However, they consume a substantial amount of power depending on the size of the architecture [14]. Due to the large power consumption, generic DL networks are typically designed to perform localized computing tasks, i.e., edge computing, and thus do not suit resource-limited devices. On the other hand, the human brain is able to process complicated dynamic events at only an average of 20 W of power [15]. In contrast, traditional processing such as the largest neural network (NN) model, dubbed the Megatron-Turing natural language generation (MT-NLG) model with 530 billion parameters [16], uses multiple high-end GPUs which consume many orders of magnitude more power than the brain. Furthermore, an NN with 530 billion parameters is much less complex than the human brain, which has around 100 billion neurons and 100 trillion synapses. To emulate the behavior of a single biological neuron, a 5-8 layer NN with hundreds of neurons per layer is required, according to the authors in [17]. Consequently, the third generation of NN models takes inspiration from the spiking behavior in human brains. Accordingly, the spiking neural network (SNN) is set to dramatically reduce power consumption [18] by encoding data with intermittent spiking that only produces an output when the input exceeds a certain threshold. This means that SNNs only consume power when there is an event (i.e., a spike) at the neuron, thus raising the level of efficiency closer to that of biological brains. Due to the exceptional energy efficiency of the SNN relative to conventional DL networks, an SNN-based IoT signal detector would be ideal for a satellite receiver due to its limited power resources. In this paper, we demonstrate the performance of spiking models and their applicability to modern communications systems, specifically for chirp signal detection. We present a comparison of traditional detection, conventional DL models, and spiking models for satellite uplink signal detection of devices that utilize the chirp-based modulation scheme. As a practical implementation, we consider the LoRa modulation scheme, which is a popular chirp-based modulation and a technology of interest in terrestrial IoT networks. Nevertheless, the proposed detector can be adapted to many other signal types. The comparison evaluates how conventional and spiking models can improve the SER in an environment with high levels of co-channel interference. Additionally, the performance assessment relies on the emulation of a LEO satellite constellation with ground users that employ a chirp signal modulation scheme, where emulation of the chirp signal is performed with a MATLAB-scripted emulator [19, 20] developed earlier by our research team.
The results show a considerable improvement in the SER of conventional DL and spiking-based signal detectors compared to traditional non-coherent detection in the uplink of a LEO constellation. The contribution of this work is summarized as follows:

* It demonstrates the applicability of power-efficient spiking-based networks in detecting uplink IoT signals in a LEO satellite scenario with high co-channel interference levels.
* It assesses the power efficiency performance of spiking networks based on neuromorphic hardware relative to conventional DL models running on standard hardware.
* It develops a synchronization scheme that is novel in the IoT-over-space context.
* It tests the idea from [9], which proposes a hybrid network that switches between the conventional matched filter and DL signal detection based on the inferred interference level, to further increase the energy efficiency of the spiking-based receivers by hybridizing the network with conventional detection techniques.

The rest of this paper is organized as follows. In Section II we give the related works. In Section III we describe the SNN. In Section IV we detail the system model for emulating a LEO constellation for uplink transmission. We describe the geometric model, which covers the satellite constellation, satellite beam footprint, and the distribution of the user devices. Furthermore, we discuss the wireless access model and the channel. Section V describes the spiking-based detection networks used in this paper. In Section VI we cover the physical layer of chirp-based modulation (LoRa). We introduce our proposed synchronization method in Section VII. We show and discuss the results in Section VIII. Finally, the paper is concluded in Section IX.

## II Related Works

The uplink performance of LEO IoT networks has been analyzed using stochastic geometry in [21] and simulation in [22]. In these research works, co-channel interference is concluded to heavily limit performance. To combat the performance degradation arising from co-channel interference, many interference mitigation techniques have been studied in past research works. One method is to use coordinated multiple-access techniques, i.e., scheduling similar to cellular networks [23]. However, coordinated multiple-access techniques are difficult to implement in LEO satellite constellations given the high Doppler shift as well as the higher propagation delay compared to conventional terrestrial networks. Other interference mitigation approaches such as Cognitive Radio (CR) [24] have been utilized; however, CR is not ideal for low-power user devices due to the extra power needed for spectrum sensing. One prominent technology that is positioned for IoT-over-satellite is chirp-based modulation. The performance of LoRa (as the implementation of chirp-based modulation) under interference conditions has been investigated in multiple studies, such as in [25, 26, 27, 21]. While traditional detection of LoRa shows some robustness to interference, the network cannot be directly scaled to practical satellite scenarios given typical non-coherent detector receivers. An uplink LEO satellite communication scenario is an example of a high-interference environment, where the satellite receiver has to deal with a high volume of radio frames due to its inherently large swath of coverage.
An IoT-over-satellite scenario concentrating on LoRa modulation is explored in [28], where it is concluded that high volumes of LoRa traffic significantly deteriorate the system's performance. In the study referenced as [29], the authors propose a folded chirp shift keying (FCSK) system that, like LoRa, employs chirp-based modulation for transmitting low-bit-rate data. Although their approach demonstrates enhanced resilience to Doppler effects, FCSK's detection error rate is inferior to that of LoRa. This is primarily attributed to FCSK's variable chirp rate, which renders it more vulnerable to interference from other chirp rates. Nonetheless, receivers need to be significantly redesigned such that they can deal with non-Gaussian impairments. One promising direction is the use of DL tools, as shown in [9], which leverages DL's robustness against non-Gaussian impairments to enhance signal receivers in an interference-limited regime. DL has been shown to be a great tool for dealing with stochastic impairments for signal detection in wireless communication [9, 10, 11]. DL has also been utilized to achieve a performance gain by optimizing an interference mitigation algorithm [30]. However, DL as a tool for interference mitigation has not been extensively showcased in a satellite-based Internet of Things (S-IoT) scenario. For a satellite receiver, a DL receiver has been shown in [31]; however, that work focuses on high-throughput transmissions and the non-linear impairments produced by the hardware. While conventional DL works well at detecting signals with non-Gaussian noise, the high power requirements of complex DL models are not ideal for use in systems with limited power availability, such as onboard processors in LEO satellites. A spiking-based system would reduce the energy requirements while still providing high-performance characteristics for signal detection [18].

## III Spiking Neural Networks

An advantage that SNNs have over generic NNs is their addition of a spatiotemporal dimension in the spike trains, instead of the single-element vectors used in NNs to carry the weights and biases. The spike trains allow the SNN to carry a large amount of information while using fewer neurons relative to generic NNs [32]. As a result, an SNN would perform particularly strongly at processing real-time, continuous, and temporally rich data streams, such as wireless signals [33]. Also, the SNN is only energized when spikes are generated/received, unlike typical NNs where the entire network needs to be continuously run for every new input. These advantages have been used in several previous research works, such as in [34], where an unsupervised SNN was used in automatic modulation classification (AMC). Another related example can be found in [35], where the authors efficiently perform AMC on RF domain data using an SNN and improve memory utilization by more than three orders of magnitude. Research work on using SNNs for signal detection is proposed in [36] to improve human body communication by harnessing the aforementioned low power consumption and superior performance of the SNN on spatiotemporal data. Another research work for detecting signals with SNNs is demonstrated in [37], where an SNN is used for detecting radar signals. Notwithstanding, to the best of our knowledge, there has not been any research work showing the prospect of SNNs for uplink signal detection in IoT-over-satellite communications.
The prime benefit of an SNN-based receiver would be its ability to deal with non-Gaussian interference while simultaneously exhibiting low power consumption. The spiking neuron is heavily inspired by biologically plausible neuron models such as the Hodgkin-Huxley model [38]. However, while the Hodgkin-Huxley model is accurate, it is difficult to compute. Consequently, leaky integrate-and-fire (LIF) neuron models can be used instead. The LIF neuron model takes the sum of weighted input pulses, which in turn are integrated over time with a certain exponential decay, i.e., _leakage_. An illustration is shown in Fig. 1(a): a membrane potential is accumulated from the input current \(I_{\rm in}\), and if the integrated value exceeds a chosen threshold \(V_{\rm th}\), the LIF neuron fires a voltage spike \(\kappa(t)\). The relationship between the output \(\kappa\) and the voltage \(V(t)\) can be represented as follows,

\[\kappa(t)=u\left(V(t)-V_{\rm th}\right), \tag{1}\]

where \(u(.)\) is the Heaviside step function. When a spike is triggered, the membrane potential should be reset. Next, the illustration in Fig. 1(b) shows the equivalent Lapicque RC circuit, which is composed of the membrane capacitance \(C\) and the membrane resistance \(R\). The circuit can be mathematically modeled as follows,

\[RC\frac{dV(t)}{dt}=-V(t)+I_{\rm in}(t)R, \tag{2}\]

where the solution to the differential equation is [39],

\[V(t)=\frac{1}{C}\int_{0}^{\infty}\exp\left(-\frac{q}{RC}\right)I_{\rm in}(t-q)\,\mathrm{d}q. \tag{3}\]

Finally, Fig. 1(c) shows how each connection to the neuron body accumulates outputs, and then \(\kappa(t)\) generates a voltage spike. Ultimately, the temporal information of the spiking data carries the learned attributes from the input to the output of the spiking neural network and evolves over time as the network learns.

Fig. 1: An illustration of the leaky integrate-and-fire neuron model.
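A minimal discrete-time sketch of this LIF dynamic is given below, using simple Euler integration of Eq. (2) with a hard reset after each spike; the values of \(R\), \(C\), \(V_{\rm th}\), and the time step are illustrative and not taken from the paper.

```python
import numpy as np

def lif_response(I_in, dt=1e-3, R=5.0, C=1e-2, V_th=1.0):
    """LIF neuron sketch of Eqs. (1)-(3): Euler integration of
    RC dV/dt = -V + I*R, emitting a spike and resetting the membrane
    potential whenever V crosses V_th (Heaviside threshold, Eq. (1))."""
    V, spikes, trace = 0.0, [], []
    tau = R * C                            # membrane time constant
    for I in I_in:
        V += (dt / tau) * (-V + I * R)     # leaky integration step
        if V >= V_th:
            spikes.append(1)               # fire a voltage spike kappa(t)
            V = 0.0                        # reset after firing
        else:
            spikes.append(0)
        trace.append(V)
    return np.array(spikes), np.array(trace)

# Example: a constant input current produces a regular spike train
spikes, V_trace = lif_response(np.full(500, 0.3))
```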
The Walker-Delta constellation is described by the parameters \(h,i,N,P,\) and \(F\), where \(h\) is the altitude of the satellite, \(i\) is the satellite orbital inclination angle, \(N\) is the total number of satellites, \(P\) is the number of orbital planes, and \(F\) is the phasing parameter used to describe the phase difference between satellites in consecutive orbital planes. Each orbital plane has an equal number of satellites \(N/P\), where the satellites are uniformly spaced. A snapshot of an example Walker-Delta constellation in the simulation is shown in Fig. 2. For the user to be able to communicate with the satellite, it needs to lie within the satellite footprint. For a simplified conical antenna pattern, the footprint is governed by the satellite effective beamwidth [40], denoted as \(\psi\), which dictates the footprint projection on the Earth's surface. The footprint projection is assumed to be an ideal spherical cap bounded by an earth-centered zenith angle, denoted as \(\varphi\) (as indicated in Fig. 3). Using simple geometric reasoning, the area of the spherical cap of the beam is calculated as follows, \[A_{\mathrm{fp}}=2\pi R_{\mathrm{e}}^{2}\left(1-\cos\varphi\right), \tag{4}\] and the earth-centered zenith angle is calculated using the law of sines as follows [41], \[\varphi=\mathrm{asin}\left(\frac{1}{\alpha}\sin\frac{\psi}{2}\right)-\frac{\psi}{2}, \tag{5}\] where \(\alpha=R_{\mathrm{e}}/R\), \(R_{\mathrm{e}}\) is Earth's average radius, \(R=R_{\mathrm{e}}+h\), and \(h\) is the satellite altitude above the Earth's mean sea level. The satellite footprint size is restricted by the horizon. Thus, the maximum effective beamwidth is \[\psi_{\mathrm{max}}=2\,\mathrm{asin}\,\alpha. \tag{6}\] Moreover, the earth-centered zenith angle when the beamwidth is maximum can be calculated as follows, \[\varphi_{\mathrm{max}}=\mathrm{acos}\left(\frac{R_{\mathrm{e}}}{R}\right). \tag{7}\] Then, a spherical cap perimeter is drawn around each satellite footprint to define the boundary, where a user device located within the footprint is considered active (connected state). The footprint radius of each satellite is calculated as follows, \[R_{\mathrm{fp}}=R_{\mathrm{e}}\varphi. \tag{8}\] For defining the perimeter of the footprint, the latitude and longitude of the footprint boundary need to be calculated with the heading formulae [42] as follows, \[\phi_{\mathrm{fp}}=\mathrm{asin}\left(\mathrm{sin}\,\phi_{\mathrm{sat}}\cos\varphi+\mathrm{cos}\,\phi_{\mathrm{sat}}\sin\varphi\cos\theta\right), \tag{9}\] and the longitude, \[\rho_{\mathrm{fp}}=\rho_{\mathrm{sat}}+\mathrm{atan2}(\mathrm{sin}\,\theta\,\mathrm{sin}\,\varphi\cos\phi_{\mathrm{sat}},\\ \cos\varphi-\mathrm{sin}\,\phi_{\mathrm{sat}}\,\mathrm{sin}\,\phi_{\mathrm{fp}}), \tag{10}\] where \(\theta\) is an array from 0 to \(2\pi\) with 360 elements, and \(\phi_{\mathrm{sat}}\) and \(\rho_{\mathrm{sat}}\) are the latitude and longitude of the satellite's sub-point. A satellite sub-point refers to the point on the Earth's surface directly below a satellite in orbit. An illustration of the geometry of a LEO satellite is shown in Fig. 3. For this work, all the user devices are assumed to be homogeneously distributed over the surface of the earth. The devices are distributed using a spherically wrapped Poisson Point Process (PPP), where the density of devices is controlled by \(\lambda=D\lambda_{\mathrm{o}}\).
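To make the footprint geometry concrete, the following minimal Python sketch evaluates (4)-(8) for a single satellite; the altitude and beamwidth values are illustrative placeholders, not parameters taken from this work.

```python
import numpy as np

R_E = 6371e3  # Earth's average radius [m]

def footprint_geometry(h, psi):
    """Evaluate Eqs. (4)-(8) for altitude h [m] and effective beamwidth psi [rad]."""
    R = R_E + h
    alpha = R_E / R
    psi = min(psi, 2 * np.arcsin(alpha))                 # horizon limit, Eq. (6)
    phi = np.arcsin(np.sin(psi / 2) / alpha) - psi / 2   # zenith angle, Eq. (5)
    area = 2 * np.pi * R_E**2 * (1 - np.cos(phi))        # cap area, Eq. (4)
    radius = R_E * phi                                   # footprint radius, Eq. (8)
    return phi, area, radius

# Illustrative example: a 550 km satellite with a 60-degree effective beamwidth
phi, area, radius = footprint_geometry(h=550e3, psi=np.deg2rad(60))
```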
\(D\) is the spatial duty cycle of the active devices per second and \(\lambda_{\mathrm{o}}\) is the density of the overall number of devices. Note that when there are a large number of users and satellites, a user could be located within more than a single satellite footprint. Consequently, such a user contributes interference to all of these satellites. An example of randomly distributed users and links from the active users to the satellite is shown in Fig. 4.

### _IoT Access System_

For this research work, we are interested in the performance of signal detection in a satellite scenario where received signals are prone to co-channel interference from uncoordinated signal transmissions. One popular access system, dubbed pure-ALOHA, does not take into consideration any scheduling or whether another device is transmitting. As such, pure-ALOHA tends to allow signal collisions that compromise the receiver's ability to accurately detect the signal. Additionally, pure-ALOHA does not take into account the availability of a satellite; thus packets can be lost.

Fig. 3: LEO satellite scenario showing the concept of Earth-centered zenith angle \(\varphi\) and the satellite beamwidth \(\psi\).

However, we only care about signal detection in a satellite scenario, so we put more emphasis on detection rather than satellite availability. Nevertheless, the advantage of pure-ALOHA is its simplicity: it requires low overhead, which allows for less power consumption and therefore a longer battery life for IoT devices. Accordingly, we assume a pure-ALOHA access model for this research work, where each device uses the same channel resources and transmits as soon as data is available. As an alternative, scheduled ALOHA could be used, where each user device would require a synchronized clock to regulate the transmission times in a periodic or event-triggered manner. Another option is slotted ALOHA, where each station can transmit at any time, but transmissions are divided into time slots. This also helps to reduce collisions and improve network efficiency, but to a lesser extent than scheduled ALOHA. However, both scheduled and slotted ALOHA require additional overhead for coordinating the transmissions and consume additional power as a consequence. An illustration of the pure-ALOHA access model is shown in Fig. 5.

### _Channel Model_

The received time-series signal at the satellite receiver is expressed as, \[y(t)=g\{x_{i}(t)\}+\underbrace{\sum_{j\neq i}g\{x_{j}(t)\}}_{\mathrm{interference}}+n(t), \tag{11}\] where \(x_{i}(t)\) is the time-series signal received from the target user denoted by the subscript \(i\), \(x_{j}(t)\) is the time-series signal from an interfering device \(j\), \(g\{.\}\) represents the channel function which accounts for both the fading and Doppler shift, and \(n(t)\sim\mathcal{CN}\left(0,1\right)\) is the complex zero-mean AWGN. The largest portion of the satellite link is under free-space conditions and only a small portion undergoes excess path loss, such as fading due to near-ground clutter. Rain and atmospheric absorption also contribute to the fading, depending on the operating frequency. In addition, LEO satellites orbit Earth at high speeds, causing high Doppler shifts. Furthermore, a random phase shift is considered in this research work, as perfect phase estimation at the receiver is difficult to achieve in an IoT packet for the LEO satellite uplink.
This is because LEO satellites have a relatively fast orbital velocity and a low altitude, which results in a rapidly changing Doppler shift and a high signal attenuation due to atmospheric absorption and scattering. Additionally, co-channel interference can make it difficult for the receiver to estimate the phase of the transmitted signal. Hence, the channel is modeled as follows, \[g\{x(t)\}=\sqrt{\frac{P_{r}}{N_{\mathrm{o}}}}x(t)\exp(j2\pi\nu(t)+j\phi_{\mathrm{s}}), \tag{12}\] where \(N_{\mathrm{o}}\) is the noise power spectral density, \(\nu(t)\) is the Doppler frequency shift, and \(\phi_{\mathrm{s}}\) represents the phase shift of the signal. The received power at the satellite receiver, denoted as \(P_{\mathrm{r}}(\varphi)\), is obtained as follows, \[P_{\mathrm{r}}(\varphi)[\mathrm{dBm}]=\mathsf{EIRP}[\mathrm{dB}]+G[\mathrm{dB}]\\ -\underbrace{l(\varphi)[\mathrm{dB}]-\eta(\varphi)[\mathrm{dB}]}_{\mathrm{Path-loss}}, \tag{13}\] where \(G\) is the satellite antenna gain, \(l(\varphi)\) is the free-space path loss, and \(\eta(\varphi)\) is the excess path loss. The satellite antenna gain \(G\) is obtained by using the ideal antenna gain expression as follows [21], \[G=\frac{\mathrm{spherical\ area}}{\mathrm{spherical\ cap\ area}} \tag{14}\] \[=\frac{4\pi{R_{e}}^{2}}{2\pi{R_{e}}^{2}\left(1-\cos\!\frac{\varphi}{2}\right)}=\frac{2}{1-\cos\!\frac{\varphi}{2}}.\] Since typical IoT devices operate near the L-band [22], the excess path-loss model is adopted from [43]. This model assumes a two-state channel with (i) a line-of-sight (LoS) state and (ii) a non-line-of-sight (nLoS) state. Accordingly, the excess path loss in decibels is modeled as a mixed normal random variable as follows, \[\eta(\varphi)[\mathrm{dB}]\sim p_{\mathrm{LoS}}(\varphi)\mathcal{N}(\mu_{\mathrm{LoS}},\sigma_{\mathrm{LoS}}^{2})\\ +p_{\mathrm{nLoS}}(\varphi)\mathcal{N}(\mu_{\mathrm{nLoS}},\sigma_{\mathrm{nLoS}}^{2}), \tag{15}\] where \(p_{\mathrm{LoS}}(\varphi)=\exp(-\beta\sin\varphi/[\cos\varphi-\alpha])\) [44] is the probability that the link is in the LoS state. Note that \(\{\beta,\mu_{\mathrm{LoS}},\sigma_{\mathrm{LoS}},\mu_{\mathrm{nLoS}},\sigma_{\mathrm{nLoS}}\}\) are propagation parameters that depend on the ground device's environment [43]. On the other hand, the free-space path loss only depends on the operating frequency and the distance between the ground device and the receiving satellite, and is given as follows, \[l(\varphi)=\left[\frac{4\pi d(\varphi)f_{\mathrm{c}}}{c}\right]^{2}, \tag{16}\] where \(c\) is the speed of light and \(f_{\rm c}\) is the carrier frequency.

Fig. 4: A simulation snapshot showing the distribution of user devices and the satellite footprint. The red line from the ground user to the satellite indicates that even though the user is not being served by the satellite with the link, it still contributes to the interference.

Fig. 5: An illustration of the pure-ALOHA access model used for IoT-over-Satellite.
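As a companion to (13)-(16), a minimal Python sketch of this link budget is given below. The propagation parameters in `pars` are illustrative placeholders rather than the calibrated values of [43], the gain follows (14) as printed, and the slant distance uses the cosine rule of (17) from the next subsection.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 3e8       # speed of light [m/s]
R_E = 6371e3  # Earth's average radius [m]

def received_power_dbm(eirp_db, h, phi, pars):
    """Sample the received power of Eq. (13) for a device at
    earth-centered zenith angle phi [rad] and satellite altitude h [m]."""
    R = R_E + h
    alpha = R_E / R
    d = np.sqrt(R_E**2 + R**2 - 2 * R_E * R * np.cos(phi))  # slant distance, Eq. (17)
    l_db = 20 * np.log10(4 * np.pi * d * pars["f_c"] / C)   # free-space loss, Eq. (16)
    g_db = 10 * np.log10(2 / (1 - np.cos(phi / 2)))         # ideal gain, Eq. (14)
    # Two-state mixed-normal excess path loss, Eq. (15)
    p_los = np.exp(-pars["beta"] * np.sin(phi) / (np.cos(phi) - alpha))
    if rng.random() < p_los:
        eta_db = rng.normal(pars["mu_los"], pars["sigma_los"])
    else:
        eta_db = rng.normal(pars["mu_nlos"], pars["sigma_nlos"])
    return eirp_db + g_db - l_db - eta_db                   # Eq. (13)

# Illustrative, uncalibrated parameter values
pars = dict(f_c=868e6, beta=0.6, mu_los=0.0, sigma_los=1.0,
            mu_nlos=20.0, sigma_nlos=5.0)
p_r = received_power_dbm(eirp_db=14.0, h=550e3, phi=np.deg2rad(10), pars=pars)
```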
The distance between the ground device and satellite is a function of the zenith angle and is formulated using the cosine rule as, \[d(\varphi)=\sqrt{R_{\rm e}^{2}+R^{2}-2R_{\rm e}R\cos\varphi}. \tag{17}\] The Doppler shift \(\nu(t)\) in (12) due to the motion of the satellite is calculated based on the change of the satellite's distance over time relative to the ground user as follows, \[\nu(t)=-\frac{f_{\rm c}}{c}\frac{{\rm d}}{{\rm d}t}\ d(t), \tag{18}\] where \({\rm d}/{\rm d}t\) is the differentiation with respect to time, and \({\rm d}/{\rm d}t\ d(t)\) is the velocity of the receiver satellite relative to the ground device. The maximum Doppler shift \(\nu_{\rm max}\) can be analytically modeled as the derivative of the satellite's earth-centered zenith angle with respect to time multiplied by the derivative of the slant distance between the ground user and the satellite with respect to the earth-centered zenith angle1, Footnote 1: Note that this model does not take into account the Earth's movement and assumes that the Earth is a perfect sphere. \[\begin{split}\nu_{\rm max}&=\pm\frac{f_{\rm c}}{c}\frac{{\rm d}\varphi}{{\rm d}t}\times\frac{{\rm d}d(\varphi)}{{\rm d}\varphi}\\ &=\pm\frac{f_{\rm c}}{c}\omega\frac{R_{\rm e}R\sin\varphi_{\rm max}}{\sqrt{R_{\rm e}^{2}+R^{2}-2R_{\rm e}R\cos\varphi_{\rm max}}},\end{split} \tag{19}\] where \(f_{\rm c}\) is the center frequency, \(c\) is the speed of light, and \(\omega\) is the angular velocity of the satellite. Additionally, the received signal is assumed to be perfectly synchronized in time.

## V Spiking-Based Detection Networks

The main contribution of this paper is in presenting novel spiking-based receivers in a practical S-IoT scenario. In the following section, we discuss the spiking-based neural network receivers used in this paper, namely: (i) the SNN, (ii) the CSNN, and (iii) the HybNet, adapted from [9] to use spiking-based networks instead of conventional DL networks.

### _SNN Detector_

To implement the SNN, we utilize an open-source Python library (snnTorch [45]) which encodes the input data into a spike train to accommodate spiking-based networks. The encoding is done in our work via constant current injection, where each data sample is treated as a constant current input over each time step, with 50 time steps used. This encoding treats static data as a direct current (DC) input and consistently feeds the same features to the input layer of the SNN at each time interval. Other popular examples of encoding the input data include rate encoding, latency encoding, and delta modulation; these methods could be utilized to further extract temporal information for SNNs [46]. Decoding the output from spikes to real numbers is done by a process called spike decoding. Examples of decoding methods include rate decoding and latency decoding. A more expansive analysis of different encoding and decoding strategies can be found in [45]. For this work, the mean squared error (MSE) count loss is employed, which in effect is rate decoding. The MSE count loss was used as it demonstrated desirable performance when training the network. The neuron of the correct class is encouraged to increase its spike count, while the incorrect classes are encouraged to reduce their total spike count over time, so that the network achieves high-level performance [47].
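To make the SNN detector concrete, a minimal snnTorch sketch is shown below: the same feature vector is injected as a constant current over 50 time steps through two fully connected layers of leaky integrate-and-fire neurons, and the output spike trains are rate-decoded with the MSE count loss. The layer width, decay constant, and target spike rates are illustrative placeholders, not the dimensions of Table I.

```python
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import functional as SF

T = 50        # time steps for constant current injection
M = 128       # number of classes: 2**SF symbol values for SF = 7

class SNNDetector(nn.Module):
    """Sketch of an SNN receiver: dense layers with LIF neurons."""
    def __init__(self, n_in, n_hidden=256, beta=0.9):
        super().__init__()
        self.fc1, self.lif1 = nn.Linear(n_in, n_hidden), snn.Leaky(beta=beta)
        self.fc2, self.lif2 = nn.Linear(n_hidden, M), snn.Leaky(beta=beta)

    def forward(self, x):
        mem1, mem2 = self.lif1.init_leaky(), self.lif2.init_leaky()
        spikes = []
        for _ in range(T):               # inject the same features each step
            spk1, mem1 = self.lif1(self.fc1(x), mem1)
            spk2, mem2 = self.lif2(self.fc2(spk1), mem2)
            spikes.append(spk2)
        return torch.stack(spikes)       # [T, batch, M] output spike trains

net = SNNDetector(n_in=M)                # inputs: the M dechirped magnitudes
loss_fn = SF.mse_count_loss(correct_rate=0.8, incorrect_rate=0.2)
spk_rec = net(torch.rand(32, M))
loss = loss_fn(spk_rec, torch.randint(0, M, (32,)))
```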
In addition to the SNN, a second network based on the CNN, called the convolutional SNN (CSNN), is also investigated to combine the superior feature extraction and power efficiency of the CNN relative to other DL networks, such as the ANN [48], with the ultra-low power consumption of the SNN. A CSNN performs convolution operations on spikes (or events) generated by individual neurons instead of traditional continuous-valued activations. The convolutional filters in a CSNN are designed to detect specific temporal patterns in the spike sequences. This allows CSNNs to process spatiotemporal data effectively and learn to extract relevant features from input stimuli. A complication is the "dead neuron problem" [45], which significantly deteriorates performance because of the non-differentiability of spikes: it occurs when the membrane potential is 0, so the neuron does not fire and therefore does not contribute to the loss function in the training stage. The non-differentiability can be addressed by smoothing out the Heaviside step function in (1) with the sigmoid function \(\sigma(x)\), which is known as the surrogate gradient approach [49]. Smoothing out the Heaviside function with the sigmoid function overrides the derivative of the Heaviside function, which is the Dirac delta function. The smoothed-out Dirac delta function takes time to fall to 0 and therefore contributes to the training loss function. The use of surrogate gradient descent algorithms in SNNs can thus effectively address the dead neuron problem by allowing gradient-based optimization and weight updates, even if some neurons are not generating spikes. The two spiking-based networks mentioned above (SNN and CSNN) are also compared to a conventional CNN and ANN with the same dimensions as their spiking-based counterparts; the same dimensions are used for a fair comparison between the spiking-based networks and conventional DL networks. The networks used in this work are summarized in Table I.

### _HybNet_

To further improve the error-rate detection performance of the spiking-based receivers in noise-limited scenarios, we extend the idea from [9], which proposes the HybNet architecture. In this work, we adapt the HybNet architecture to the spiking-based networks so that it switches between two different detection branches: (i) a spiking-based network (in our case we select the CSNN due to its strong performance and lower power consumption compared to the ANN, CNN, and SNN, as seen in Section VIII-B), and (ii) a traditional matched filter detector (in our case we utilize non-coherent detection). The advantage of utilizing the HybNet architecture is that in noise-limited scenarios the matched filter path is chosen by the supervisor switching network to detect the signal. Conversely, when the received signals are interference-limited, detection using the spiking-based network is chosen by the supervisor switching network. Note that this supervisor switching network is trained with a CSNN, which we dub the _Selector CSNN_, and its network structure is listed in Table II. In order to train the selector network, each target chirp-based symbol is cropped and labeled as either (i) _Minimal interference_ or (ii) _Interference-limited_. When the target symbol power is higher than the interference signal power (i.e., SIR \(>0\) dB), the symbol is labeled as "Minimal interference".
Alternatively, if the power of the interfering transmissions is larger than the power of the target signal, the symbol is labeled as "Interference-limited". When a signal is passed into the _Selector CSNN_, the symbol is routed to the non-coherent detector (outlined in Section VI-B) if it is classified as "Minimal interference", and to the CSNN if it is classified as "Interference-limited". An illustration of the utilized switching architecture is shown in Fig. 6.

Fig. 6: An illustration of the HybNet architecture used in this paper.

### _Dataset Creation and Training_

To ensure the desired performance of the DL/spiking-based receivers, the networks need to be trained on a dataset that contains emulated signals exhibiting the behavior of the received chirp-based IoT uplink signals in a practical LEO satellite scenario. To create the dataset for training the DL/spiking-based receiver networks for signal detection, chirp I/Q signals are emulated, where the signals in the dataset are generated in our satellite scenario emulator. Each satellite is designated an empty 1D vector with a length determined by the time step \(dt\) of the simulation. Next, the satellite's empty vector is populated with signals from the active ground user devices within its coverage area. Each signal contains an n-length random symbol sequence \(M=\{m_{1},m_{2},...,m_{n}\}\) that has a controlled transmit power \(p_{\rm{Tx}}\) and a random time offset \(\tau\). Every transmitted signal is impaired by the emulated ground-to-satellite channel effects described in Section IV-C. Additionally, one signal is randomly chosen as the target signal and is appended with a preamble for synchronization using the developed method detailed in Section VII. After synchronization, the target signal is normalized by dividing by the target signal power \(p_{\rm{s}}\), which is estimated from the preamble. The normalization is shown mathematically as follows, \[r_{\rm{norm}}(t)=\frac{r(t)}{\sqrt{p_{\rm{s}}}}. \tag{20}\] The reason for normalizing the signal power to \(p_{\rm{s}}=1\) is to maintain a uniform magnitude of the target signal power during the training of the receiver. This is because scaling the input has been proven to improve performance [9, 50]. Furthermore, normalization provides the receiver network with information about which signal is the target signal and which is the interfering signal. Finally, each symbol is cropped out and is represented as follows, \[Y_{\rm{k}}=|y(t)S_{k}^{*}(t)|, \tag{21}\] where \(y(t)\) denotes the received MFSK chirp-based signal and \(S_{k}(t)\) is the MFSK reference signal with frequency shift \(k\Delta_{\rm{f}}\); this is the same expression as in (23) without the argmax function. Each training symbol is then labeled and can be represented as follows, \[T=\{(Y_{1},m_{1}),(Y_{2},m_{2}),...,(Y_{\rm{k}},m_{\rm{k}})\}, \tag{22}\] where \(m\) is the symbol label. An illustration of an example chirp signal is shown in Fig. 7. To train the _Selector CSNN_, the same dataset that was used to train the DL/spiking-based receivers is utilized; however, only the two labels _Minimal interference_ and _Interference-limited_ are used. The labeling procedure is described in Section V-B.

## VI Chirp-Based Signal Emulation

For this work, we incorporate the LoRa modulation technique as an example of chirp-based modulation due to its widespread use for LPWAN applications.
We utilize the LoRa MATLAB emulator [20] developed earlier by our team, which consists of two components: (i) a signal generator, and (ii) a conventional detector based on non-coherent detection. Note that no additional error correction is involved in this work: a chirp-based packet with a random symbol payload is generated, and the payload is detected with the non-coherent receiver for comparison with the developed DL/spiking-based receivers. Error correction would decrease the error rate; as such, in this paper we show signal detection performance without error correction as a performance lower bound.

### _LoRa Modulation_

LoRa uses the chirp spread spectrum (CSS) technique [51] to modulate the transmitted symbols. This is achieved by shifting the start of the chirp according to the transmitted symbol value, and then the frequency is cyclically swept within a given bandwidth \(B\). According to the LoRaWAN specifications, only a few discrete values of the bandwidth are permitted: \(B\in\{125,250,500\}\) kHz. Another parameter that controls the chirp rate of each symbol is called the Spreading Factor (SF), which is also restricted in the LoRaWAN specifications to \(\text{SF}\in\{7,8,9,10,11,12\}\). The chirp rate can be calculated from the SF as \(\frac{d\phi}{dt}=\frac{SF}{T_{\rm chirp}}\), where \(T_{\rm chirp}\) is the chirp duration. In addition, the SF defines the number of possible values that can be encoded by a symbol, which is given by \(\mathrm{M}=2^{\rm SF}\).

### _LoRa Detection_

The detection of a LoRa signal involves a two-step process. In the first step, the signal is dechirped by mixing the LoRa signal with an inverted chirp. The dechirping process converts the LoRa signal into a multiple frequency-shift keying (MFSK) signal, which is a modulation type that encodes symbols with \(M\) equally spaced frequency tones. In the second step, the MFSK signal can be detected using either conventional coherent or non-coherent detection methods. The coherent detection method based on matched filtering is proven to be an optimal detector under AWGN channel conditions [52] and is also shown to achieve better performance compared to the non-coherent detector in the presence of LoRa-on-LoRa interference [27]. However, coherent detection requires perfect synchronization in both time and frequency [52] to perform well [53]. Therefore, coherent detection is not appropriate for a LEO satellite communication scenario due to the high Doppler frequency shift on the transmitted signal. Consequently, we use non-coherent detection in this paper. To describe non-coherent detection, the square-law (envelope) matched filter detector can be used [52]. The signal is mixed with (i.e., mathematically multiplied by) the conjugate of all possible realizations of the chirp-based signal, and the maximum magnitude denotes the symbol estimate \(\hat{m}_{\text{ncoh}}\) as follows, \[\hat{m}_{\text{ncoh}}=\underset{k}{\mathrm{argmax}}\,|y(t)S_{k}^{*}(t)|, \tag{23}\] where \(y(t)\) denotes the received MFSK chirp-based signal and \(S_{k}(t)\) is the MFSK reference signal with frequency shift \(k\Delta_{\rm f}\), represented as \(z_{\rm k}(t)=\exp{(j2\pi k\Delta_{\rm f}t)}\), and \(k\) is an integer representing all symbol possibilities \(k=\{0,1,...,M-1\}\). \(\Delta_{\rm f}\) is the frequency step between the MFSK tones, which for chirp-based symbols is equal to the symbol rate \(B/M\).
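The two detection steps can be sketched in a few lines of Python; this is a one-sample-per-chip illustration (noise, Doppler, and oversampling omitted), not the MATLAB emulator of [20]. After dechirping, correlating against the \(M\) reference tones in (23) reduces to a DFT, so an FFT is used.

```python
import numpy as np

SF_ = 7
M = 2 ** SF_                              # number of symbol values
n = np.arange(M)
base_up = np.exp(1j * np.pi * n**2 / M)   # base upchirp, one sample per chip

def lora_mod(sym):
    """CSS modulation: the symbol value cyclically shifts the upchirp."""
    return np.roll(base_up, -sym)

def noncoherent_detect(y):
    """Eq. (23): dechirp with the conjugate base chirp, then pick the
    strongest MFSK tone via the FFT magnitude."""
    return int(np.argmax(np.abs(np.fft.fft(y * np.conj(base_up)))))

assert noncoherent_detect(lora_mod(42)) == 42   # round-trip check
```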
## VII Frame Synchronization

The preamble in the IoT frame is used to align the local chirp generator with the start of the chirp. The preamble contains no payload data and is just a sequence of symbols appended at the beginning of the frame. Given the large Doppler shifts in LEO orbits, we propose a synchronization method based on time-frequency matched filtering (inspired by range-Doppler matched filtering in radar signal processing [54]) that matches ideal templates of the known preamble, each having a slightly different Doppler shift. Mathematically, the time-domain preamble templates with different Doppler shifts can be expressed as follows, \[K=x_{\rm pre}\exp(j2\pi\kappa(t)), \tag{24}\] where \(x_{\rm pre}\) is the ideal preamble template and \(\kappa(t)\) is the vector of different Doppler shifts. We first perform a coarse search with a Doppler frequency shift spacing of 100 Hz, denoted as \(\eta_{\rm c}\). Once the best match is found, a fine search is performed around the best candidate with a frequency spacing of 1 Hz, denoted as \(\eta_{\rm f}\). We form \(K\) as an \(M\times N\) matrix, where \(N\) is the number of temporal samples in the signal and \(M\) is the resolution of the search space.

Fig. 7: A snapshot of a three-symbol-wide spectrogram of an example chirp-based signal with interference from another chirp-based signal. The interference signal is time-shifted by \(\tau\), has a Doppler shift of \(\Delta_{\rm f}\), and the target signal has a Doppler shift of \(\Delta_{\rm s}\).

The resolution for the coarse search is equal to the length of \(\kappa(t)\), which can be obtained as, \[M_{\rm c}=\kappa_{\rm len}=2\frac{\nu_{\rm max}}{\eta_{\rm c}}, \tag{25}\] where \(\nu_{\rm max}\) is the maximum Doppler shift (calculated with (19)). The resolution of the fine search is taken as, \[M_{\rm f}=2\eta_{\rm c}. \tag{26}\] Then, to create a 2D time-frequency plot of the similarity values, we employ a 2D cross-correlation as follows, \[C(k,l)=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}K(m,n)y^{*}(m-k,n-l), \tag{27}\] where \(y^{*}(.)\) corresponds to the complex conjugate of the received signal vector, and \(y\) is a \(P\times Q\) matrix, where \(P\) is equal to \(M_{\rm c}\) when performing the coarse search and equal to \(M_{\rm f}\) when performing the fine search. \(Q\) is equal to the number of temporal samples in \(x_{\rm pre}\). Matched filtering in this way yields a 2D array of similarity values, where the maximum value is taken to be the best match. Finding the best match after the fine search gives an accurate time and frequency synchronization with the target IoT packet. The performance of the frame synchronization method is shown in Fig. 8, where the synchronization accuracy decreases as the transmit power decreases due to the lower SNR.

## VIII Results and Discussion

In this section, we provide Monte Carlo simulation results of the emulated satellite system, where the simulation parameters are listed in Table III. Chirp-based signals are emulated and passed through the simulated ground-to-satellite channel, where the spatial duty cycle \(D\) dictates the number of transmissions sent each second. The spatial duty cycle \(D\) was arbitrarily chosen as \(1\times 10^{-6}\), as it allows for a balance between the interference-limited and noise-limited scenarios experienced by the satellite receivers.
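For illustration, a skeletal version of the noise-limited branch of such a Monte Carlo run, reusing the `lora_mod` and `noncoherent_detect` helpers sketched in Section VI, might look as follows; the channel impairments, co-channel interference, and synchronization of the full emulator are omitted here.

```python
def ser_monte_carlo(n_trials, snr_db, detect=noncoherent_detect):
    """Estimate the SER under AWGN only: draw a random symbol, add
    complex noise at the given per-sample SNR, detect, count errors."""
    rng = np.random.default_rng(1)
    errors = 0
    for _ in range(n_trials):
        sym = int(rng.integers(M))
        noise = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        y = lora_mod(sym) + noise * 10 ** (-snr_db / 20)
        errors += detect(y) != sym
    return errors / n_trials
```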
Bear in mind that to create accurate signals, an oversampled signal vector is constructed by sampling at \(f_{s}^{\prime}=1\) MHz. Each transmitted chirp-based signal is impaired by the emulated ground-to-satellite channel. After all the signals from the active devices within a satellite footprint over a certain time period are superimposed onto the signal vector, the vector is downsampled to \(f_{\rm s}=250\) kHz, twice the Nyquist sampling frequency, to increase robustness against noise and Doppler. Note that for this work, we emulate LoRa signals with SF \(=7\) and \(B=125\) kHz. A higher SF could be trained on; however, the training dataset would cumulatively double in size for every increment of the SF, since the dataset must support the number of values a symbol can encode, which is equal to \(2^{\rm SF}\). Furthermore, a more complex DL network would also be required. The CNN and ANN are trained using stochastic gradient descent with momentum over \(50\) epochs with a learning rate of \(1e^{-3}\) and a mini-batch size of \(256\). On the other hand, the SNN, CSNN, and Selector CSNN are trained with a batch size of 128, the Adam optimizer, and a learning rate of \(5e^{-4}\) over \(50\) epochs. These hyperparameters were chosen after manual optimization and showed desirable performance of the networks. To demonstrate the effectiveness of the DL and spiking networks for signal detection in a practical scenario, the SER performance is shown in Fig. 9. Synchronization in time and Doppler compensation are performed using the time-frequency matched-filter synchronization method discussed in Section VII. Additionally, the signal is normalized to the average power of the preamble. The SER curve decreases as the transmit power increases, for both the non-coherent and the DL/spiking-based receivers, as the system moves from noise-limited to interference-limited. However, the DL/spiking-based methods show greater performance relative to the non-coherent receiver in the region where the SER plateaus and the system becomes interference-limited.

Fig. 8: A plot of the median frequency synchronization error against different transmit powers. The spatial duty cycle \(D\) was chosen as \(1\times 10^{-6}\).

The SER performance is also recorded as the spatial duty cycle \(D\) increases and is shown in Fig. 10. From this plot it is evident that the performance improvement of DL/spiking-based detection over conventional detection increases as the spatial duty cycle increases. The improvement comes from the ability of DL and spiking-based networks to correctly detect symbols in the presence of co-channel interference. From the SER performance plots, we can see that the CNN has the highest performance compared to the other networks discussed in this work. However, the spiking-based networks still outperform conventional detection and have ultra-low power consumption, which is further analyzed in Section VIII-B.

### _HybNet Performance_

The performance of the _HybNet_ architecture is shown in Fig. 11. From the plot, HybNet switches between pure non-coherent detection in noise-limited scenarios and CSNN-based detection in interference-limited scenarios. The switching then allows for an average decrease in the SER relative to both the CSNN alone and non-coherent detection alone.
However, at the transition between the noise-limited and interference-limited regimes, the Selector CSNN fails to correctly classify which detection pathway is more efficient.

### _Energy Consumption_

To demonstrate the advantages of spiking-based learning over conventional learning in terms of power efficiency, benchmarks for the energy per detection are recorded. The energy is recorded using the Keras-Spiking Python package from Nengo [55]. The following assumptions are made when estimating the energy used by the proposed model on a particular device: the energy consumption approximation is calculated based on the energy used per operation, where the sources for the GPU [56] and for Loihi [57] are used; overhead, such as the energy required to transfer the data to be classified by the network, is not considered; and only the energy consumption of the components in each network is considered. From Table IV, the spiking-based networks use a few orders of magnitude less power per detection compared to their conventional DL network counterparts. These results reinforce the idea that a spiking network could be used for signal detection, particularly in resource-constrained applications such as a LEO satellite.

## IX Conclusion

In this paper, we presented a spiking network-based receiver for detecting IoT signals in a satellite IoT uplink scenario. For the receiver, we investigated the spiking neural network (SNN) and the convolutional SNN (CSNN) and compared their error-rate performance against conventional ANN and CNN receivers, as well as conventional non-coherent detection. The findings reveal that both spiking-based and DL receivers exhibit resilience to co-channel interference. Notably, the spiking-based networks display impressive detection performance in high-interference scenarios, while consuming several orders of magnitude less power per detection than traditional ANN and CNN receivers. To further improve the detection performance of the spiking networks, we adopt the HybNet framework from our previous work to switch between traditional detection methods and the spiking-based receiver.

Fig. 9: A plot of the average SER against different transmit powers, where plots are shown with a spatial duty cycle \(D\) of \(1\times 10^{-6}\). Manual Doppler compensation and synchronization are also performed using the preamble.

Fig. 10: A plot of the average SER against different spatial duty cycles \(D\), where the transmit power is fixed at 0 dBW. Manual Doppler compensation and synchronization are also performed using the preamble.

Fig. 11: A plot of the performance of the HybNet framework, where plots are shown with a spatial duty cycle \(D\) of \(7\times 10^{-7}\) without Doppler.
2307.12207
Dynamics and Synchronization of Weakly Coupled Memristive Reaction-Diffusion Neural Networks
A new mathematical model of memristive neural networks described by the partly diffusive reaction-diffusion equations with weak synaptic coupling is proposed and investigated. Under rather general conditions it is proved that there exists an absorbing set showing the dissipative dynamics of the solution semiflow in the energy space and multiple ultimate bounds. Through uniform estimates and maneuver of integral inequalities and sharp interpolation inequalities on the interneuron differencing equations, it is rigorously proved that exponential synchronization of the neural network solutions at a uniform convergence rate occurs if the coupling strength satisfies a threshold condition expressed by the system parameters. Applications with numerical simulation to the memristive diffusive Hindmarsh-Rose neural networks and FitzHugh-Nagumo neural networks are also shown.
Yuncheng You, Junyi Tu
2023-07-23T02:41:49Z
http://arxiv.org/abs/2307.12207v2
# Dynamics and synchronization of weakly coupled memristive reaction-diffusion neural networks

###### Abstract.

A new mathematical model of memristive neural networks described by the partly diffusive reaction-diffusion equations with weak synaptic coupling is proposed and investigated. Under rather general conditions it is proved that there exists an absorbing set showing the dissipative dynamics of the solution semiflow in the energy space and multiple ultimate bounds. Through uniform estimates and maneuver of integral inequalities and sharp interpolation inequalities on the interneuron differencing equations, it is rigorously proved that exponential synchronization of the neural network solutions at a uniform convergence rate occurs if the coupling strength satisfies a threshold condition expressed by the system parameters. Applications with numerical simulation to the memristive diffusive Hindmarsh-Rose neural networks and FitzHugh-Nagumo neural networks are also shown.

Key words and phrases: Memristive neural network, synchronization, reaction-diffusion equations, dissipative dynamics, nonlinear coupling strength

2010 Mathematics Subject Classification: 35B40, 35G50, 35K57, 37N25, 92C20

## 1. Introduction

In this paper we shall consider a new mathematical model of neural networks described by a system of partly diffusive hybrid differential equations with memristors and weak nonlinear interneuron coupling. Let a network of \(m\) coupled memristive neuron cells, where \(m\geq 2\) is a positive integer, be denoted by \(\mathcal{NW}=\{\mathcal{N}_{i}:i=1,2,\cdots,m\}\); it is described by the following model of partly diffusive and memristive equations with nonlinear couplings. Each neuron \(\mathcal{N}_{i},1\leq i\leq m\), in the network is represented by the hybrid differential equations: \[\begin{split}&\frac{\partial u_{i}}{\partial t}=\eta\Delta u_{i}+f(u_{i},z_{i})-k\tanh(\rho_{i})u_{i}-Pu_{i}\sum_{j=1}^{m}\Gamma(u_{j}),\\ &\frac{\partial z_{i}}{\partial t}=\Lambda z_{i}+h(u_{i},z_{i}),\\ &\frac{\partial\rho_{i}}{\partial t}=au_{i}-b\rho_{i},\end{split} \tag{1.1}\] for \(t>0,\ x\in\Omega\subset\mathbb{R}^{n}\ (n\leq 3)\), where \(\Omega\) is a bounded domain with locally Lipschitz continuous boundary \(\partial\Omega\). Here \(\Delta\) is the Laplacian operator with respect to the spatial variable \(x\in\Omega\) and \(\Lambda\) is an \(\ell\times\ell\) constant square matrix. In the nonlinear synaptic coupling term for neurons, the function \[\Gamma(s)=\frac{1}{1+\exp[-r(s-V)]} \tag{1.2}\] is a sigmoidal function. The biological meaning of the parameters in this type of nonlinear weak coupling can be seen in [9, 16, 24, 28, 37]. The coefficient \(P>0\) is the coupling strength. The constant \(V\in\mathbb{R}\) is a threshold for neuron bursting, and \(r>0\) shapes the sigmoidal function versus the Heaviside function. In this system (1.1), for \(1\leq i\leq m\), the transmembrane electric potential \(u_{i}(t,x)\) and the memductance \(\rho_{i}(t,x)\) of the memristor (which is caused by the electromagnetic induction flux across the neuron membrane) are scalar functions, while \(z_{i}(t,x)\) can be an \(\ell\)-dimensional (\(\ell\geq 1\)) vector function whose components represent various ionic currents in the neuron cell. The memristive-potential coupling \(-k\tanh(\rho_{i})u_{i}\) is a nonlinear term.
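Purely as an illustration of how the model (1.1)-(1.2) can be evaluated numerically (it plays no role in the analysis that follows), the sketch below discretizes the right-hand side of (1.1) on a one-dimensional grid by the method of lines, taking \(\ell=1\) so that \(z_{i}\) and \(\Lambda\) are scalar; the callables `f` and `h` and all parameter values in `p` are placeholders for the general nonlinearities and coefficients.

```python
import numpy as np

def rhs(u, z, rho, dx, p):
    """Right-hand side of (1.1) for m neurons on an N-point 1D grid;
    u, z, rho each have shape (m, N), and p is a parameter dict."""
    # Discrete Laplacian with zero-flux (Neumann) boundary conditions
    upad = np.pad(u, ((0, 0), (1, 1)), mode="edge")
    lap = (upad[:, :-2] - 2 * u + upad[:, 2:]) / dx**2
    Gamma = 1.0 / (1.0 + np.exp(-p["r"] * (u - p["V"])))    # sigmoid (1.2)
    du = (p["eta"] * lap + p["f"](u, z)
          - p["k"] * np.tanh(rho) * u
          - p["P"] * u * Gamma.sum(axis=0))                 # coupling over j
    dz = p["Lam"] * z + p["h"](u, z)                        # ell = 1 case
    drho = p["a"] * u - p["b"] * rho
    return du, dz, drho
```

Any explicit or semi-implicit time stepper can then advance \((u_{i},z_{i},\rho_{i})\); the chosen \(f\) and \(h\) should of course respect the Assumptions (1.5) and (1.6) stated below.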
We impose the homogeneous Neumann boundary condition on the first component function in (1.1): \[\frac{\partial u_{i}}{\partial\nu}(t,x)=0,\quad\text{for}\ \ t>0,\ x\in\partial\Omega,\quad 1\leq i\leq m. \tag{1.3}\] The initial states of the system (1.1) will be denoted by \[u_{i}^{0}(x)=u_{i}(0,x),\ z_{i}^{0}(x)=z_{i}(0,x),\ \rho_{i}^{0}=\rho_{i}(0,x),\ 1\leq i\leq m. \tag{1.4}\] The scalar function \(f\in C^{1}(\mathbb{R}^{1+\ell},\mathbb{R})\) and the vector function \(h\in C^{1}(\mathbb{R}^{1+\ell},\mathbb{R}^{\ell})\) are continuously differentiable and we assume that \[\begin{split}& f(s,\sigma)s\leq-\alpha|s|^{4}+\lambda|s||\sigma|+J,\quad(s,\sigma)\in\mathbb{R}^{1+\ell},\\ &\max\left\{\frac{\partial f}{\partial s}(s,\sigma),\,\left|\frac{\partial f}{\partial\sigma}(s,\sigma)\right|\right\}\leq\beta,\ (s,\sigma)\in\mathbb{R}^{1+\ell},\end{split} \tag{1.5}\] and the \(\ell\)-dimensional matrix \(\Lambda\) and \(h(s,\sigma)\) satisfy \[\begin{split}&\langle\Lambda\sigma,\sigma\rangle\leq-\gamma|\sigma|^{2},\quad\sigma\in\mathbb{R}^{\ell},\\ & h(s,\sigma)\sigma\leq q|s|^{2}|\sigma|+L|\sigma|,\quad(s,\sigma)\in\mathbb{R}^{1+\ell},\\ &\left|\frac{\partial h}{\partial s}(s,\sigma)\right|\leq\xi(|s|+1),\ \ \frac{\partial h}{\partial\sigma}(s,\sigma)=0,\quad(s,\sigma)\in\mathbb{R}^{1+\ell},\end{split} \tag{1.6}\] where the parameters \(\eta,a,b,k\) in (1.1), \(\alpha,\lambda,J,\beta\) in (1.5), and \(\gamma,q,L,\xi\) in (1.6) are all positive constants. Note that Assumptions (1.5) and (1.6) are satisfied by the partly diffusive FitzHugh-Nagumo neural networks [33, 34] and the partly diffusive Hindmarsh-Rose neural networks [45, 46], which will be shown in Section 6. In neurobiology, the ionic currents flowing across a neuron cell's membrane cause changes of the membrane potential over time. The resulting electrical signals propagate through the neuron axon and stimulate the dendrites of neighbor neurons by synapses, which constitute a biological neural network. An excitatory neuron's firing consists of successive spiking followed by a relatively long period of quiescence, called bursting. Chaotic bursting means the number of spikes per burst is irregular. The synchronization mechanisms of various neural networks, revealed by mathematical models and analysis, are one of the central topics in the research of neuroscience, medical science, and artificial neural networks [1, 3, 10, 13, 15, 29, 39, 41, 43]. For many neuron models in terms of ODEs with or without time-delay, analysis of Hopf bifurcations, stability by Lyapunov exponents, energy or Hamiltonian functions, and numerical simulations are the main approaches to show synchronization of neuron ensembles or neural networks [10, 18, 25, 28, 37, 48]. Dynamics and synchronization of cellular neural networks and memristive neural networks modeled by partly diffusive Hindmarsh-Rose equations and FitzHugh-Nagumo equations have been studied in the authors' group recently [26, 27, 33, 34, 45, 46, 47]. These neural network models of hybrid (PDE-ODE) differential equations reflect the structural feature of neuron cells, which contain the short-branch dendrites receiving incoming signals and the long-branch axon transmitting outgoing signals through synapses. The memristor concept, coined by Chua [7], describes the effect of electromagnetic flux on moving charges such as the ionic currents in neuron cells.
In advanced biological neuron models and artificial intelligence computing [2, 11, 19, 22, 36, 38, 39, 42], the memristive feature is exhibited as a new type of synaptic coupling (other than electrical and chemical) or as an ideal component that has nonvolatile properties and can process dynamically memorized signal information to deal with complex or chaotic behaviors in neural networks. Memristor-based differential equation models now appear in many fields with applications to image encryption, DNA sequence operations, brain criticality, cell physiology, cybersecurity, drift-diffusion models in semiconductor devices, and quantum computers, cf. [20, 22, 23, 31, 35, 36, 40]. Research on the dynamics of memristive neural networks in ODE models with linear interneuron couplings has been increasing in recent years, cf. [2, 14, 11, 21, 25, 30, 37, 38, 39, 44], mainly by computational simulations combined with semi-analytic methods. Very recently in [45, 46, 47] the authors rigorously proved the dissipative dynamics and the exponential synchronization of the memristive Hindmarsh-Rose neural networks and FitzHugh-Nagumo neural networks with the partly diffusive PDE-ODE models and linear interneuron couplings. Note that the linear interneuron coupling can be viewed as a strong coupling for neural networks, which is mathematically amenable but may not always well reflect the biological synaptic interactions. It is an open and challenging problem to theoretically show that partly diffusive and memristive biological neural networks with the nonlinear interneuron coupling shown in (1.1) with (1.2) are dissipative and synchronizable under a threshold condition. Such a nonlinear coupling can be called weak coupling. In this work we shall prove a sufficient condition on the network coupling strength to ensure an exponential synchronization of the neural networks modeled in (1.1)-(1.2), by the approach of showing dissipative dynamics of the system through sharp and uniform integral estimates. Moreover, the general model (1.1) with the Assumptions (1.5)-(1.6) for neural networks can cover all the typical neuron models including the Hindmarsh-Rose equations [16], FitzHugh-Nagumo equations [12], Hodgkin-Huxley equations [17], FitzHugh-Rinzel equations [42], and Morris-Lecar equations [41]. It is worth mentioning that the effective methodology developed here, from dissipative dynamics toward synchronization, originated in J.K. Hale's paper in 1997 [15]. This approach can be extended and explored to study many other complex or artificial neural networks in a broad scope.

## 2. **Formulation and Preliminaries**

For a framework to formulate the solution and dynamics problem of the neural network system (1.1)-(1.4), we define two Hilbert spaces of functions: \[E=[L^{2}(\Omega,\mathbb{R}^{2+\ell})]^{m}\quad\text{and}\quad\Pi=[H^{1}(\Omega)\times L^{2}(\Omega,\mathbb{R}^{\ell+1})]^{m}\] where \(H^{1}(\Omega)\) is a Sobolev space. One can call \(E\) the energy space and \(\Pi\) the regular space. The norm and inner-product of \(L^{2}(\Omega)\) or \(E\) will be denoted by \(\|\,\cdot\,\|\) and \(\langle\,\cdot,\cdot\,\rangle\), respectively. We use \(|\,\cdot\,|\) to denote a vector norm or a set measure in Euclidean spaces \(\mathbb{R}^{n}\).
The initial-boundary value problem (1.1)-(1.4) can be formulated into an initial value problem of the evolutionary equation: \[\begin{split}\frac{\partial g}{\partial t}=&\,Ag+F(g),\,\,\,t>0,\\ &\,g(0)=g^{0}\in E.\end{split} \tag{2.1}\] The unknown function in (2.1) is a column vector \(g(t)=\text{col}\,\,(g_{1}(t),g_{2}(t),\cdots,g_{m}(t))\), where the component subvector \[g_{i}(t)=\text{col}\,(u_{i}(t,\cdot),\,z_{i}(t,\cdot),\,\rho_{i}(t,\cdot)),\quad\text{for}\,\,1\leq i\leq m,\] characterizes the dynamics of the neuron \(\mathcal{N}_{i}\). The initial data function in (2.1) is \[g(0)=g^{0}=\text{col}\,\,(g_{1}^{0},\,g_{2}^{0},\cdots,g_{m}^{0})\quad\text{where}\,\,\,g_{i}^{0}=\text{col}\,(u_{i}^{0},\,z_{i}^{0},\,\rho_{i}^{0}),\,\,\,1\leq i\leq m.\] The energy norm \(\|g(t)\|\) of a solution \(g(t)\) for the evolutionary equation (2.1) in the space \(E\) is given by \[\|g(t)\|^{2}=\sum_{i=1}^{m}\|g_{i}(t)\|^{2}=\sum_{i=1}^{m}\left(\|u_{i}(t)\|^{2}+\|z_{i}(t)\|^{2}+\|\rho_{i}(t)\|^{2}\right).\] The closed linear operator \(A\) in (2.1) is defined by \(A=\operatorname{diag}\left(A_{1},A_{2},\cdots,A_{m}\right)\), where \[A_{i}=\begin{pmatrix}\eta\Delta&0&0\\ 0&\Lambda&0\\ 0&0&-bI\end{pmatrix}_{(2+\ell)\,\times\,(2+\ell)}:\mathcal{D}(A)\to E,\quad i=1,2,\cdots,m, \tag{2.2}\] with the domain \(\mathcal{D}(A)=\{g\in[H^{2}(\Omega)\times L^{2}(\Omega,\mathbb{R}^{\ell+1})]^{m}:\partial u_{i}/\partial\nu=0,1\leq i\leq m\}\). The operator \(A\) is the generator of a \(C_{0}\)-semigroup \(\{e^{At}\}_{t\geq 0}\) on the space \(E\), and \(I\) is the identity operator. By the fact that the Sobolev imbedding \(H^{1}(\Omega)\hookrightarrow L^{6}(\Omega)\) is a continuous mapping for space dimension \(n\leq 3\) and according to Assumptions (1.5) and (1.6), the nonlinear mapping \[F(g)=\begin{pmatrix}f(u_{1},z_{1})-k\tanh(\rho_{1})u_{1}-Pu_{1}\sum_{j=2}^{m}\Gamma(u_{j})\\ h(u_{1},z_{1})\\ au_{1}\\ \vdots\\ f(u_{m},z_{m})-k\tanh(\rho_{m})u_{m}-Pu_{m}\sum_{j=1}^{m-1}\Gamma(u_{j})\\ h(u_{m},z_{m})\\ au_{m}\end{pmatrix}:\Pi\longrightarrow E \tag{2.3}\] is a locally Lipschitz continuous mapping. In this work we shall consider the weak solutions, cf. [6, Section XV.3] and [32, Section 4.2.3], of this initial value problem (2.1). **Definition 2.1**.: A \((2+\ell)m\)-dimensional vector function \(g(t,x)\), where \((t,x)\in[0,\tau]\times\Omega\), is called a weak solution to the initial value problem of the evolutionary equation (2.1), if the following two conditions are satisfied: (i) \(\frac{d}{dt}(g(t),\zeta)=(A^{1/2}g(t),A^{1/2}\zeta)+(F(g(t)),\zeta)\) is satisfied for almost every \(t\in[0,\tau]\) and any \(\zeta\in E^{*}=E\). (ii) \(g(t,\cdot)\in C([0,\tau];E)\cap C^{1}((0,\tau);E)\) and \(g(0)=g^{0}\). Here \(\mathcal{D}(A^{1/2})=\{g\in\Pi:\partial u_{i}/\partial\nu=0,\,1\leq i\leq m\}\) and \(E^{*}\) is the dual space of \(E\). The bilinear \(E\) vs \(E^{*}\) dual product is in the scalar distribution sense. The following proposition can be proved by the Galerkin spectral approximation method [6] for the first statement on weak solutions, and by the compactness property of the parabolic semigroup \(e^{At}\) in a mild-solution bootstrap argument [32, Theorem 42.12 and Corollary 42.13] for the second statement on strong solutions when \(t>0\).
**Proposition 2.2**.: _For any given initial state \(g^{0}\in E\), there exists a unique weak solution \(g(t;g^{0}),\,t\in[0,\tau]\), for some \(\tau>0\) possibly depending on \(g^{0}\), of the initial value problem (2.1) formulated from the memristive neural network equations (1.1). The weak solution \(g(t;g^{0})\) continuously depends on the initial data \(g^{0}\) and satisfies_ \[g\in C([0,\tau];E)\cap C^{1}((0,\tau);E)\cap L^{2}((0,\tau);\Pi). \tag{2.4}\] _Moreover, for any initial state \(g^{0}\in E\), the weak solution \(g(t;g^{0})\) becomes a strong solution for \(t\in(0,\tau)\), which has the regularity_ \[g\in C((0,\tau];\Pi)\cap C^{1}((0,\tau);\Pi). \tag{2.5}\] An infinite dimensional dynamical system [6, 32] for time \(t\geq 0\) only is usually called a semiflow. The absorbing set defined below is the key concept for characterizing the dissipative dynamics of a semiflow on a Banach space. **Definition 2.3**.: Let \(\{S(t)\}_{t\geq 0}\) be a semiflow on a Banach space \(\mathscr{X}\). A bounded set \(B^{*}\) of \(\mathscr{X}\) is called an _absorbing set_ of this semiflow, if for any given bounded set \(B\subset\mathscr{X}\) there exists a finite time \(T_{B}\geq 0\) depending on \(B\), such that \(S(t)B\subset B^{*}\) for all \(t>T_{B}\). The semiflow is said to be _dissipative_ if there exists an absorbing set. The Young's inequality in a generic form will be used throughout this paper. For any two positive numbers \(x\) and \(y\), if \(\frac{1}{p}+\frac{1}{q}=1\) and \(p>1,q>1\), one has \[x\,y\leq\frac{1}{p}\varepsilon x^{p}+\frac{1}{q}C(\varepsilon,p)\,y^{q}\leq\varepsilon x^{p}+C(\varepsilon,p)\,y^{q},\quad C(\varepsilon,p)=\varepsilon^{-q/p}, \tag{2.6}\] where the constant \(\varepsilon>0\) can be arbitrarily small. The Gagliardo-Nirenberg interpolation inequalities [32, Theorem B.3] will be exploited in a crucial step to prove the main result on neural network synchronization.

## 3. **Dissipative Dynamics of the Memristive Semiflow**

In this section, we first prove the global existence of weak solutions in time for the initial value problem (2.1) and establish a solution semiflow of the memristive neural networks modeled by (1.1). Then we show the existence of an absorbing set of this semiflow in the state space \(E\), which exhibits the dissipative dynamics of this memristive neural network semiflow. **Theorem 3.1**.: _Under the Assumptions (1.5) and (1.6), for any initial state \(g^{0}\in E\), there exists a unique global weak solution in time, \(g(t;g^{0})=\operatorname{col}\left(u_{i}(t),z_{i}(t),\rho_{i}(t):1\leq i\leq m\right),\,t\in[0,\infty)\), to the initial value problem (2.1) of the memristive neural network equations (1.1)._ Proof.: Take the \(L^{2}\) inner-products of the \(u_{i}\)-equation in (1.1) with \(C_{1}u_{i}(t,x)\) for \(1\leq i\leq m\), with a scaling constant \(C_{1}>0\) to be chosen later. Then sum them up.
By using the Gauss divergence theorem and the boundary condition (1.3) to treat the Laplacian term and by the Assumption (1.5), we can get \[\begin{split}&\frac{C_{1}}{2}\frac{d}{dt}\sum_{i=1}^{m}\|u_{i}(t)\|^{2}+C_{1}\sum_{i=1}^{m}\eta\|\nabla u_{i}(t)\|^{2}\\ =&\,C_{1}\sum_{i=1}^{m}\int_{\Omega}\left[f(u_{i},z_{i})u_{i}-k\tanh(\rho_{i})u_{i}^{2}-\sum_{j=1,j\neq i}^{m}\frac{Pu_{i}^{2}}{1+\exp[-r(u_{j}-V)]}\right]dx\\ \leq&\,C_{1}\sum_{i=1}^{m}\int_{\Omega}\left[-\alpha|u_{i}|^{4}+\lambda|u_{i}||z_{i}|+J-k\tanh(\rho_{i})u_{i}^{2}-\sum_{j=1,j\neq i}^{m}\frac{Pu_{i}^{2}}{1+\exp[-r(u_{j}-V)]}\right]dx\\ \leq&\,C_{1}\sum_{i=1}^{m}\int_{\Omega}\left[-\alpha|u_{i}|^{4}+k|u_{i}|^{2}+\lambda|u_{i}||z_{i}|+J\right]dx\\ \leq&\,-C_{1}\alpha\sum_{i=1}^{m}\int_{\Omega}u_{i}^{4}(t,x)\,dx+\left(C_{1}k+\frac{C_{1}^{2}\lambda^{2}}{\gamma}\right)\sum_{i=1}^{m}\|u_{i}\|^{2}+\frac{\gamma}{4}\sum_{i=1}^{m}\|z_{i}\|^{2}+C_{1}mJ\,|\Omega|,\end{split} \tag{3.1}\] because \[-\sum_{j=1,j\neq i}^{m}\frac{Pu_{i}^{2}}{1+\exp[-r(u_{j}-V)]}\leq 0,\] where the Young's inequality (2.6) and the property \(|\tanh(\rho_{i})|\leq 1\) are used. Then sum up the \(L^{2}\) inner-products of the \(z_{i}\)-equation with \(z_{i}(t,x)\) and the \(\rho_{i}\)-equation with \(\rho_{i}(t,x)\) in (1.1), \(1\leq i\leq m\). Again by (2.6), we have \[\begin{split}&\frac{1}{2}\frac{d}{dt}\sum_{i=1}^{m}\left(\|z_{i}(t)\|^{2}+\|\rho_{i}(t)\|^{2}\right)=\sum_{i=1}^{m}\int_{\Omega}\left(\langle\Lambda z_{i},z_{i}\rangle+h(u_{i},z_{i})z_{i}+au_{i}\rho_{i}-b\rho_{i}^{2}\right)dx\\ \leq&\,\sum_{i=1}^{m}\int_{\Omega}\left[-\gamma|z_{i}|^{2}+q|u_{i}|^{2}|z_{i}|+L|z_{i}|+au_{i}\rho_{i}-b\,\rho_{i}^{2}\right]dx\\ \leq&\,\sum_{i=1}^{m}\frac{q^{2}}{4\gamma}\int_{\Omega}u_{i}^{4}(t,x)\,dx-\gamma\sum_{i=1}^{m}\|z_{i}\|^{2}+\frac{a^{2}}{2b}\sum_{i=1}^{m}\|u_{i}\|^{2}-\frac{b}{2}\sum_{i=1}^{m}\|\rho_{i}\|^{2}+\frac{mL^{2}}{\gamma}|\Omega|\\ =&\,\sum_{i=1}^{m}\frac{q^{2}}{4\gamma}\int_{\Omega}u_{i}^{4}(t,x)\,dx-\frac{3\gamma}{4}\sum_{i=1}^{m}\|z_{i}\|^{2}+\frac{a^{2}}{2b}\sum_{i=1}^{m}\|u_{i}\|^{2}-\frac{b}{2}\sum_{i=1}^{m}\|\rho_{i}\|^{2}+\frac{mL^{2}}{\gamma}|\Omega|.\end{split} \tag{3.2}\] Both inequalities (3.1) and (3.2) are valid in the time interval \(I_{max}(g^{0})=[0,T_{max})\) of solution existence for each weak solution \(g(t;g^{0})\).
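To be explicit about the use of (2.6) above: the cross term in (3.1) is split with \(p=q=2\) and \(\varepsilon=\gamma/4\), namely \[C_{1}\lambda|u_{i}||z_{i}|\leq\frac{\gamma}{4}|z_{i}|^{2}+\frac{C_{1}^{2}\lambda^{2}}{\gamma}|u_{i}|^{2},\] which produces the coefficient \(C_{1}^{2}\lambda^{2}/\gamma\) of \(\|u_{i}\|^{2}\) and the term \(\frac{\gamma}{4}\sum_{i=1}^{m}\|z_{i}\|^{2}\) in (3.1); similarly, in (3.2) the term \(au_{i}\rho_{i}\) is split with \(\varepsilon=b/2\), \[au_{i}\rho_{i}\leq\frac{b}{2}\rho_{i}^{2}+\frac{a^{2}}{2b}u_{i}^{2}.\]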
Now we add the above two inequalities (3.1) and (3.2) to obtain \[\begin{split}&\frac{1}{2}\frac{d}{dt}\sum_{i=1}^{m}\left(C_{1}\|u_{i}(t)\|^{2}+\|z_{i}(t)\|^{2}+\|\rho_{i}(t)\|^{2}\right)+C_{1}\eta\sum_{i=1}^{m}\|\nabla u_{i}(t)\|^{2}\\ \leq&-C_{1}\alpha\sum_{i=1}^{m}\int_{\Omega}u_{i}^{4}(t,x)\,dx+\left(C_{1}k+\frac{C_{1}^{2}\lambda^{2}}{\gamma}\right)\sum_{i=1}^{m}\|u_{i}\|^{2}+\frac{\gamma}{4}\sum_{i=1}^{m}\|z_{i}\|^{2}+C_{1}mJ|\Omega|\\ &+\sum_{i=1}^{m}\frac{q^{2}}{4\gamma}\int_{\Omega}u_{i}^{4}(t,x)\,dx+\frac{a^{2}}{2b}\sum_{i=1}^{m}\|u_{i}\|^{2}-\frac{3\gamma}{4}\sum_{i=1}^{m}\|z_{i}\|^{2}-\frac{b}{2}\sum_{i=1}^{m}\|\rho_{i}\|^{2}+\frac{mL^{2}}{\gamma}|\Omega|\\ =&-\left(C_{1}\alpha-\frac{q^{2}}{4\gamma}\right)\sum_{i=1}^{m}\int_{\Omega}u_{i}^{4}(t,x)\,dx+\left(C_{1}k+\frac{C_{1}^{2}\lambda^{2}}{\gamma}+\frac{a^{2}}{2b}\right)\sum_{i=1}^{m}\int_{\Omega}u_{i}^{2}(t,x)\,dx\\ &-\frac{\gamma}{2}\sum_{i=1}^{m}\|z_{i}\|^{2}-\frac{b}{2}\sum_{i=1}^{m}\|\rho_{i}\|^{2}+m\left(C_{1}J+\frac{L^{2}}{\gamma}\right)|\Omega|,\quad t\in I_{max}=[0,T_{max}).\end{split} \tag{3.3}\] Choose the scaling constant \(C_{1}\) to be \[C_{1}=\frac{1}{\alpha}\left(1+\frac{q^{2}}{4\gamma}\right)\quad\text{so that}\quad C_{1}\alpha-\frac{q^{2}}{4\gamma}=1. \tag{3.4}\] With this choice, from (3.3) it follows that \[\begin{split}&\frac{1}{2}\frac{d}{dt}\sum_{i=1}^{m}\left(C_{1}\|u_{i}\|^{2}+\|z_{i}\|^{2}+\|\rho_{i}\|^{2}\right)+C_{1}\eta\sum_{i=1}^{m}\|\nabla u_{i}\|^{2}\\ &+\sum_{i=1}^{m}\int_{\Omega}u_{i}^{4}(t,x)\,dx-\left(C_{1}k+\frac{C_{1}^{2}\lambda^{2}}{\gamma}+\frac{a^{2}}{2b}\right)\sum_{i=1}^{m}\int_{\Omega}u_{i}^{2}(t,x)\,dx\\ &+\frac{\gamma}{2}\sum_{i=1}^{m}\|z_{i}\|^{2}+\frac{b}{2}\sum_{i=1}^{m}\|\rho_{i}\|^{2}\leq m\left(C_{1}J+\frac{L^{2}}{\gamma}\right)|\Omega|,\quad t\in I_{max}=[0,T_{max}).\end{split} \tag{3.5}\] By completing the square, we have \[\begin{split}&\sum_{i=1}^{m}\int_{\Omega}u_{i}^{4}(t,x)\,dx-\left(C_{1}k+\frac{C_{1}^{2}\lambda^{2}}{\gamma}+\frac{a^{2}}{2b}\right)\sum_{i=1}^{m}\int_{\Omega}u_{i}^{2}(t,x)\,dx\\ =&\sum_{i=1}^{m}\int_{\Omega}\left(u_{i}^{4}(t,x)-\left(C_{1}k+\frac{C_{1}^{2}\lambda^{2}}{\gamma}+\frac{a^{2}}{2b}\right)u_{i}^{2}(t,x)\right)dx\\ =&\sum_{i=1}^{m}\int_{\Omega}\left(u_{i}^{2}(t,x)-\frac{1}{2}\left(C_{1}k+\frac{C_{1}^{2}\lambda^{2}}{\gamma}+\frac{a^{2}}{2b}+1\right)\right)^{2}dx\\ &+\sum_{i=1}^{m}\|u_{i}\|^{2}-\frac{m}{4}\left(C_{1}k+\frac{C_{1}^{2}\lambda^{2}}{\gamma}+\frac{a^{2}}{2b}+1\right)^{2}|\Omega|\\ \geq&\sum_{i=1}^{m}\|u_{i}\|^{2}-\frac{m}{4}\left(C_{1}k+\frac{C_{1}^{2}\lambda^{2}}{\gamma}+\frac{a^{2}}{2b}+1\right)^{2}|\Omega|.\end{split} \tag{3.6}\] Substitute (3.6) in (3.5). It yields the inequality \[\begin{split}&\frac{1}{2}\frac{d}{dt}\sum_{i=1}^{m}\left(C_{1}\|u_{i}\|^{2}+\|z_{i}\|^{2}+\|\rho_{i}\|^{2}\right)+C_{1}\eta\sum_{i=1}^{m}\|\nabla u_{i}\|^{2}\\ &+\sum_{i=1}^{m}\left(\|u_{i}\|^{2}+\frac{\gamma}{2}\|z_{i}\|^{2}+\frac{b}{2}\|\rho_{i}\|^{2}\right)\leq C_{2}m|\Omega|,\quad t\in I_{max},\end{split} \tag{3.7}\] where \[C_{2}=C_{1}J+\frac{L^{2}}{\gamma}+\frac{1}{4}\left(C_{1}k+\frac{C_{1}^{2}\lambda^{2}}{\gamma}+\frac{a^{2}}{2b}+1\right)^{2}. \tag{3.8}\]
We can remove the nonnegative term \(C_{1}\eta\sum_{i=1}^{m}\|\nabla u_{i}\|^{2}\) from (3.7) to obtain the Gronwall-type differential inequality: \[\begin{split}&\frac{d}{dt}\sum_{i=1}^{m}\left[C_{1}\|u_{i}\|^{2}+\|z_{i}\|^{2}+\|\rho_{i}\|^{2}\right]+\mu\sum_{i=1}^{m}\left[C_{1}\|u_{i}\|^{2}+\|z_{i}\|^{2}+\|\rho_{i}\|^{2}\right]\\ \leq&\frac{d}{dt}\sum_{i=1}^{m}\left[C_{1}\|u_{i}\|^{2}+\|z_{i}\|^{2}+\|\rho_{i}\|^{2}\right]+\sum_{i=1}^{m}\left(2\|u_{i}\|^{2}+\gamma\|z_{i}\|^{2}+b\|\rho_{i}\|^{2}\right)\\ \leq&\,2C_{2}m|\Omega|,\quad\text{for}\,\,t\in I_{max}=[0,T_{max}),\end{split} \tag{3.9}\] where \[\mu=\min\left\{\frac{2}{C_{1}},\;\gamma,\;b\right\}=\min\left\{\frac{8\alpha\gamma}{4\gamma+q^{2}},\;\gamma,\;b\right\}. \tag{3.10}\] Now solve the differential inequality (3.9) to obtain the following bounding estimate of all the weak solutions on the maximal existence time interval \(I_{max}\), \[\begin{split}&\|g(t,g^{0})\|^{2}=\sum_{i=1}^{m}\|g_{i}(t,g_{i}^{0})\|^{2}=\sum_{i=1}^{m}\big{(}\|u_{i}(t)\|^{2}+\|z_{i}(t)\|^{2}+\|\rho_{i}(t)\|^{2}\big{)}\\ &\leq\frac{\max\{C_{1},1\}}{\min\{C_{1},1\}}e^{-\mu\,t}\|g^{0}\|^{2}+\frac{2C_{2}m}{\mu\min\{C_{1},1\}}|\Omega|,\ \ \ \ t\in[0,\infty).\end{split} \tag{3.11}\] Here it is shown that \(I_{max}=[0,\infty)\) for every weak solution \(g(t,g^{0})\) because the solution will never blow up at any finite time. The uniqueness of any weak solution to the initial value problem (2.1) is shown in Proposition 2.2. Therefore, for any initial data \(g^{0}=(g_{1}^{0},\cdots,g_{m}^{0})\in E\), there exists a unique global weak solution of the initial value problem (2.1) for this memristive reaction-diffusion neural network model (1.1)-(1.4) in the space \(E\) for time \(t\in[0,\infty)\). Based on the global existence of weak solutions established in Theorem 3.1, we can define the solution semiflow \(\{S(t):E\to E\}_{t\geq 0}\) of the memristive neural network system (1.1) to be \[S(t):g^{0}\longmapsto g(t;g^{0})=\operatorname{col}\big{(}u_{i}(t,\cdot),z_{i}(t,\cdot),\rho_{i}(t,\cdot):1\leq i\leq m\big{)},\ \ \ t\geq 0.\] We shall call this semiflow \(\{S(t)\}_{t\geq 0}\) the _memristive reaction-diffusion neural network semiflow_ generated by the model equations (1.1). The next theorem shows that the memristive reaction-diffusion neural network semiflow \(\{S(t)\}_{t\geq 0}\) is a dissipative dynamical system in the state space \(E\). **Theorem 3.2**.: _There exists a bounded absorbing set for the memristive reaction-diffusion neural network semiflow \(\{S(t)\}_{t\geq 0}\) in the state space \(E\), which is the bounded ball_ \[B^{*}=\{g\in E:\|g\|^{2}\leq K\} \tag{3.12}\] _where the constant_ \[K=1+\frac{2C_{2}m}{\mu\min\{C_{1},1\}}|\Omega|, \tag{3.13}\] _and the positive constants \(C_{1}\) and \(C_{2}\) are given in (3.4) and (3.8)._ Proof.: This is the consequence of the global uniform estimate (3.11) shown in Theorem 3.1, which implies that \[\limsup_{t\to\infty}\|g(t,g^{0})\|^{2}=\limsup_{t\to\infty}\,\sum_{i=1}^{m}\|g_{i}(t,g_{i}^{0})\|^{2}<K \tag{3.14}\] for all weak solutions of (2.1) with any initial data \(g^{0}\) in \(E\). Moreover, for any given bounded set \(B=\{g\in E:\|g\|^{2}\leq\mathcal{R}\}\) in \(E\), there exists a finite time \[T_{B}=\frac{1}{\mu}\log^{+}\left(\mathcal{R}\,\frac{\max\{C_{1},1\}}{\min\{C_{1},1\}}\right)\] such that all the solution trajectories started at the initial time \(t=0\) from the set \(B\) will permanently enter the bounded ball \(B^{*}\) shown in (3.12) for \(t>T_{B}\).
Therefore, the bounded ball \(B^{*}\) is an absorbing set in \(E\) for the semiflow \(\{S(t)\}_{t\geq 0}\), so that this memristive neural network semiflow is dissipative.

## 4. **Higher-Order and Pointwise Ultimate Bounds**

We shall further prove an ultimate uniform bound of the membrane potential functions \(\{u_{i}(t):1\leq i\leq m\}\) for all the weak solutions in the higher-order integrable space \(L^{4}(\Omega)\). Note that Proposition 2.2 and Theorem 3.1 together show that any weak solution satisfies \(g(t)\in C((0,\infty),\Pi)\), so that the component function \(u_{i}(t)\in C((0,\infty),H^{1}(\Omega))\subset C((0,\infty),L^{6}(\Omega))\subset C((0,\infty),L^{4}(\Omega))\).

**Theorem 4.1**.: _There exists a constant \(Q>0\) such that for any initial data \(g^{0}\in E\), the membrane potential components \(u_{i},1\leq i\leq m\), of the weak solution \(g(t,g^{0})=(g_{1}(t),\cdots,g_{m}(t))\) of the initial value problem (2.1) for the memristive reaction-diffusion neural network \(\mathcal{NW}\) are ultimately uniformly bounded in the space \(L^{4}(\Omega)\) and_
\[\limsup_{t\to\infty}\,\sum_{i=1}^{m}\|u_{i}(t)\|_{L^{4}}^{4}<Q. \tag{4.1}\]

Proof.: Take the \(L^{2}\) inner-product of the \(u_{i}\)-equation in (1.1) with \(u_{i}^{3}(t,\cdot),1\leq i\leq m\), and sum them up. By the boundary condition (1.3) and Assumption (1.5), we have
\[\begin{split}&\frac{1}{4}\,\frac{d}{dt}\sum_{i=1}^{m}\|u_{i}(t)\|_{L^{4}}^{4}+3\eta\sum_{i=1}^{m}\|u_{i}\nabla u_{i}\|_{L^{2}}^{2}\\ &=\sum_{i=1}^{m}\int_{\Omega}(f(u_{i},z_{i})u_{i}^{3}-k\tanh(\rho_{i})u_{i}^{4})\,dx-\sum_{i=1}^{m}\sum_{j=1}^{m}\int_{\Omega}\frac{Pu_{i}^{4}}{1+\exp[-r(u_{j}-V)]}\,dx\\ &\leq\sum_{i=1}^{m}\int_{\Omega}\left(-\alpha u_{i}^{6}+\lambda u_{i}^{3}|z_{i}|+Ju_{i}^{3}+ku_{i}^{4}\right)dx,\quad t>0.\end{split} \tag{4.2}\]
By Young's inequality (2.6), it is seen that
\[\lambda u_{i}^{3}|z_{i}|+Ju_{i}^{3}+ku_{i}^{4}\leq\left(\frac{\alpha}{4}u_{i}^{6}+\frac{\lambda^{2}}{\alpha}z_{i}^{2}\right)+\left(\frac{\alpha}{4}u_{i}^{6}+\frac{J^{2}}{\alpha}\right)+\left(\frac{\alpha}{4}u_{i}^{6}+\frac{64}{27\alpha^{2}}k^{3}\right), \tag{4.3}\]
where the last bracketed pair of terms comes from
\[ku_{i}^{4}=\left[\frac{3\alpha}{8}u_{i}^{6}\right]^{2/3}\left[\frac{64}{9\alpha^{2}}k^{3}\right]^{1/3}\leq\frac{2}{3}\left(\frac{3\alpha}{8}u_{i}^{6}\right)+\frac{1}{3}\left(\frac{64}{9\alpha^{2}}k^{3}\right)=\frac{\alpha}{4}u_{i}^{6}+\frac{64}{27\alpha^{2}}k^{3}.\]
Note that
\[u_{i}^{4}\leq\frac{1}{3}+\frac{2}{3}u_{i}^{6}\leq 1+u_{i}^{6}\quad\text{so that}\,-u_{i}^{6}\leq-u_{i}^{4}+1.
\tag{4.4}\]
Using the inequalities (4.3) and (3.14) in (4.2), it follows that
\[\begin{split}&\frac{1}{4}\,\frac{d}{dt}\sum_{i=1}^{m}\|u_{i}(t)\|_{L^{4}}^{4}+3\eta\sum_{i=1}^{m}\|u_{i}\nabla u_{i}\|^{2}\\ \leq&\sum_{i=1}^{m}\int_{\Omega}\left[-\alpha u_{i}^{6}+\left(\frac{\alpha}{4}u_{i}^{6}+\frac{\lambda^{2}}{\alpha}z_{i}^{2}\right)+\left(\frac{\alpha}{4}u_{i}^{6}+\frac{J^{2}}{\alpha}\right)+\left(\frac{\alpha}{4}u_{i}^{6}+\frac{64}{27\alpha^{2}}k^{3}\right)\right]dx\\ =&\sum_{i=1}^{m}\left(-\frac{\alpha}{4}\int_{\Omega}u_{i}^{6}\,dx+\frac{\lambda^{2}}{\alpha}\|z_{i}(t)\|^{2}\right)+m\left(\frac{J^{2}}{\alpha}+\frac{64k^{3}}{27\alpha^{2}}\right)|\Omega|\\ \leq&\sum_{i=1}^{m}\left(-\frac{\alpha}{4}\int_{\Omega}u_{i}^{4}\,dx+\frac{\lambda^{2}}{\alpha}\|z_{i}(t)\|^{2}\right)+m\left(\frac{\alpha}{4}+\frac{J^{2}}{\alpha}+\frac{64k^{3}}{27\alpha^{2}}\right)|\Omega|\\ <&-\frac{\alpha}{4}\sum_{i=1}^{m}\|u_{i}(t)\|_{L^{4}}^{4}+\frac{\lambda^{2}}{\alpha}K+m\left(\frac{\alpha}{4}+\frac{J^{2}}{\alpha}+\frac{64k^{3}}{27\alpha^{2}}\right)|\Omega|,\end{split} \tag{4.5}\]
for sufficiently large time \(t\), where the constant \(K\) given in (3.13) is valid for all the weak solutions. Consequently, with the nonnegative gradient terms removed, the differential inequality (4.5) shows that
\[\begin{split}\frac{d}{dt}\sum_{i=1}^{m}\|u_{i}(t)\|_{L^{4}}^{4}+\alpha\sum_{i=1}^{m}\|u_{i}(t)\|_{L^{4}}^{4}&\leq\frac{4\lambda^{2}}{\alpha}K+m\left(\alpha+\frac{4J^{2}}{\alpha}+\frac{256k^{3}}{27\alpha^{2}}\right)|\Omega|\\ &<\frac{4\lambda^{2}}{\alpha}K+m\left(\alpha+\frac{4J^{2}}{\alpha}+\frac{10k^{3}}{\alpha^{2}}\right)|\Omega|,\end{split} \tag{4.6}\]
for \(t>T(g^{0})\), where \(T(g^{0})>0\) is a finite time after which the solution trajectory starting from the initial state \(g^{0}\) is absorbed into the absorbing set \(B^{*}\) in the state space \(E\), as shown in Theorem 3.2. By the parabolic regularity stated in Proposition 2.2, for any weak solution \(g(t,g^{0})\) one has \(u_{i}(T(g^{0}))\in H^{1}(\Omega)\subset L^{4}(\Omega)\) for \(1\leq i\leq m\). Then the second statement in Proposition 2.2 shows that any weak solution has the regularity
\[\sum_{i=1}^{m}u_{i}(t)\in C([T(g^{0}),\infty),H^{1}(\Omega))\subset C([T(g^{0}),\infty),L^{4}(\Omega)).\]
Applying the Gronwall inequality to (4.6) results in the following bounding estimate of all the \(u_{i}\) components, \(1\leq i\leq m\), in the space \(L^{4}(\Omega)\):
\[\begin{split}\sum_{i=1}^{m}\|u_{i}(t)\|_{L^{4}}^{4}&<e^{-\alpha(t-T(g^{0}))}\sum_{i=1}^{m}\|u_{i}(T(g^{0}))\|_{L^{4}}^{4}\\ &+\frac{4\lambda^{2}}{\alpha^{2}}K+m\left(1+\frac{4J^{2}}{\alpha^{2}}+\frac{10k^{3}}{\alpha^{3}}\right)|\Omega|,\quad\text{for}\,\,\,t\geq T(g^{0}).\end{split} \tag{4.7}\]
Therefore (4.1) is proved with
\[Q=1+\frac{4\lambda^{2}}{\alpha^{2}}K+m\left(1+\frac{4J^{2}}{\alpha^{2}}+\frac{10k^{3}}{\alpha^{3}}\right)|\Omega|, \tag{4.8}\]
which is a constant independent of any initial data.

The pointwise estimate in the following theorem will be used to deal with the nonlinear weak coupling toward exponential synchronization featured in this work.
**Theorem 4.2**.: _There exists a constant \(G>0\) such that for any initial data \(g^{0}\in E\), the membrane potential components \(u_{i}(t,x),1\leq i\leq m\), of the weak solution \(g(t,g^{0})=(g_{1}(t),\cdots,g_{m}(t))\) of the initial value problem (2.1) for the memristive reaction-diffusion neural network \(\mathcal{NW}\) are ultimately uniformly bounded in \(\mathbb{R}\) and_
\[\limsup_{t\to\infty}\,\sum_{i=1}^{m}|u_{i}(t,x)|_{\mathbb{R}}<G,\quad\text{for}\,\,\,x\in\Omega. \tag{4.9}\]

Proof.: Similar to (3.1) in the proof of Theorem 3.1, we can multiply the \(u_{i}\)-equation in (1.1) by \(C_{1}u_{i}(t,x)\) for \(1\leq i\leq m\), where \(C_{1}\) is the same constant given in (3.4), and sum them up to get
\[\begin{split}&\frac{C_{1}}{2}\frac{d}{dt}\sum_{i=1}^{m}|u_{i}(t,x)|^{2}+C_{1}\sum_{i=1}^{m}\eta|\nabla u_{i}(t,x)|^{2}\\ \leq&\sum_{i=1}^{m}\left[-C_{1}\alpha\,u_{i}^{4}(t,x)+\left(C_{1}k+\frac{C_{1}^{2}\lambda^{2}}{\gamma}\right)u_{i}^{2}(t,x)+\frac{\gamma}{4}z_{i}^{2}(t,x)\right]+C_{1}mJ.\end{split} \tag{4.10}\]
Similar to (3.2), by multiplication and summation but without integration, we have
\[\begin{split}&\frac{1}{2}\frac{d}{dt}\sum_{i=1}^{m}\left(|z_{i}(t,x)|^{2}+|\rho_{i}(t,x)|^{2}\right)\\ \leq&\sum_{i=1}^{m}\left[\frac{q^{2}}{4\gamma}u_{i}^{4}(t,x)-\frac{3\gamma}{4}z_{i}^{2}(t,x)+\frac{a^{2}}{2b}u_{i}^{2}(t,x)-\frac{b}{2}\rho_{i}^{2}(t,x)\right]+\frac{mL^{2}}{\gamma}.\end{split} \tag{4.11}\]
Then, parallel to the steps from (3.3) through (3.9) in the proof of Theorem 3.1, one can reach the pointwise differential inequality
\[\begin{split}&\frac{d}{dt}\sum_{i=1}^{m}\big{[}C_{1}u_{i}^{2}(t,x)+z_{i}^{2}(t,x)+\rho_{i}^{2}(t,x)\big{]}\\ &+\mu\sum_{i=1}^{m}\big{[}C_{1}u_{i}^{2}(t,x)+z_{i}^{2}(t,x)+\rho_{i}^{2}(t,x)\big{]}\leq 2C_{2}m,\quad t>0,\ x\in\Omega,\end{split} \tag{4.12}\]
where \(C_{2}\) and \(\mu\) are the two universal positive constants given in (3.8) and (3.10), respectively. It follows that
\[\sum_{i=1}^{m}\big{(}u_{i}^{2}(t,x)+z_{i}^{2}(t,x)+\rho_{i}^{2}(t,x)\big{)}\leq\frac{\max\{C_{1},1\}}{\min\{C_{1},1\}}e^{-\mu\,t}|g^{0}(x)|^{2}+\frac{2C_{2}m}{\mu\min\{C_{1},1\}}, \tag{4.13}\]
for \(t>0,x\in\Omega\), which implies that (4.9) is valid with the uniform constant
\[G=\left[1+\frac{2C_{2}m}{\mu\min\{C_{1},1\}}\right]^{1/2}, \tag{4.14}\]
which is independent of any initial data.

## 5. **Synchronization of Memristive Reaction-Diffusion Neural Networks**

In this section, we shall prove the main result on the synchronization of the memristive reaction-diffusion neural networks described by (1.1) in the state space \(E\). This result provides a quantitative threshold condition for the interneuron coupling strength to reach neural network synchronization.

**Definition 5.1**.: For a model evolutionary equation of a general neural network called _GNW_, such as (2.1) formulated from the memristive reaction-diffusion equations (1.1), we define the asynchronous degree of this neural network in a state space (as a Banach space) \(W\) to be
\[deg_{s}\left(\text{\emph{GNW}}\right)=\sum_{1\,\leq i\,<j\,\leq\,m}\left\{\sup_{g_{i}^{0},\,g_{j}^{0}\,\in\,W}\ \left\{\limsup_{t\to\infty}\,\|g_{i}(t;g_{i}^{0})-g_{j}(t;g_{j}^{0})\|_{W}\right\}\right\}\]
where \(g_{i}(t)\) and \(g_{j}(t)\) are any two solutions of this model evolutionary equation with the initial states \(g_{i}^{0}\) and \(g_{j}^{0}\) respectively for two neurons \(\mathcal{N}_{i}\) and \(\mathcal{N}_{j}\), \(1\leq i,j\leq m\), in the network.
The neural network is said to be asymptotically synchronized if
\[deg_{s}\left(\text{\emph{GNW}}\right)=0.\]
If the asymptotic convergence to zero of the difference norm for any two neurons in the network admits a uniform exponential rate, then the neural network is called exponentially synchronized.

Introduce the neuron difference functions: For \(i,j=1,\cdots,m\), define
\[U_{ij}(t,x)=u_{i}(t,x)-u_{j}(t,x),\qquad Z_{ij}(t,x)=z_{i}(t,x)-z_{j}(t,x),\qquad R_{ij}(t,x)=\rho_{i}(t,x)-\rho_{j}(t,x).\]
Given any initial state \(g^{0}=\operatorname{col}\left(g^{0}_{1},\cdots,g^{0}_{m}\right)\) in the space \(E\), we consider the difference between any two solutions of (2.1) associated with two neurons \(\mathcal{N}_{i}\) and \(\mathcal{N}_{j}\) in the network:
\[g_{i}(t,g^{0}_{i})-g_{j}(t,g^{0}_{j})=\operatorname{col}\left(U_{ij}(t,\cdot),Z_{ij}(t,\cdot),R_{ij}(t,\cdot)\right),\quad t\geq 0.\]
By subtracting the three governing equations for the \(j\)-th neuron from the corresponding governing equations for the \(i\)-th neuron in (1.1), we obtain the following differencing reaction-diffusion equations. For \(i,j=1,\cdots,m\),
\[\begin{split}\frac{\partial U}{\partial t}=&\,\eta\Delta U+f(u_{i},z_{i})-f(u_{j},z_{j})-k(\tanh(\rho_{i})u_{i}-\tanh(\rho_{j})u_{j})\\ &-P\left[u_{i}\sum_{\nu=1}^{m}\Gamma(u_{\nu})-u_{j}\sum_{\nu=1}^{m}\Gamma(u_{\nu})\right],\\ \frac{\partial Z}{\partial t}=&\,\Lambda Z+h(u_{i},z_{i})-h(u_{j},z_{j}),\\ \frac{\partial R}{\partial t}=&\,aU-bR.\end{split} \tag{5.1}\]
Here and after, for any given \(i\) and \(j\), we shall simply write \(U(t,x)=U_{ij}(t,x),Z(t,x)=Z_{ij}(t,x),R(t,x)=R_{ij}(t,x)\) as a notational convenience.

The following exponential synchronization theorem is the main result of this paper.

**Theorem 5.2**.: _For memristive reaction-diffusion neural networks \(\mathcal{NW}\) with the model (1.1)-(1.3) and the Assumptions (1.5)-(1.6), if the following threshold condition is satisfied by the coupling strength coefficient \(P\),_
\[\begin{split}P>&\frac{1+\exp[r(G+|V|)]}{m}\times\\ &\times\left[\beta+\frac{\beta^{2}+\xi^{2}}{\gamma}+k+\frac{a^{2}}{b}+2\sqrt{2Q}\,C^{*}\left(\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right)+\frac{64Q^{2}C^{*4}}{\eta^{3}}\left[\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right]^{4}\right]\end{split} \tag{5.2}\]
_where the positive constants \(Q\) and \(G\) are given in (4.8) and (4.14) respectively and \(C^{*}\) is a coefficient in the Gagliardo-Nirenberg interpolation inequality (5.9), then the memristive neural network \(\mathcal{NW}\) is exponentially synchronized in the state space \(E\) at a uniform exponential rate_
\[\delta(P)=\min\left\{b,\,\gamma,\,2\left(\frac{mP}{1+\exp[r(G+|V|)]}-\kappa\right)\right\}, \tag{5.3}\]
_where the positive constant \(\kappa\) is_
\[\kappa=\beta+\frac{\beta^{2}+\xi^{2}}{\gamma}+k+\frac{a^{2}}{b}+2\sqrt{2Q}\,C^{*}\left(\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right)+\frac{64Q^{2}C^{*4}}{\eta^{3}}\left[\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right]^{4}. \tag{5.4}\]

Proof.: The proof will go through three steps.

Step 1. Take the \(L^{2}\) inner-products of the first equation in (5.1) with \(U(t)\), the second equation in (5.1) with \(Z(t)\), and the third equation in (5.1) with \(R(t)\).
Then sum them up and use the Assumptions (1.5)-(1.6) to get
\[\begin{split}&\frac{1}{2}\frac{d}{dt}(\|U(t)\|^{2}+\|Z(t)\|^{2}+\|R(t)\|^{2})+\eta\|\nabla U(t)\|^{2}+\gamma\,\|Z(t)\|^{2}+b\|R(t)\|^{2}\\ &+P\int_{\Omega}\left[u_{i}\sum_{\nu=1}^{m}\Gamma(u_{\nu})-u_{j}\sum_{\nu=1}^{m}\Gamma(u_{\nu})\right]U(t,x)\,dx\\ =&\int_{\Omega}(f(u_{i},z_{i})-f(u_{j},z_{j}))U\,dx-\int_{\Omega}k(\tanh(\rho_{i})u_{i}-\tanh(\rho_{j})u_{j})U\,dx\\ &+\int_{\Omega}(h(u_{i},z_{i})-h(u_{j},z_{j}))Z\,dx+\int_{\Omega}aUR\,dx\\ \leq&\int_{\Omega}\frac{\partial f}{\partial s}\left(\zeta u_{i}+(1-\zeta)u_{j}\right)U^{2}\,dx+\int_{\Omega}\frac{\partial f}{\partial\sigma}\left(\varsigma z_{i}+(1-\varsigma)z_{j}\right)UZ\,dx\\ &+\int_{\Omega}\frac{\partial h}{\partial s}\left(\epsilon u_{i}+(1-\epsilon)u_{j}\right)UZ\,dx\\ &-k\int_{\Omega}\left[\operatorname{sech}^{2}(\varepsilon\rho_{i}+(1-\varepsilon)\rho_{j})R\,u_{i}U+\tanh(\rho_{j})U^{2}\right]dx+\int_{\Omega}aUR\,dx\\ \leq&\int_{\Omega}\left(\beta(U^{2}+|UZ|)+\xi(|u_{i}|+|u_{j}|+1)|UZ|+k(|u_{i}|RU+U^{2})+aUR\right)dx\\ \leq&\int_{\Omega}\left(\beta+\frac{\beta^{2}+\xi^{2}}{\gamma}+k+\frac{a^{2}}{b}\right)U^{2}(t,x)\,dx+\frac{\gamma}{4}\|Z(t)\|^{2}+\frac{b}{4}\|R(t)\|^{2}\\ &+\int_{\Omega}\xi(|u_{i}|+|u_{j}|)|UZ|\,dx+\int_{\Omega}k|u_{i}|RU\,dx,\quad t>0,\end{split} \tag{5.5}\]
where the mean value theorem in differentiation and the hyperbolic function properties \(|\tanh(\rho_{j})|\leq 1,\operatorname{sech}^{2}(\varepsilon\rho_{i}+(1-\varepsilon)\rho_{j})\leq 1\) are used, and the numbers \(\zeta,\varsigma,\epsilon,\varepsilon\in[0,1]\).

Step 2. We treat the last two integral terms on the right-hand side of the inequality (5.5). By the Hölder inequality,
\[\begin{split}&\int_{\Omega}k|u_{i}|RU\,dx\leq k\int_{\Omega}\left(\frac{b}{4k}R^{2}(t,x)+\frac{k}{b}u_{i}^{2}(t,x)U^{2}(t,x)\right)dx\\ \leq&\,\frac{b}{4}\|R(t)\|^{2}+\frac{k^{2}}{b}\left[\int_{\Omega}u_{i}^{4}(t,x)\,dx\right]^{1/2}\left[\int_{\Omega}U^{4}(t,x)\,dx\right]^{1/2}\\ =&\,\frac{b}{4}\|R(t)\|^{2}+\frac{k^{2}}{b}\|u_{i}(t)\|_{L^{4}}^{2}\|U(t)\|_{L^{4}}^{2},\quad t>0.\end{split} \tag{5.6}\]
Similarly we have
\[\begin{split}&\int_{\Omega}\xi(|u_{i}|+|u_{j}|)|UZ|\,dx\leq\xi\int_{\Omega}\left(\frac{\gamma}{4\xi}Z^{2}(t,x)+\frac{2\xi}{\gamma}(u_{i}^{2}+u_{j}^{2})U^{2}(t,x)\right)dx\\ \leq&\,\frac{\gamma}{4}\|Z(t)\|^{2}+\frac{2\xi^{2}}{\gamma}\left(\left[\int_{\Omega}u_{i}^{4}\,dx\right]^{1/2}+\left[\int_{\Omega}u_{j}^{4}\,dx\right]^{1/2}\right)\left[\int_{\Omega}U^{4}(t,x)\,dx\right]^{1/2}\\ =&\,\frac{\gamma}{4}\|Z(t)\|^{2}+\frac{2\xi^{2}}{\gamma}\left(\|u_{i}(t)\|_{L^{4}}^{2}+\|u_{j}(t)\|_{L^{4}}^{2}\right)\|U(t)\|_{L^{4}}^{2},\quad t>0.\end{split} \tag{5.7}\]
Substituting the term estimates (5.6) and (5.7) into the differential inequality (5.5), we obtain
\[\begin{split}&\frac{1}{2}\frac{d}{dt}(\|U(t)\|^{2}+\|Z(t)\|^{2}+\|R(t)\|^{2})+\eta\|\nabla U(t)\|^{2}+\frac{\gamma}{2}\|Z(t)\|^{2}+\frac{b}{2}\|R(t)\|^{2}\\ &+P\int_{\Omega}\left[u_{i}\sum_{\nu=1}^{m}\Gamma(u_{\nu})-u_{j}\sum_{\nu=1}^{m}\Gamma(u_{\nu})\right]U(t,x)\,dx\\ \leq&\,\left[\beta+\frac{\beta^{2}+\xi^{2}}{\gamma}+k+\frac{a^{2}}{b}\right]\|U(t)\|^{2}\\ &+\left[\left(\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right)\|u_{i}(t)\|_{L^{4}}^{2}+\frac{2\xi^{2}}{\gamma}\|u_{j}(t)\|_{L^{4}}^{2}\right]\|U(t)\|_{L^{4}}^{2},\quad t>0.\end{split} \tag{5.8}\]
The challenge is to handle the last two \(L^{4}\)-norm product terms on the right-hand side of the above inequality (5.8).
We exploit the Gagliardo-Nirenberg interpolation inequality [32, Theorem B.3] and [4]. Together with the Sobolev embedding
\[H^{1}(\Omega)\subset L^{4}(\Omega)\subset L^{2}(\Omega),\]
it implies
\[\begin{split}&\|U(t)\|_{L^{4}}^{2}\leq C^{*}\|U(t)\|_{H^{1}}^{2\theta}\|U(t)\|^{2(1-\theta)}\\ \leq&\,C^{*}(\|U(t)\|+\|\nabla U(t)\|)^{2\theta}\|U(t)\|^{2(1-\theta)}\\ \leq&\,C^{*}2^{2\theta}(\|U(t)\|^{2\theta}+\|\nabla U(t)\|^{2\theta})\|U(t)\|^{2(1-\theta)}\\ =&\,2\sqrt{2}C^{*}\|U(t)\|^{2}+2\sqrt{2}C^{*}\|\nabla U(t)\|^{3/2}\|U(t)\|^{2(1-3/4)},\end{split} \tag{5.9}\]
where an inequality in [5, Theorem 4.7] is used and the coefficient \(C^{*}(\Omega)>0\) only depends on the spatial domain \(\Omega\). Here the interpolation index \(\theta=3/4\) is determined by
\[-\frac{n}{4}\leq\theta\left(1-\frac{n}{2}\right)-(1-\theta)\,\frac{n}{2},\quad\text{for }1\leq n=\dim\Omega\leq 3,\]
and the equality holds for \(n=3\). The interpolation inequality (5.9) shows that
\[\|U(t)\|_{L^{4}}^{2}\leq 2\sqrt{2}C^{*}\|U(t)\|^{2}+2\sqrt{2}C^{*}\|\nabla U(t)\|^{3/2}\|U(t)\|^{1/2}. \tag{5.10}\]
According to Theorem 3.2 and Theorem 4.1, we know that \(\limsup_{t\to\infty}\|U(t)\|^{2}<K\) and \(\limsup_{t\to\infty}\sum_{i=1}^{m}\|u_{i}(t)\|_{L^{4}}^{4}<Q\). Thus for any given initial state \(g^{0}\in E\) there exists a finite time \(T(g^{0})\geq 0\) such that
\[\sum_{i=1}^{m}\|u_{i}(t)\|_{L^{4}}^{2}<Q^{1/2},\quad\text{for all}\;\;t>T(g^{0}).\]
Therefore, by (5.10) and Young's inequality (2.6), we achieve the estimate
\[\begin{split}&\left[\left(\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right)\|u_{i}(t)\|_{L^{4}}^{2}+\frac{2\xi^{2}}{\gamma}\|u_{j}(t)\|_{L^{4}}^{2}\right]\|U(t)\|_{L^{4}}^{2}\\ =&\left[\frac{k^{2}}{b}\|u_{i}(t)\|_{L^{4}}^{2}+\frac{2\xi^{2}}{\gamma}\left(\|u_{i}(t)\|_{L^{4}}^{2}+\|u_{j}(t)\|_{L^{4}}^{2}\right)\right]\|U(t)\|_{L^{4}}^{2}\\ \leq&\,\left(\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right)Q^{1/2}2\sqrt{2}C^{*}(\|U(t)\|^{2}+\|\nabla U(t)\|^{3/2}\|U(t)\|^{1/2})\\ \leq&\,\left(\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right)Q^{1/2}2\sqrt{2}C^{*}\|U(t)\|^{2}+\eta\|\nabla U(t)\|^{(3/2)\times(4/3)}\\ &\,+\frac{1}{\eta^{3}}\left[\left(\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right)Q^{1/2}2\sqrt{2}C^{*}\|U(t)\|^{1/2}\right]^{4}\\ =&\,\eta\|\nabla U(t)\|^{2}+\left[2\sqrt{2Q}\,C^{*}\left(\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right)+\frac{64Q^{2}C^{*4}}{\eta^{3}}\left(\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right)^{4}\right]\|U(t)\|^{2},\end{split} \tag{5.11}\]
for \(t>T(g^{0})\). Substituting (5.11) into (5.8) and then canceling the gradient terms \(\eta\|\nabla U(t)\|^{2}\) on the two sides of that inequality, it follows that
\[\begin{split}&\frac{1}{2}\frac{d}{dt}(\|U(t)\|^{2}+\|Z(t)\|^{2}+\|R(t)\|^{2})+\frac{\gamma}{2}\|Z(t)\|^{2}+\frac{b}{2}\|R(t)\|^{2}\\ &+P\int_{\Omega}\left[u_{i}\sum_{\nu=1}^{m}\Gamma(u_{\nu})-u_{j}\sum_{\nu=1}^{m}\Gamma(u_{\nu})\right]U(t,x)\,dx\\ &\leq\left[\beta+\frac{\beta^{2}+\xi^{2}}{\gamma}+k+\frac{a^{2}}{b}+2\sqrt{2Q}\,C^{*}\left(\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right)+\frac{64Q^{2}C^{*4}}{\eta^{3}}\left[\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right]^{4}\right]\|U(t)\|^{2}.\end{split} \tag{5.12}\]

Step 3. Another challenge is to handle the nonlinear difference term of the weak coupling on the left-hand side of the inequality (5.12).
For any given \(1\leq i\neq j\leq m\), we have
\[\begin{split}&P\int_{\Omega}\left[u_{i}\sum_{\nu=1}^{m}\Gamma(u_{\nu})-u_{j}\sum_{\nu=1}^{m}\Gamma(u_{\nu})\right]U(t,x)\,dx\\ =&\,P\int_{\Omega}\sum_{\nu=1}^{m}\frac{u_{i}-u_{j}}{1+\exp[-r(u_{\nu}-V)]}U(t,x)\,dx\\ =&\,P\int_{\Omega}\sum_{\nu=1}^{m}\frac{U^{2}(t,x)}{1+\exp[-r(u_{\nu}-V)]}\,dx.\end{split} \tag{5.13}\]
By Theorem 4.2 and (4.9), for each solution trajectory \(g(t,g^{0})\) there exists a finite time \(\tau(g^{0})>0\) such that \(\sum_{i=1}^{m}|u_{i}(t,x)|_{\mathbb{R}}<G\) for \(t>\tau(g^{0})\). Hence it holds that
\[\frac{1}{1+\exp\left[-r(u_{\nu}(t,x)-V)\right]}\geq\frac{1}{1+\exp\left[r(G+|V|)\right]},\quad t>\tau(g^{0}), \tag{5.14}\]
for all \(1\leq\nu\leq m\). Now substituting (5.13) and (5.14) into the left-hand side of the differential inequality (5.12), we obtain
\[\begin{split}&\frac{1}{2}\frac{d}{dt}(\|U(t)\|^{2}+\|Z(t)\|^{2}+\|R(t)\|^{2})+\frac{\gamma}{2}\|Z(t)\|^{2}+\frac{b}{2}\|R(t)\|^{2}+\frac{mP}{1+\exp\left[r(G+|V|)\right]}\|U(t)\|^{2}\\ =&\,\frac{1}{2}\frac{d}{dt}(\|U(t)\|^{2}+\|Z(t)\|^{2}+\|R(t)\|^{2})+\frac{\gamma}{2}\|Z(t)\|^{2}+\frac{b}{2}\|R(t)\|^{2}+\frac{P}{1+\exp\left[r(G+|V|)\right]}\int_{\Omega}\sum_{\nu=1}^{m}U^{2}(t,x)\,dx\\ \leq&\,\frac{1}{2}\frac{d}{dt}(\|U(t)\|^{2}+\|Z(t)\|^{2}+\|R(t)\|^{2})+\frac{\gamma}{2}\|Z(t)\|^{2}+\frac{b}{2}\|R(t)\|^{2}+P\,\sum_{\nu=1}^{m}\int_{\Omega}\frac{1}{1+\exp\left[-r(u_{\nu}(t,x)-V)\right]}U^{2}(t,x)\,dx\\ =&\,\frac{1}{2}\frac{d}{dt}(\|U(t)\|^{2}+\|Z(t)\|^{2}+\|R(t)\|^{2})+\frac{\gamma}{2}\|Z(t)\|^{2}+\frac{b}{2}\|R(t)\|^{2}+P\int_{\Omega}\left[u_{i}\sum_{\nu=1}^{m}\Gamma(u_{\nu})-u_{j}\sum_{\nu=1}^{m}\Gamma(u_{\nu})\right]U(t,x)\,dx\\ \leq&\,\left[\beta+\frac{\beta^{2}+\xi^{2}}{\gamma}+k+\frac{a^{2}}{b}+2\sqrt{2Q}\,C^{*}\left[\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right]+\frac{64Q^{2}C^{*4}}{\eta^{3}}\left[\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right]^{4}\right]\|U(t)\|^{2}\end{split} \tag{5.15}\]
for \(t>\tau(g^{0})\). From (5.15) and the threshold condition (5.2) stated in this theorem, we obtain the following Gronwall-type inequality:
\[\begin{split}&\frac{d}{dt}(\|U(t)\|^{2}+\|Z(t)\|^{2}+\|R(t)\|^{2})+\delta(P)(\|U(t)\|^{2}+\|Z(t)\|^{2}+\|R(t)\|^{2})\\ \leq&\frac{d}{dt}(\|U(t)\|^{2}+\|Z(t)\|^{2}+\|R(t)\|^{2})+\gamma\|Z(t)\|^{2}+b\|R(t)\|^{2}+2\left[\frac{mP}{1+\exp\left[r(G+|V|)\right]}\right.\\ &-\left.\left[\beta+\frac{\beta^{2}+\xi^{2}}{\gamma}+k+\frac{a^{2}}{b}+2\sqrt{2Q}\,C^{*}\left[\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right]+\frac{64Q^{2}C^{*4}}{\eta^{3}}\left[\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right]^{4}\right]\right]\|U(t)\|^{2}\\ \leq&\,0,\quad\text{for}\;\;t>\tau(g^{0}).\end{split} \tag{5.16}\]
Denote by
\[\kappa=\beta+\frac{\beta^{2}+\xi^{2}}{\gamma}+k+\frac{a^{2}}{b}+2\sqrt{2Q}\,C^{*}\left[\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right]+\frac{64Q^{2}C^{*4}}{\eta^{3}}\left[\frac{k^{2}}{b}+\frac{2\xi^{2}}{\gamma}\right]^{4}.\]
Finally we can solve this linear Gronwall inequality (5.16) to reach the exponential synchronization result: For any initial state \(g^{0}\in E\) and any two neurons \(\mathcal{N}_{i}\) and \(\mathcal{N}_{j}\) in this memristive reaction-diffusion neural network model (1.1), their difference function \(g_{i}(t;g_{i}^{0})-g_{j}(t;g_{j}^{0})\) converges to zero in the state space \(E\) exponentially at a uniform convergence rate \(\delta(P)\) shown below.
Namely, for any \(1\leq i\neq j\leq m\),
\[\begin{split}\|g_{i}(t)-g_{j}(t)\|_{E}^{2}&=\|u_{i}(t)-u_{j}(t)\|^{2}+\|z_{i}(t)-z_{j}(t)\|^{2}+\|\rho_{i}(t)-\rho_{j}(t)\|^{2}\\ &=\|U_{ij}(t)\|^{2}+\|Z_{ij}(t)\|^{2}+\|R_{ij}(t)\|^{2}\\ &\leq e^{-\delta(P)\,t}\left\|g_{i}^{0}-g_{j}^{0}\right\|^{2}\to 0,\ \ \text{as}\,\,t\to\infty.\end{split} \tag{5.17}\]
Here the constant convergence rate in (5.17) is
\[\delta(P)=\min\left\{b,\,\gamma,\,2\left(\frac{mP}{1+\exp\left[r(G+|V|)\right]}-\kappa\right)\right\},\]
which is exactly the rate (5.3)-(5.4) stated under the threshold condition (5.2) of this theorem. Hence it is proved that
\[deg_{s}(\mathcal{N}\mathcal{W})=\sum_{1\,\leq\,i\,\neq\,j\,\leq\,m}\left\{\sup_{g^{0}\,\in\,E}\,\left\{\limsup_{t\to\infty}\|g_{i}(t)-g_{j}(t)\|_{E}^{2}\right\}\right\}=0. \tag{5.18}\]
The proof of this theorem is completed.

As a meaningful extension of Theorem 5.2, we can also prove the exponential synchronization of memristive reaction-diffusion neural networks denoted by \(\mathbb{NW}=\{N_{i}:i=1,2,\cdots,m\}\) with the following model equations, cf. [3, 9, 24, 28],
\[\begin{split}&\frac{\partial u_{i}}{\partial t}=\eta\Delta u_{i}+f(u_{i},z_{i})-k\tanh(\rho_{i})u_{i}-\sum_{j=1}^{m}\frac{P(u_{i}-u_{e})}{1+\exp[-r(u_{j}-V)]},\\ &\frac{\partial z_{i}}{\partial t}=\Lambda z_{i}+h(u_{i},z_{i}),\\ &\frac{\partial\rho_{i}}{\partial t}=au_{i}-b\rho_{i},\end{split} \tag{5.19}\]
where the weak coupling terms involve a constant \(u_{e}\in\mathbb{R}\) called the reversal potential; the equations are considered on a bounded spatial domain \(\Omega\) with the same boundary conditions as specified in Section 1.

**Theorem 5.3**.: _Assume that the nonlinear terms \(f(s,\sigma)\) and \(h(s,\sigma)\) in the memristive neural network model (5.19) are respectively scalar and vector polynomials and satisfy the same Assumptions (1.5) and (1.6). Then there exists a positive constant \(\Psi>0\), which depends only on the parameters including \(u_{e}\) but is independent of any initial data, such that if the threshold condition_
\[P>\Psi \tag{5.20}\]
_is satisfied, then the solution semiflow of the memristive reaction-diffusion neural network \(\mathbb{NW}\) will be exponentially synchronized in the same state space \(E\) at a uniform convergence rate._

Proof.: We briefly sketch the proof. Introduce the change of variables \(\tilde{u}_{i}=u_{i}-u_{e}\), \(1\leq i\leq m\).
Then the system (5.19) becomes
\[\begin{split}&\frac{\partial\tilde{u}_{i}}{\partial t}=\eta\Delta\tilde{u}_{i}+f(\tilde{u}_{i}+u_{e},z_{i})-k\tanh(\rho_{i})(\tilde{u}_{i}+u_{e})-\sum_{j=1}^{m}\frac{P\,\tilde{u}_{i}}{1+\exp\left[-r(\tilde{u}_{j}+u_{e}-V)\right]},\\ &\frac{\partial z_{i}}{\partial t}=\Lambda z_{i}+h(\tilde{u}_{i}+u_{e},z_{i}),\\ &\frac{\partial\rho_{i}}{\partial t}=a(\tilde{u}_{i}+u_{e})-b\rho_{i}.\end{split}\]
This system of equations can be written as
\[\begin{split}&\frac{\partial\tilde{u}_{i}}{\partial t}=\eta\Delta\tilde{u}_{i}+\tilde{f}(\tilde{u}_{i},z_{i},\rho_{i})-\sum_{j=1}^{m}\frac{P\,\tilde{u}_{i}}{1+\exp[-r(\tilde{u}_{j}-\tilde{V})]},\\ &\frac{\partial z_{i}}{\partial t}=\Lambda z_{i}+\tilde{h}(\tilde{u}_{i},z_{i}),\\ &\frac{\partial\rho_{i}}{\partial t}=a\tilde{u}_{i}-b\rho_{i}+au_{e},\end{split} \tag{5.21}\]
where \(\tilde{V}=V-u_{e}\) and the two new functions \(\tilde{f}\) and \(\tilde{h}\) are
\[\begin{split}\tilde{f}(\tilde{u}_{i},z_{i},\rho_{i})&=f(\tilde{u}_{i}+u_{e},z_{i})-k\tanh(\rho_{i})(\tilde{u}_{i}+u_{e}),\\ \tilde{h}(\tilde{u}_{i},z_{i})&=h(\tilde{u}_{i}+u_{e},z_{i}).\end{split} \tag{5.22}\]
Expanding the scalar and vector polynomials \(f(s+u_{e},\sigma)\) and \(h(s+u_{e},\sigma)\), using \(|\tanh(\rho_{i})|\leq 1\) and Young's inequality (2.6) appropriately, it follows from the Assumptions (1.5) and (1.6) that the new functions \(\tilde{f}\) and \(\tilde{h}\) possess the properties
\[\begin{split}&\tilde{f}(s,\sigma,\rho)s\leq-\tilde{\alpha}|s|^{4}+\tilde{\lambda}|s||\sigma|+\tilde{J},\quad(s,\sigma,\rho)\in\mathbb{R}^{2+\ell},\\ &\max\left\{\frac{\partial\tilde{f}}{\partial s}(s,\sigma,\rho),\left|\frac{\partial\tilde{f}}{\partial\sigma}(s,\sigma,\rho)\right|\right\}\leq\tilde{\beta},\;(s,\sigma,\rho)\in\mathbb{R}^{2+\ell},\\ &\left|\frac{\partial\tilde{f}}{\partial\rho}(s,\sigma,\rho)\right|\leq k|s+u_{e}|,\;(s,\sigma,\rho)\in\mathbb{R}^{2+\ell},\end{split} \tag{5.23}\]
and
\[\begin{split}&\tilde{h}(s,\sigma)\sigma\leq\tilde{q}|s|^{2}|\sigma|+\tilde{L}|\sigma|,\quad(s,\sigma)\in\mathbb{R}^{1+\ell},\\ &\left|\frac{\partial\tilde{h}}{\partial s}(s,\sigma)\right|\leq\tilde{\xi}(|s|+1),\;\;\frac{\partial\tilde{h}}{\partial\sigma}(s,\sigma)=0,\quad(s,\sigma)\in\mathbb{R}^{1+\ell},\end{split} \tag{5.24}\]
where the positive constants \(\tilde{\alpha},\tilde{\lambda},\tilde{J},\tilde{\beta}\) in (5.23) and \(\tilde{q},\tilde{L},\tilde{\xi}\) in (5.24) are the new parameters for the new model equations (5.21), which may also depend on the constant \(u_{e}\). We notice the structural similarity between the Assumptions (1.5)-(1.6) and the properties (5.23)-(5.24), up to the new parameters; the manageable differences are the term \(-k\tanh(\rho_{i})(\tilde{u}_{i}+u_{e})\) absorbed into \(\tilde{f}\) in (5.22)-(5.23) and the spillover constant \(au_{e}\) in the third equation of (5.21). Then we can conduct _a priori_ estimates parallel to the steps shown in Section 3 and Section 4 in the same formulated framework. It can be shown that the weak solutions of this neural network model (5.21) exist globally in time and the solution semiflow has an absorbing set \(\tilde{B}^{*}\) in the same state space \(E\).
Specifically, we have
\[\tilde{B}^{*}=\{\tilde{g}\in E:\|\tilde{g}\|^{2}\leq K^{*}\}\]
where \(\tilde{g}=\operatorname{col}\left(\tilde{u}_{1},z_{1},\rho_{1},\cdots,\tilde{u}_{m},z_{m},\rho_{m}\right)\) and
\[K^{*}=1+\frac{2C_{4}m}{\tilde{\mu}\min\{C_{3},1\}}|\Omega|, \tag{5.25}\]
in which
\[\begin{split}&\tilde{\mu}=\min\left\{\frac{2}{C_{3}},\;\gamma,\;b\right\},\quad C_{3}=\frac{1}{\tilde{\alpha}}\left(1+\frac{\tilde{q}^{2}}{4\gamma}\right),\\ &C_{4}=C_{3}\tilde{J}+\frac{\tilde{L}^{2}}{\gamma}+\frac{a^{2}u_{e}}{b}+\frac{1}{4}\left(\frac{C_{3}^{2}\,\tilde{\lambda}^{2}}{\gamma}+\frac{a^{2}}{2b}+1\right)^{2}.\end{split} \tag{5.26}\]
Moreover, the ultimate bound property holds:
\[\limsup_{t\to\infty}\,\sum_{i=1}^{m}\|\tilde{u}_{i}(t)\|_{L^{4}}^{4}<Q^{*}\]
where
\[Q^{*}=1+\frac{4\tilde{\lambda}^{2}}{\tilde{\alpha}}K^{*}+m\left(\tilde{\alpha}+\frac{4\tilde{J}^{2}}{\tilde{\alpha}}\right)|\Omega|. \tag{5.27}\]
We can also get the pointwise ultimate bound:
\[\limsup_{t\to\infty}\,\sum_{i=1}^{m}|\tilde{u}_{i}(t,x)|_{\mathbb{R}}<G^{*}\]
where
\[G^{*}=\left[1+\frac{2C_{4}m}{\tilde{\mu}\min\{C_{3},1\}}\right]^{1/2}. \tag{5.28}\]
Finally we define the neuron difference functions: For \(1\leq i,j\leq m\,(i\neq j)\),
\[\tilde{U}_{ij}=\tilde{u}_{i}-\tilde{u}_{j},\qquad Z_{ij}=z_{i}-z_{j},\qquad R_{ij}=\rho_{i}-\rho_{j}.\]
They satisfy the following differencing reaction-diffusion equations:
\[\begin{split}\frac{\partial\tilde{U}_{ij}}{\partial t}=&\,\eta\Delta\tilde{U}_{ij}+\tilde{f}(\tilde{u}_{i},z_{i},\rho_{i})-\tilde{f}(\tilde{u}_{j},z_{j},\rho_{j})-\sum_{\nu=1}^{m}\frac{P(\tilde{u}_{i}-\tilde{u}_{j})}{1+\exp\left[-r(\tilde{u}_{\nu}-\tilde{V})\right]},\\ \frac{\partial Z_{ij}}{\partial t}=&\,\Lambda Z_{ij}+\tilde{h}(\tilde{u}_{i},z_{i})-\tilde{h}(\tilde{u}_{j},z_{j}),\\ \frac{\partial R_{ij}}{\partial t}=&\,a\tilde{U}_{ij}-bR_{ij}.\end{split} \tag{5.29}\]
Parallel to the steps in the proof of Theorem 5.2, one can show that if the threshold condition (5.20) is satisfied, where the threshold constant is
\[\begin{split}&\Psi=\frac{1+\exp\left[r(G^{*}+|\tilde{V}|)\right]}{m}\times\\ &\left[\tilde{\beta}+\frac{\tilde{\beta}^{2}+\tilde{\xi}^{2}}{\gamma}+k+\frac{a^{2}}{b}+2\sqrt{2Q^{*}}C^{*}\left(\frac{k^{2}}{b}+\frac{2\tilde{\xi}^{2}}{\gamma}\right)+\frac{64(Q^{*})^{2}(C^{*})^{4}}{\eta^{3}}\left[\frac{k^{2}}{b}+\frac{2\tilde{\xi}^{2}}{\gamma}\right]^{4}\right]\end{split} \tag{5.30}\]
and the mathematical coefficient \(C^{*}\) remains the same as in Theorem 5.2, then this memristive neural network \(\mathbb{NW}\) is exponentially synchronized in the state space \(E\) at a uniform convergence rate.

## 6. **Examples and Numerical Simulation**

In this section we provide two representative and widely used mathematical models of biological neural networks with memristors to illustrate the applications of the exponential synchronization result in Theorem 5.2. To avoid notational overlap or confusion, the parameters in the following two subsections will carry subscript 1 and subscript 2, respectively. Numerical simulations of these two types of memristive neural networks will be performed to show, through the depicted curves of the \(L^{2}\)-norms of the solution trajectories, the synchronization convergence behavior, which exhibits a relatively higher threshold and a possibly lower convergence rate due to the nonlinear weak coupling.
### Diffusive Hindmarsh-Rose Equations with Memristor

Consider a model of memristive diffusive Hindmarsh-Rose neural networks [16, 26, 45]:
\[\begin{split}&\frac{\partial u_{i}}{\partial t}=\eta_{1}\Delta u_{i}+a_{1}u_{i}^{2}-b_{1}u_{i}^{3}+v_{i}-w_{i}-k_{1}\tanh(\rho_{i})u_{i}-Pu_{i}\sum_{j=1}^{m}\Gamma(u_{j}),\\ &\frac{\partial v_{i}}{\partial t}=\alpha_{1}-\beta_{1}u_{i}^{2}-v_{i},\\ &\frac{\partial w_{i}}{\partial t}=q_{1}u_{i}-r_{1}w_{i},\\ &\frac{\partial\rho_{i}}{\partial t}=c_{1}u_{i}-\delta_{1}\rho_{i},\end{split} \tag{6.1}\]
for \(t>0,\ x\in\Omega\subset\mathbb{R}^{n}\ (n\leq 3)\), where \(1\leq i\leq m\) and \(\Omega\) is a bounded domain of dimension at most three with locally Lipschitz continuous boundary. The nonlinear function \(\Gamma(s)\) is the same as in (1.2).

In this system (6.1), the variable \(u_{i}(t,x)\) refers to the membrane electric potential of a neuron cell; the variable \(v_{i}(t,x)\) represents the transport rate of the ions of sodium and potassium through the fast channels and can be called the spiking variable; while the variable \(w_{i}(t,x)\), called the bursting variable, represents the transport rate across the neuron membrane through the slow channels of calcium and some other ions. All the involved parameters \(a_{1},b_{1},c_{1},\eta_{1},\alpha_{1},\beta_{1},q_{1},r_{1},\delta_{1},k_{1}\) and the coupling strength coefficient \(P\) can be any positive constants. We impose the homogeneous Neumann boundary conditions for the \(u\)-component, \(\frac{\partial u}{\partial\nu}\left(t,x\right)=0\), \(x\in\partial\Omega\), and the initial conditions of the components are denoted by
\[u_{i}^{0}(x)=u_{i}(0,x),\ v_{i}^{0}(x)=v_{i}(0,x),\ w_{i}^{0}(x)=w_{i}(0,x),\ \rho_{i}^{0}(x)=\rho_{i}(0,x),\ \ 1\leq i\leq m.\]
To illustrate the synchronization result of Theorem 5.2, we check that all the Assumptions in (1.5) and (1.6) are satisfied by this model of memristive Hindmarsh-Rose equations (6.1). The vector function \(z_{i}(t,x)\) in the general model (1.1) is in this case
\[z_{i}(t,x)=\begin{pmatrix}v_{i}(t,x)\\ w_{i}(t,x)\end{pmatrix}\]
and correspondingly the vector \(\sigma=\operatorname{col}\left(\sigma_{v},\sigma_{w}\right)\) has two components.

Verify the Assumptions (1.5) and (1.6): In this model, we have the scalar function
\[f(s,\sigma)=a_{1}s^{2}-b_{1}s^{3}+\sigma_{v}-\sigma_{w},\]
the 2-dimensional square matrix and the vector function
\[\Lambda=\begin{pmatrix}-I&0\\ 0&-r_{1}I\end{pmatrix},\quad h(s,\sigma)=\begin{pmatrix}\alpha_{1}-\beta_{1}s^{2}\\ q_{1}s\end{pmatrix}.\]
We can verify that
\[\begin{split}f(s,\sigma)s&=s(a_{1}s^{2}-b_{1}s^{3}+\sigma_{v}-\sigma_{w})=a_{1}s^{3}-b_{1}s^{4}+s(\sigma_{v}-\sigma_{w})\\ &\leq\left(\frac{3b_{1}}{4}|s|^{4}+\frac{a_{1}^{4}}{4b_{1}^{3}}\right)-b_{1}|s|^{4}+|s|(|\sigma_{v}|+|\sigma_{w}|)\\ &\leq-\frac{b_{1}}{4}|s|^{4}+\sqrt{2}|s||\sigma|+\frac{a_{1}^{4}}{4b_{1}^{3}},\quad\text{for}\,\,\,(s,\sigma)\in\mathbb{R}^{3},\end{split} \tag{6.2}\]
and
\[\begin{split}&\max\left\{\frac{\partial f}{\partial s}(s,\sigma),\left|\frac{\partial f}{\partial\sigma}(s,\sigma)\right|\right\}=\max\left\{2a_{1}s-3b_{1}s^{2},\sqrt{2}\right\}\\ &\leq\max\,\left\{\frac{a_{1}^{2}}{2b_{1}}+(2b_{1}-3b_{1})s^{2},\,\sqrt{2}\right\}\leq\max\,\left\{\frac{a_{1}^{2}}{2b_{1}},\,\sqrt{2}\right\},\quad\text{for}\,\,(s,\sigma)\in\mathbb{R}^{3}.\end{split} \tag{6.3}\]
Therefore Assumption (1.5) is satisfied.
Moreover,
\[\begin{split}\langle\Lambda\sigma,\sigma\rangle&=-\sigma_{v}^{2}-r_{1}\sigma_{w}^{2}\leq-\min\left\{1,\,r_{1}\right\}|\sigma|^{2},\quad\text{for}\,\,\,\sigma\in\mathbb{R}^{2},\\ h(s,\sigma)\sigma&=(\alpha_{1}-\beta_{1}s^{2})\sigma_{v}+q_{1}s\sigma_{w}\leq\beta_{1}s^{2}|\sigma_{v}|+q_{1}s^{2}|\sigma_{w}|+(\alpha_{1}|\sigma_{v}|+q_{1}|\sigma_{w}|)\\ &\leq(\beta_{1}+q_{1})s^{2}|\sigma|+(\alpha_{1}+q_{1})|\sigma|,\quad\text{for}\,\,\,(s,\sigma)\in\mathbb{R}^{3},\end{split} \tag{6.4}\]
and
\[\begin{split}&\left|\frac{\partial h}{\partial s}(s,\sigma)\right|\leq|\operatorname{col}\left(-2\beta_{1}s,\,q_{1}\right)|\leq\max\left\{2\beta_{1},q_{1}\right\}(|s|+1),\\ &\frac{\partial h}{\partial\sigma}(s,\sigma)=0,\quad\text{for}\,\,\,(s,\sigma)\in\mathbb{R}^{3}.\end{split} \tag{6.5}\]
Therefore Assumption (1.6) is also satisfied.

We can record the specific parameters in (1.5) and (1.6) for this memristive Hindmarsh-Rose neural network model as follows:
\[\begin{gathered}\alpha=\frac{b_{1}}{4},\quad\lambda=\sqrt{2},\quad J=\frac{a_{1}^{4}}{4b_{1}^{3}},\quad\beta=\max\,\left\{\frac{a_{1}^{2}}{2b_{1}},\,\sqrt{2}\right\},\\ \gamma=\max\,\left\{1,\,r_{1}\right\},\quad q=\beta_{1}+q_{1},\quad L=\alpha_{1}+q_{1},\quad\xi=\max\,\{2\beta_{1},q_{1}\}.\end{gathered} \tag{6.6}\]
Applying the synchronization Theorem 5.2 to this memristive diffusive Hindmarsh-Rose neural network model (6.1), we reach the following result.

**Theorem 6.1**.: _For memristive diffusive Hindmarsh-Rose neural networks with the model (6.1), if the threshold condition (5.2) with the parameters in (6.6) is satisfied by the coupling strength coefficient \(P\), then the neural network is exponentially synchronized in the state space \(E=[L^{2}(\Omega,\mathbb{R}^{4})]^{m}\) at a uniform exponential convergence rate \(\delta(P)\) shown in (5.3) with the parameters given in (6.6)._

We numerically solve the differential equations of the memristive Hindmarsh-Rose neural network model (6.1) in a two-dimensional square domain, using the finite difference method for the numerical scheme, programmed in Python. Choose the following parameters in the model (6.1):
\[\begin{gathered}m=4,\ \ \eta_{1}=5,\ \ a_{1}=1,\ \ b_{1}=2,\ \ k_{1}=0.3,\ \ V=0.5,\ \ r=0.1,\\ \alpha_{1}=0.4,\ \ \beta_{1}=0.06,\ \ q_{1}=0.2,\ \ r_{1}=4,\ \ c_{1}=1,\ \ \delta_{1}=7.\end{gathered}\]
Take the time-step to be \(0.00025\) and the spatial-step to be \(1\) on a \(32\times 32\) membrane. We compute and show the \(L^{2}\)-norm curves of the neuron membrane potential variable \(u_{i}\), the spiking variable \(v_{i}\), the bursting variable \(w_{i}\) and the memductance variable \(\rho_{i}\) in Figure 1 to Figure 4. Lastly the pairwise difference of the vector solutions \(g_{i},\ i=1,2,3,4\), in the energy space \(E\) is shown in Figure 5.

Comparing the results after 666 iterations with those after 2000 iterations in Figures 1 to 4, one can observe the synchronization tendency of the four characterizing variables \((u_{i},v_{i},w_{i},\rho_{i})\) among the neurons in the simulated memristive Hindmarsh-Rose neural network. From Figure 5, we observe that the \(L^{2}\)-norms of the pairwise differences \(\|g_{i}-g_{j}\|\) tend to \(0\). We also calculate the following key constants involved in Theorem 5.2 based on our selection of parameters, rounded to \(2\) digits:
\[\begin{gathered}C_{1}=0.25,\quad C_{2}=0.44,\quad\mu=4.0,\quad K=3630.45,\quad Q=23719.02,\\ G=2.12,\quad C^{*}=0.4,\quad\kappa=16.69,\quad P=19.60,\quad\delta=4.0.\end{gathered}\]
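To indicate how such a simulation can be organized, the following is a minimal sketch of an explicit finite-difference scheme for the model (6.1) with the parameters and grid specified above. It is a sketch under stated assumptions, not the authors' code: it assumes forward Euler time stepping, a five-point Laplacian with reflected grid edges for the zero-flux Neumann condition, and randomized initial data; all variable names are ours.

```python
import numpy as np

# Sketch: explicit finite-difference simulation of the memristive
# Hindmarsh-Rose network (6.1) with m = 4 neurons on a 32 x 32 grid.
m, N, dt = 4, 32, 0.00025
eta1, a1, b1, k1 = 5.0, 1.0, 2.0, 0.3
alpha1, beta1, q1, r1 = 0.4, 0.06, 0.2, 4.0
c1, delta1, P, V, r = 1.0, 7.0, 19.60, 0.5, 0.1

rng = np.random.default_rng(0)
u, v, w, rho = (0.1 * rng.random((m, N, N)) for _ in range(4))

def laplacian(f):
    # five-point Laplacian, spatial step 1, reflected (zero-flux) edges
    g = np.pad(f, 1, mode="edge")
    return g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:] - 4.0 * f

def Gamma(s):
    # sigmoidal coupling function (1.2)
    return 1.0 / (1.0 + np.exp(-r * (s - V)))

for step in range(2000):
    coupling = Gamma(u).sum(axis=0)   # sum_j Gamma(u_j), from the old state
    for i in range(m):
        du = (eta1 * laplacian(u[i]) + a1 * u[i]**2 - b1 * u[i]**3
              + v[i] - w[i] - k1 * np.tanh(rho[i]) * u[i]
              - P * u[i] * coupling)
        dv = alpha1 - beta1 * u[i]**2 - v[i]
        dw = q1 * u[i] - r1 * w[i]
        drho = c1 * u[i] - delta1 * rho[i]
        u[i] += dt * du; v[i] += dt * dv
        w[i] += dt * dw; rho[i] += dt * drho

# L2 norms over the grid, of the kind plotted in Figures 1-5
print("||u_i||:", np.sqrt((u**2).sum(axis=(1, 2))))
```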
The constant \(C^{*}\) from the Gagliardo-Nirenberg inequality is chosen to be \(0.4\) based on [4]. Table 1 to Table 4 list the sampled values of the four components \(u_{i},v_{i},w_{i}\), and \(\rho_{i}\) of the simulated solution \(g_{i}\) at the same point in the domain at the initial time \(t=0\), at the 200th time-step, and at the 2000th time-step. It is seen that, although the initial values differ considerably, after a certain time the values of \(u_{i}\), \(v_{i}\), \(w_{i}\), and \(\rho_{i}\) tend to be close to each other among the various neurons.

The synchronization result rigorously proved in this work is illustrated by the example with sample selections of the system parameters and a randomized set of initial data. Our numerical simulation also exhibits that the neuron potentials \(u_{i}\) seem to be synchronized fastest within a limited time, partly due to the memristor-potential coupling, while it takes a much longer time to observe the synchronization of the other three variables \(v_{i}\), \(w_{i}\) and \(\rho_{i}\).

\begin{table} \begin{tabular}{c|c|c|c} & Initial Value & At the 200th time-step & At the 2000th time-step \\ \hline \(u_{1}\) & 0.0075524 & 0.00160041 & 0.00441385 \\ \(u_{2}\) & 0.038242690 & 0.01277219 & 0.00990575 \\ \(u_{3}\) & 0.064522400 & 0.02294659 & 0.01506631 \\ \(u_{4}\) & 0.098338950 & 0.03892756 & 0.02388300 \\ \end{tabular} \end{table} Table 1. Comparison of the \(u_{i}\) at the point \(x=10,\ y=10\)

\begin{table} \begin{tabular}{c|c|c|c} & Initial Value & At the 200th time-step & At the 2000th time-step \\ \hline \(v_{1}\) & 0.01623901 & 0.03495750 & 0.16725151 \\ \(v_{2}\) & 0.35418338 & 0.35641666 & 0.37220956 \\ \(v_{3}\) & 0.61091695 & 0.60062472 & 0.52791116 \\ \(v_{4}\) & 1.30404545 & 1.25993808 & 0.94827405 \\ \end{tabular} \end{table} Table 2. Comparison of the \(v_{i}\) at the point \(x=10,\ y=10\)

\begin{table} \begin{tabular}{c|c|c|c} & Initial Value & At the 200th time-step & At the 2000th time-step \\ \hline \(w_{1}\) & 0.00289204 & 0.00239707 & 0.00053399 \\ \(w_{2}\) & 0.05901589 & 0.04850523 & 0.00841518 \\ \(w_{3}\) & 0.08612489 & 0.07083853 & 0.01235578 \\ \(w_{4}\) & 0.14579938 & 0.11989174 & 0.02088769 \\ \end{tabular} \end{table} Table 3. Comparison of the \(w_{i}\) at the point \(x=10,\ y=10\)

\begin{table} \begin{tabular}{c|c|c|c} & Initial Value & At the 200th time-step & At the 2000th time-step \\ \hline \(\rho_{1}\) & 0.00516599 & 0.00377392 & 0.00065141 \\ \(\rho_{2}\) & 0.05720875 & 0.04118323 & 0.00309032 \\ \(\rho_{3}\) & 0.09721523 & 0.07001327 & 0.00511662 \\ \(\rho_{4}\) & 0.13192033 & 0.09538789 & 0.00755887 \\ \end{tabular} \end{table} Table 4. Comparison of the \(\rho_{i}\) at the point \(x=10,\ y=10\)

Figure 1. The \(L^{2}\) norm of the neurons component \(u_{i}\) after 666 iterations (upper figure) and after 2000 iterations (lower figure)

Figure 2. The \(L^{2}\) norm of the neurons component \(v_{i}\) after 666 iterations (upper figure) and after 2000 iterations (lower figure)

Figure 3. The \(L^{2}\) norm of the neurons component \(w_{i}\) after 666 iterations (upper figure) and after 2000 iterations (lower figure)

Figure 4. The \(L^{2}\) norm of the neurons component \(\rho_{i}\) after 666 iterations (upper figure) and after 2000 iterations (lower figure)

Figure 5. The \(L^{2}\) norm of pairwise differences between neural network solutions after 666 iterations (upper figure) and after 2000 iterations (lower figure)

### Diffusive FitzHugh-Nagumo Equations with Memristor

Consider a model of memristive neural networks described by the diffusive FitzHugh-Nagumo equations, cf. [12, 33, 47], with nonlinear weak coupling:
\[\begin{split}&\frac{\partial u_{i}}{\partial t}=\eta_{2}\Delta u_{i}+\alpha_{2}u_{i}(u_{i}-\beta_{2})(1-u_{i})-\gamma_{2}\,w_{i}-k_{2}\tanh(\rho_{i})u_{i}-Pu_{i}\sum_{j=1}^{m}\Gamma(u_{j}),\\ &\frac{\partial w_{i}}{\partial t}=a_{2}u_{i}+c_{2}-b_{2}w_{i},\\ &\frac{\partial\rho_{i}}{\partial t}=q_{2}u_{i}-r_{2}\rho_{i},\end{split} \tag{6.7}\]
for \(t>0,\,x\in\Omega\subset\mathbb{R}^{n}\) (\(n\leq 3\)), where \(1\leq i\leq m\) and \(\Omega\) is a bounded domain with locally Lipschitz continuous boundary \(\partial\Omega\). All the involved parameters are positive constants. The nonlinear function \(\Gamma(s)\) is the same as in (1.2). In this system, the fast excitatory variable \(u_{i}(t,x)\) refers to the transmembrane electrical potential of a neuron cell and the slow recovering variable \(w_{i}(t,x)\) represents the integrated ionic current across the neuron membrane. The memductance \(\rho_{i}(t,x)\) of the memristor caused by the electromagnetic induction flux across the neuron membrane is a scalar function. We impose the homogeneous Neumann boundary condition \(\frac{\partial u_{i}}{\partial\nu}(t,x)=0,\,t>0,\,x\in\partial\Omega,1\leq i\leq m\), and the initial states of the system are denoted by
\[u_{i}^{0}(x)=u_{i}(0,x),\quad w_{i}^{0}(x)=w_{i}(0,x),\quad\rho_{i}^{0}(x)=\rho_{i}(0,x),\quad 1\leq i\leq m.\]
As an application of the synchronization result Theorem 5.2, here we check that all the Assumptions in (1.5) and (1.6) are satisfied by this model of memristive FitzHugh-Nagumo neural networks (6.7). In this case the generic functions in the Assumptions (1.5) and (1.6) are
\[\begin{split}f(s,\sigma)&=\alpha_{2}s(s-\beta_{2})(1-s)-\gamma_{2}\,\sigma,\\ \Lambda&=-b_{2},\\ h(s,\sigma)&=a_{2}s+c_{2},\quad(s,\sigma)\in\mathbb{R}^{2}.\end{split} \tag{6.8}\]
Check the Assumptions (1.5) and (1.6): We can verify that
\[\begin{split}f(s,\sigma)s&=-\alpha_{2}s^{4}+\alpha_{2}(1+\beta_{2})s^{3}-\alpha_{2}\beta_{2}s^{2}-\gamma_{2}\,s\,\sigma\\ &\leq-\alpha_{2}\left(s^{4}-\frac{3}{4}s^{4}-\frac{1}{4}(1+\beta_{2})^{4}\right)+\gamma_{2}|s||\sigma|\\ &=-\frac{1}{4}\alpha_{2}s^{4}+\gamma_{2}|s||\sigma|+\frac{1}{4}\alpha_{2}(1+\beta_{2})^{4},\quad(s,\sigma)\in\mathbb{R}^{2},\end{split} \tag{6.9}\]
and
\[\begin{split}&\max\left\{\frac{\partial f}{\partial s}(s,\sigma),\left|\frac{\partial f}{\partial\sigma}(s,\sigma)\right|\right\}=\max\,\left\{-3\alpha_{2}s^{2}+2\alpha_{2}(1+\beta_{2})s-\alpha_{2}\beta_{2},\,\gamma_{2}\right\}\\ \leq&\max\,\left\{-3\alpha_{2}s^{2}+\alpha_{2}s^{2}+(1+\beta_{2})^{2}-\alpha_{2}\beta_{2},\,\gamma_{2}\right\}<\max\,\left\{(1+\beta_{2})^{2},\,\gamma_{2}\right\}.\end{split} \tag{6.10}\]
Therefore the Assumption (1.5) is satisfied. Moreover,
\[\begin{split}\langle\Lambda\sigma,\,\sigma\rangle&=-b_{2}|\sigma|^{2},\\ h(s,\sigma)\sigma&=(a_{2}s+c_{2})\sigma\leq\frac{1}{4}a_{2}|s|^{2}|\sigma|+(a_{2}+c_{2})|\sigma|,\\ \frac{\partial h}{\partial s}(s,\sigma)&=a_{2},\quad\frac{\partial h}{\partial\sigma}(s,\sigma)=0,\end{split} \tag{6.11}\]
for \((s,\sigma)\in\mathbb{R}^{2}\). Therefore Assumption (1.6) is also satisfied.
We record the specific parameters in (1.5) and (1.6) for this memristive FitzHugh-Nagumo neural network model as follows:
\[\begin{split}\alpha=\frac{1}{4}\alpha_{2},&\quad\lambda=\gamma_{2},\quad J=\frac{1}{4}\alpha_{2}(1+\beta_{2})^{4},\quad\beta=\max\,\left\{(1+\beta_{2})^{2},\,\gamma_{2}\right\},\\ &\gamma=b_{2},\quad q=\frac{1}{4}a_{2},\quad L=a_{2}+c_{2},\quad\xi=a_{2}.\end{split} \tag{6.12}\]
Applying the synchronization Theorem 5.2 to this memristive diffusive FitzHugh-Nagumo neural network model, we reach the following result.

**Theorem 6.2**.: _For memristive diffusive FitzHugh-Nagumo neural networks with the model (6.7), if the threshold condition (5.2) with the parameters in (6.12) is satisfied by the coupling strength coefficient \(P\), then the neural network is exponentially synchronized in the state space \(E=[L^{2}(\Omega,\mathbb{R}^{3})]^{m}\) at a uniform exponential convergence rate \(\delta(P)\) shown in (5.3) with the parameters given in (6.12)._

Now we numerically solve the differential equations of the memristive FitzHugh-Nagumo neural network model (6.7) in a two-dimensional square domain. We use the finite difference method for the numerical scheme, programmed in Python. Choose the following parameters in the model (6.7):
\[\begin{gathered}m=4,\ \ \eta_{2}=10,\ \ \alpha_{2}=0.5,\ \ \beta_{2}=0.1,\ \ \gamma_{2}=0.05,\ \ k_{2}=0.1,\\ a_{2}=0.3,\ \ b_{2}=3,\ \ c_{2}=1,\ \ q_{2}=0.2,\ \ r_{2}=10,\\ V=0.5,\ \ r=0.1.\end{gathered}\]
Take the time-step to be \(0.00025\) and the spatial-step to be \(1\) on a \(32\times 32\) membrane. We compute the \(L^{2}\) norm of the neuron membrane potential \(u_{i}\), the recovering variable \(w_{i}\), the memductance \(\rho_{i}\), and also the vector solution \(g_{i}\) of the model equations (6.7) in the energy space \(E\). The plotted curves are shown in Figure 6 to Figure 9.

Comparing the results after 333 iterations with those after 1000 iterations in Figures 6 to 8, one can observe the synchronization tendency of the three characterizing variables \((u_{i},w_{i},\rho_{i})\) among the neurons in the simulated memristive FitzHugh-Nagumo neural network. From Figure 9, we observe that the \(L^{2}\)-norms of the pairwise differences \(\|g_{i}-g_{j}\|\) tend to 0. We can calculate the following constants involved in Theorem 5.2 based on our selection of parameters, rounded to 2 digits:
\[\begin{gathered}C_{1}=8.01,\quad C_{2}=2.89,\quad\mu=0.25,\quad K=94714.73,\quad Q=15101.69,\\ G=9.67,\quad C^{*}=0.4,\quad\kappa=15.49,\quad P=19.58,\quad\delta=3.\end{gathered}\]
The constant \(C^{*}\) from the Gagliardo-Nirenberg inequality is chosen to be 0.4 based on [4].

Table 5 to Table 7 list the sampled values of the three components \(u_{i},w_{i}\), and \(\rho_{i}\) of the simulated solution \(g_{i}\) at the same point in the domain at \(t=0\), at the 100th time-step, and at the 1000th time-step. It is seen that, although the initial values differ considerably, after a certain time the curves of \(u_{i}\), \(w_{i}\), and \(\rho_{i}\) tend to be close to each other among the various neurons.

Figure 6. The \(L^{2}\) norm of the neurons component \(u_{i}\) after 333 iterations (upper figure) and after 1000 iterations (lower figure)
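As a minimal sketch of how the constants \(\kappa\) and \(\delta(P)\) listed above can be evaluated from the formulas (5.3)-(5.4) with the generic parameters (6.12) and the bounds \(Q\) and \(G\) computed above, one may proceed as follows in Python. Depending on how the generic constants are instantiated, the intermediate value of \(\kappa\) may differ slightly from the rounded value reported in the text, while \(\delta(P)=\min\{b_{2},\gamma\}=3\) is robust to that choice.

```python
import math

# Sketch: evaluate kappa of (5.4) and the rate delta(P) of (5.3) for the
# FitzHugh-Nagumo parameters; Q, G are the a priori bounds listed above.
m, eta, P = 4, 10.0, 19.58
beta2, gamma2, k2, a2, b2 = 0.1, 0.05, 0.1, 0.3, 3.0
V, r = 0.5, 0.1
Q, G, C_star = 15101.69, 9.67, 0.4

beta = max((1 + beta2) ** 2, gamma2)   # beta from (6.12)
gamma, xi, k, a, b = b2, a2, k2, a2, b2

bracket = k**2 / b + 2 * xi**2 / gamma
kappa = (beta + (beta**2 + xi**2) / gamma + k + a**2 / b
         + 2 * math.sqrt(2 * Q) * C_star * bracket
         + 64 * Q**2 * C_star**4 / eta**3 * bracket**4)

delta = min(b, gamma, 2 * (m * P / (1 + math.exp(r * (G + abs(V)))) - kappa))
print(f"kappa ~ {kappa:.2f}, delta(P) = {delta:.2f}")
```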
The synchronization result rigorously proved in this work is illustrated by the presented example with sample selections of the system parameters and a randomized set of initial data. Our numerical simulation also exhibits that the neuron potentials \(u_{i}\) seem to be synchronized fastest within a limited time, while it takes a much longer time to observe the synchronization of the other two variables \(w_{i}\) and \(\rho_{i}\). This observation actually supports the neurodynamical conjecture that adding a nonlinear memristor coupling in the neuron potential equation would accelerate the synchronization of the main variable, the neuron membrane potential. On the other hand, it also hints that although the main Theorem 5.2 confirms the exponential synchronization at a uniform but possibly small convergence rate, each of the three components may have a different synchronization rate, which turns out to be an interesting new problem for further research.

\begin{table} \begin{tabular}{c|c|c|c} & Initial Value & At the 100th time-step & At the 1000th time-step \\ \hline \(w_{1}\) & 0.00280643 & 0.02672859 & 0.17726598 \\ \(w_{2}\) & 0.02656693 & 0.04889201 & 0.18859465 \\ \(w_{3}\) & 0.01219399 & 0.03569609 & 0.18192625 \\ \(w_{4}\) & 0.09637967 & 0.11393263 & 0.22179753 \\ \end{tabular} \end{table} Table 6. Comparison of the \(w_{i}\) at the point \(x=10,\ y=10\)

\begin{table} \begin{tabular}{c|c|c|c} & Initial Value & At the 100th time-step & At the 1000th time-step \\ \hline \(\rho_{1}\) & 0.00822771 & 0.02672859 & 0.00067401 \\ \(\rho_{2}\) & 0.01216352 & 0.04889201 & 0.00101167 \\ \(\rho_{3}\) & 0.08205714 & 0.03569609 & 0.00674807 \\ \(\rho_{4}\) & 0.08914909 & 0.11393263 & 0.00734463 \\ \end{tabular} \end{table} Table 7. Comparison of the \(\rho_{i}\) at the point \(x=10,\ y=10\)

Figure 7. The \(L^{2}\) norm of the neurons component \(w_{i}\) after 333 iterations (upper figure) and after 1000 iterations (lower figure)

Figure 8. The \(L^{2}\) norm of the neurons component \(\rho_{i}\) after 333 iterations (upper figure) and after 1000 iterations (lower figure)

Figure 9. The \(L^{2}\) norm of pairwise differences between neural network solutions after 333 iterations (upper figure) and after 1000 iterations (lower figure)

**Conclusions**. We summarize the new contributions of the results in this paper.

1. In this paper we propose and study a general mathematical framework that can cover many typical and useful partial-ordinary differential equation models to characterize spatiotemporal dynamics of biological neural networks with memristors and weak synaptic coupling, which is a challenging and open problem in mathematical neuroscience and potentially in complex artificial learning dynamics.

2. The advancing contributions of this work are in three aspects. First, it is proved in Section 3 and Section 4 that the solution semiflow of these memristive neural networks exhibits dissipative dynamics in common and admits ultimately uniform bounds in multiple norms. Second and more important is the exponential synchronization in Theorem 5.2 and Theorem 5.3, in which we rigorously proved an explicit threshold condition in terms of the involved biological parameters and one mathematical parameter to ensure a synchronization at a uniform exponential rate in the \(L^{2}\) energy norm. Third, we provide an effective analytic approach and a significant methodology to pursue the synchronization investigation through scaled _a priori_ estimates, leverage of dynamic integral inequalities, and sharp interpolation inequalities (such as the crucial Gagliardo-Nirenberg inequalities on Sobolev spaces) to tackle and control the memristive effect and nonlinearity by the weak synaptic coupling strength only.

3. Two illustrative applications of the main result on synchronization are presented by the memristive diffusive Hindmarsh-Rose neural networks and FitzHugh-Nagumo neural networks. It is expected that the mathematical framework and approach presented in this work and related computational simulations can be further generalized to a broader field and integrated with more applications in neurodynamics and network dynamics.
2310.06865
Neural Network Analysis of S-Star Dynamics: Implications for Modified Gravity
We studied the dynamics of S-stars in the Galactic center using the physics-informed neural networks. The neural networks are considered for both, Keplerian and the General Relativity dynamics, the orbital parameters for stars S1, S2, S9, S13, S31, and S54 are obtained and the regression problem is solved. It is shown that the neural network is able to detect the Schwarzschild precession for S2 star, while the regressed part revealed an additional precession. Attributing the latter to a possible contribution of a modified gravity, we obtain a constraint for the weak-field modified General Relativity involving the cosmological constant which also deals with the Hubble tension. Our analysis shows the efficiency of neural networks in revealing the S-star dynamics and the prospects upon the increase of the amount and the accuracy of the observational data.
N. Galikyan, Sh. Khlghatyan, A. A. Kocharyan, V. G. Gurzadyan
2023-10-04T13:04:52Z
http://arxiv.org/abs/2310.06865v1
# Neural Network Analysis of S-Star Dynamics: Implications for Modified Gravity

###### Abstract

We studied the dynamics of S-stars in the Galactic center using the physics-informed neural networks. The neural networks are considered for both the Keplerian and the General Relativity dynamics; the orbital parameters for stars S1, S2, S9, S13, S31, and S54 are obtained and the regression problem is solved. It is shown that the neural network is able to detect the Schwarzschild precession for the S2 star, while the regressed part revealed an additional precession. Attributing the latter to a possible contribution of a modified gravity, we obtain a constraint for the weak-field modified General Relativity involving the cosmological constant, which also deals with the Hubble tension. Our analysis shows the efficiency of neural networks in revealing the S-star dynamics and the prospects upon the increase of the amount and the accuracy of the observational data.

pacs: 98.80.-k Cosmology

## 1 Introduction

The motion of S-stars in the Galactic center has become an important source for probing the value of the mass of the supermassive black hole of Sgr A* and a natural laboratory for the testing of General Relativity (GR) and different theories of gravity. The S-star dynamics is being studied within dedicated observational surveys involving different methods of data analysis; see [1] and references therein. An essential area of the observational data analysis is dedicated to the testing of modified gravity theories based on the reconstruction of the dynamics of individual S-stars, e.g. [2; 3; 4; 5; 6; 7; 8; 9], thus complementing the recent tests of GR, e.g. [10; 11; 12; 13; 14].

In this paper we involve neural networks, namely, the physics-informed neural networks (PINN) [15; 16], to analyse the dynamics of the S-stars. Neural networks of various architectures are already widely used in a broad range of physical problems, from particle physics to astrophysics, e.g. [17; 18; 19; 20]. In our analysis we used neural networks for both the Newtonian theory and General Relativity, to explicitly reveal the differences that the network architectures are able to trace. We obtain the orbital parameters of certain stars, which then enables us to constrain a weak-field modified General Relativity.

As is known, in contrast to the Keplerian orbital motion, in GR there occurs an apsidal precession (for a central body without rotational momentum, i.e. for the Schwarzschild metric), yielding during one period of revolution the precession shift [21]
\[\delta\varphi_{\rm SP}=3\frac{r_{g}}{a(1-e^{2})}\pi, \tag{1}\]
where \(r_{g}\) is the gravitational radius of the central body, \(a\) is the semi-major axis and \(e\) is the eccentricity of the orbit. Since for the S2 star the observational data are available for more than one revolution period, it has been used to test GR and constrain possible deviations from it [1].

In this paper, as an additional deviation, we consider the possible contribution to the precession rate from the weak-field modified GR involving the cosmological constant \(\Lambda\). That modification is based on the condition of identity of a sphere's and a point mass's gravity, and the \(\Lambda\)-term naturally enters as an additional term in the Newtonian force [22]
\[{\bf F}(r)=\left(-\frac{A}{r^{2}}+\Lambda r\right)\hat{\bf r}.
Then the metric tensor components have the form
\[g_{00}=1-\frac{2GM}{rc^{2}}-\frac{\Lambda r^{2}}{3};\quad g_{rr}=\left(1-\frac{2GM}{rc^{2}}-\frac{\Lambda r^{2}}{3}\right)^{-1}, \tag{3}\]
where the currently estimated value for the cosmological constant is \(\Lambda=1.11\times 10^{-52}\,\mathrm{[m]^{-2}}\) [23]. This metric is known as the Schwarzschild-de Sitter metric [24] and, considered from Eq.(2) as weak-field GR, it provides a description of astrophysical structures such as galaxy groups and clusters [25; 26; 27]. Eq.(2) makes it possible to include the cosmological constant in the McCrea-Milne cosmology [28; 29]. The additional \(\Lambda\)-term creates a non-force-free field inside a spherical shell, and it appears efficient in describing the properties of galactic halos and other observable effects [30; 31; 32]. Its role in the relative instability of \(N\)-body gravitating systems has been analyzed in [33]. The consideration of \(\Lambda\) as a fundamental physical constant links the cosmological evolution with the notion of information [34]. This approach also enables one to describe the Hubble tension as a result of local and global flows [35] and to address the structure formation in the local Universe [36; 37].

Thus, our first goal is to apply PINN to obtain the orbital parameters of S-stars, i.e. the eccentricity \(e\) and focal parameter \(p\), and the mass of the central body, and to solve the regression problem for \(u(\varphi)\), where \(u\) is the inverse of the radius and \((r,\varphi)\) are the polar coordinates of the motion. Then, for the Schwarzschild metric, from the S2 star data we get a constraint for the GR modification involving the cosmological constant. Note that we do not take into account the interaction between individual stars, i.e. we do not consider the \(N\)-body problem. The possible role of an extended mass distribution on the dynamics of S-stars has been considered earlier (e.g. [38; 39]); however, the analysis by the GRAVITY collaboration [40] for plausible density profiles shows that the extended mass component inside the S2 star apocenter must be less than \(0.1\%\) of the mass of the central black hole. As mentioned, we used the data for S2. Future, more accurate observations will provide more information on the role of the extended mass distribution.

## 2 Observational Data

To analyze the motion of S-stars we used the available observational data as studied in [41]. The coordinates of the S-stars' motion are given by right ascension and declination; using the Thiele-Innes constants and the real orbits' argument of periapsis, inclination, and ascending node angles, we can transform the sky-plane coordinates into the Cartesian coordinates \((x\,\mathrm{[Au]},y\,\mathrm{[Au]})_{\mathrm{Star}_{i}}\) [1; 3]. Then, we normalize the Cartesian coordinates in two ways: 1) each star's data normalized individually, \(\left(\tilde{x}:=\frac{x}{S_{i}},\tilde{y}:=\frac{y}{S_{i}}\right)_{\mathrm{Star}_{i}}\), with a normalization coefficient \(S_{i}\) corresponding to the \(i\)-th star; 2) all stars' data normalized by a single normalization coefficient \(S\), \(\left(\tilde{x}:=\frac{x}{S},\tilde{y}:=\frac{y}{S}\right)_{\mathrm{Star}_{i}}\). Numerical simulations have demonstrated that the outcomes remain consistent regardless of the specific choice. We considered the stars given in Tab.(1) and in Fig.(1) [41].
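For concreteness, a minimal sketch of this preparation step (normalization and the construction of the polar angles \(\varphi_{i}\) and regression targets \(u_{i}\) used in the next section) is shown below on synthetic, S2-like Keplerian data; in practice the coordinates come from the transformed observational data described above.

```python
import numpy as np

# Synthetic stand-in for the transformed Cartesian coordinates (in Au)
# of one star; real data come from the sky-plane transformation above.
rng = np.random.default_rng(0)
phi_true = np.sort(rng.uniform(0.0, 2.0 * np.pi, 145))
r_true = 228.0 / (1.0 + 0.884 * np.cos(phi_true))   # S2-like ellipse (Tab. 1)
x, y = r_true * np.cos(phi_true), r_true * np.sin(phi_true)

# Option 1: normalize by a per-star coefficient S_i.
S_i = np.abs(np.concatenate([x, y])).max()
x_t, y_t = x / S_i, y / S_i

# Inputs and targets for the regression problem u(phi).
phi = np.arctan2(y_t, x_t)        # robust form of arctan(y/x)
u = 1.0 / np.hypot(x_t, y_t)      # u = 1/r
```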
## 3 Neural networks

### Implementation of the PINN

PINNs have a broad range of applications [42], since they allow combining neural network approaches with physical models, which are often presented in the form of differential equations, both ODE and PDE [15; 16]. These neural networks are already used in such areas as fluid mechanics [43], nonlinear optics (for solving the nonlinear Schroedinger equation) [44], heat transfer problems [45], industrial problems (power systems [46] and main bearing fatigue prognosis [47]), medicine (cardiac activation mapping [48] and cardiovascular flows modeling [49]), etc.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Name & eccentricity \(e\) & focal parameter \(p\) \(\mathrm{[Au]}\) & Period \(T\) \(\mathrm{[Yr]}\) & Data count \\ \hline S1 & 0.556 & 3412 & 166 & 161 \\ \hline S2 & 0.884 & 228 & 16 & 145 \\ \hline S9 & 0.644 & 1323 & 51 & 160 \\ \hline S13 & 0.425 & 1796 & 49 & 127 \\ \hline S31 & 0.550 & 2601 & 108 & 51 \\ \hline S54 & 0.893 & 2017 & 477 & 94 \\ \hline \end{tabular} \end{table} Table 1: S-stars whose parameters were used during training. The values of the orbital parameters and periods were obtained by a Keplerian fit [41].

There are different approaches and implementations of PINN; in this paper we use PINN to solve the regression problem using the differential equations of the corresponding physical models. This approach is based on a so-called physical loss function used during training together with a classic regression loss \(L_{reg}(y,f(x))\) (e.g. mean squared error), where \(y\) is the ground truth value and \(f(x)\) is the model. If the physical process is described by Eq.(4),
\[F(x,y,y^{\prime}_{x},y^{\prime\prime}_{x},\ldots,y^{(n)}_{x})=0, \tag{4}\]
then the physical loss is given by the loss function in Eq.(5),
\[L_{phys}(f(x),x)=F^{2}\left(x,f(x),f^{\prime}(x),f^{\prime\prime}(x),\ldots,f^{(n)}(x)\right). \tag{5}\]
To calculate the total loss value one should use the actual data \(\{(y_{i},x_{i})\}_{i=1}^{N}\) and sample \(\{(\hat{x}_{i})\}_{i=1}^{N_{p}}\) points from a larger data domain. Then the loss function may be calculated by Eq.(6), where \(\alpha\) is a given training step-dependent regularization parameter:
\[\mathcal{L}(f(\cdot),y,x,\hat{x})=\frac{1}{N}\sum_{i=1}^{N}L_{reg}(y_{i},f(x_{i}))+\frac{\alpha}{N_{p}}\sum_{i=1}^{N_{p}}L_{phys}(f(\hat{x}_{i}),\hat{x}_{i}). \tag{6}\]

We use PINN for the following reasons:

1. It can be used in cases when not much data are available;
2. One can extrapolate the results of the regression to a bigger data domain;
3. One can consider parameters in the differential equation as trainable parameters for the NN, and estimate their values by training (as illustrated in the sketch below).
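As an illustration of how Eq.(6) is assembled in practice, a minimal sketch of the composite loss follows, assuming a PyTorch implementation (the paper does not name its framework) and using the Keplerian second-order equation derived later in Eq.(10) as the physical model \(F\); `model` is any pointwise network \(\hat{u}(\varphi)\) mapping tensors of shape (N, 1) to (N, 1).

```python
import torch

def total_loss(model, phi_obs, u_obs, phi_phys, p_hat, alpha):
    """Composite PINN loss of Eq.(6): regression MSE on observed points
    plus the squared residual of the ODE u'' = -u + 1/p (cf. Eq.(10))
    evaluated on points sampled from a larger domain."""
    # Regression term L_reg on the observed (phi_i, u_i) pairs.
    loss_reg = torch.mean((model(phi_obs) - u_obs) ** 2)

    # Physical term L_phys on the sampled physical angles.
    phi = phi_phys.clone().requires_grad_(True)
    u = model(phi)
    du = torch.autograd.grad(u.sum(), phi, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), phi, create_graph=True)[0]
    residual = d2u + u - 1.0 / p_hat      # F(phi, u, u', u'') of Eq.(4)
    loss_phys = torch.mean(residual ** 2)

    return loss_reg + alpha * loss_phys
```

Declaring `p_hat = torch.nn.Parameter(torch.tensor(1.0))` and registering it with the optimizer turns the focal parameter into a trainable quantity, which is how the individual parameters \(P^{I}\) are estimated.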
### Models and metrics

As mentioned above, all considered models consist of two parts: the regression part, which contains the fully-connected layers, and the physical part, in which the differential equations are solved. The differential equations include both parameters common to all stars, \(P^{C}\), and individual parameters, \(P^{I}\), that are different for each star.

Figure 1: The coordinates of the considered stars: points indicate the observed data and dashed lines are the ellipses obtained from the orbital parameters of the Keplerian fit [41].

The input data include the observed polar angles \(\varphi_{i}:=\arctan\frac{\tilde{y}_{i}}{\tilde{x}_{i}}\) and the physical polar angles \((\varphi_{i})_{\text{Phys}}\), within the limits in which we want to solve the regression problem. For the Keplerian case the output of the regression part is the predicted \(\hat{\tilde{u}}_{i}\), which is compared with the observed/target \(\tilde{u}_{i}:=\frac{1}{\tilde{r}_{i}}=\frac{1}{\sqrt{\tilde{x}_{i}^{2}+\tilde{y}_{i}^{2}}}\) using the MSE loss function. For the GR case, the output is formulated via the Schwarzschild metric using the Darwin variable \(\chi_{i}\) [52].

Two different training methods were used: 1) parallel training, when the data of all stars are simultaneously fed into the input of copies of the neural network and, in the physical part, the common training parameters \(P^{C}\) (such as the mass of the central body) are shared; 2) individual training, when the data of each star are used separately and the training is sequential from star to star; in this case there are no common parameters in the physical part. The schemes in Fig.(2) illustrate how the training process works.

To obtain statistically significant results, the physical losses, taking into account the regularization coefficient, must be less than the values of the terms that enter into them. As for the physical losses, we have used both the second- and first-order differential equations simultaneously (see Eq.(10) or Eq.(12)). The reason is that the first-order equations usually get stuck on a constant value after the extremum point, i.e. when \(\frac{du}{d\varphi}=0\) in Eq.(10) or \(\frac{d\chi}{d\varphi}=0\) in Eq.(12), because of the zero under the square root at those points. Using both equations has helped the NN to avoid that problem and achieve a better performance.

Figure 2: Training scheme: \((\varphi)_{\text{Star}_{j}}\) are the polar angles of the \(j\)-th S-star, \((\varphi)_{\text{Phys}}\) is the physical polar angles set, i.e. the domain (wider than the observational domain) in which we want to predict the motion, \(\varphi_{i}\) is the \(i\)-th polar angle for the corresponding star or physical angles, \(P^{I}\) is the star's individual parameters (such as eccentricity), \(P^{C}\) is the common parameters (such as the central body mass) and \(\mathbb{D}(\cdot)\) is the physical model's differential equations.

To estimate the performance of the models we use the metrics given by Eq.(7):
\[\begin{split}&\mathcal{M}_{\text{model-data}}=\mathbb{E}\left[1-\frac{|u_{\text{Model}}-u_{\text{Star}}|}{u_{\text{Star}}}\right],\\ &\mathcal{M}_{\text{data-physics}}=\mathbb{E}\left[1-\frac{|u_{\text{Phys}}-u_{\text{Star}}|}{u_{\text{Star}}}\right],\\ &\mathcal{M}_{\text{model-physics}}=\mathbb{E}\left[1-\frac{|u_{\text{Model}}-u_{\text{Phys}}|}{\frac{1}{2}(u_{\text{Model}}+u_{\text{Phys}})}\right],\end{split} \tag{7}\]
where \(u_{\text{Model}}\) is the prediction of the model, \(u_{\text{Star}}\) is the used data, and \(u_{\text{Phys}}=\frac{1}{\hat{p}}(1+\hat{e}\cos(\varphi-\varphi_{0}))\) is the prediction by the obtained orbital parameters. These metrics, besides showing us how well the model has learned the data, also show how well the model has "understood" the physics. Note that, while the "data" metrics are calculated for the given data points, \(\mathcal{M}_{\text{model-physics}}\) is calculated for the physical polar angles \((\varphi)_{\text{Phys}}\).

## 4 Numerical experiments

### Kepler case

First, we consider the case of the Keplerian potential,
\[V(r)=-\frac{M}{r}, \tag{8}\]
i.e. we assume that the stars move along ellipses.
In this case, the energy \(E\) and the angular momentum \(L\) are conserved, together with the energy \(\tilde{E}=\frac{E}{m}\) and angular momentum \(\tilde{L}=\frac{L}{m}\) normalized by the mass \(m\) of the orbiting object:
\[\begin{split}\tilde{E}&=\frac{\dot{r}^{2}}{2}+\frac{r^{2}\dot{\varphi}^{2}}{2}-V(r),\\ \tilde{L}&=r^{2}\dot{\varphi},\end{split} \tag{9}\]
where the dot denotes the time derivative [50]. Substituting \(u=\frac{1}{r}\), \(e=\sqrt{1+\frac{2\tilde{E}\tilde{L}^{2}}{M^{2}}}\), and \(p=\frac{\tilde{L}^{2}}{M}\), we arrive at the first equation in Eq.(10); the second one is derived by differentiating the first [51]:
\[\begin{split}\left(\frac{du}{d\varphi}\right)^{2}&=\frac{(e^{2}-1)}{p^{2}}-u^{2}+\frac{2}{p}u,\\ \frac{d^{2}u}{d\varphi^{2}}&=-u+\frac{1}{p}.\end{split} \tag{10}\]

In this case we carried out the individual training scheme with the training NN block shown in Fig.(4). The training parameters \(P^{I}\) are the eccentricity and the focal parameter. The network itself (the regression part) consists of 4 fully connected layers with \((32,64,32,1)\) nodes, respectively, with a tanh activation function on all layers except the last one. We considered the S1, S2, S9, S13 and S54 stars. The predicted orbital parameters \((\hat{e},\hat{p})\), together with the estimated metrics of Eq.(7), are shown in Tab.(2). The trajectories predicted by the model are shown in Fig.(3), which also shows the predictions of the models outside the observational domain.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Star Name & \(e\) & \(\hat{e}\) & \(p\) [Au] & \(\hat{p}\) [Au] & \(\mathcal{M}_{\text{model-data}}\) & \(\mathcal{M}_{\text{data-physics}}\) & \(\mathcal{M}_{\text{model-physics}}\) \\ \hline S1 & 0.556 & 0.554 & 3412 & 3405 & 0.9933 & 0.9896 & 0.9942 \\ \hline S2 & 0.884 & 0.872 & 228 & 226 & 0.9871 & 0.9763 & 0.9894 \\ \hline S9 & 0.644 & 0.603 & 1323 & 1406 & 0.9892 & 0.9533 & 0.9920 \\ \hline S13 & 0.425 & 0.454 & 1796 & 1792 & 0.9888 & 0.9724 & 0.9892 \\ \hline S54 & 0.893 & 0.712 & 2017 & 1862 & 0.9662 & 0.9359 & 0.9738 \\ \hline \end{tabular} \end{table} Table 2: Results for the Kepler case: the \((e,p)\) orbital parameters are from [41], \((\hat{e},\hat{p})\) are the network predictions.

Figure 3: The model predictions for the Kepler case: on the left, the predicted trajectories are shown in the Cartesian coordinates \((x\ [\mathrm{Au}],y\ [\mathrm{Au}])\), where \((0,0)\) is the central mass; on the right, the models' regression results are shown.

### GR case

The next step is to consider the Schwarzschild metric as the physical model for the PINN. Following [52] and introducing the Darwin variable \(\chi\), the equation of motion can be written as
\[u=\frac{\mu}{M}(1+e\cos\chi), \tag{11}\]
\[\begin{split}\left(\frac{d\chi}{d\varphi}\right)^{2}&=1-2\mu(3+e\cos\chi),\\ \frac{d^{2}\chi}{d\varphi^{2}}&=\mu e\sin\chi,\end{split} \tag{12}\]
where \(\mu:=\frac{M}{p}\). In this case we carried out the parallel training scheme; the training NN block is shown in Fig.(4). The individual training parameters \(P^{I}\) are the eccentricity and the focal parameter, and the common parameter \(P^{C}\) is the mass of the central body. The network itself consists of 4 fully connected layers with \((5,10,5,1)\) nodes, respectively, with a tanh activation function on all layers except the last one. Moreover, artificial data are generated based on the orbital parameters from [41] for the S13, S31, and S54 stars to close the orbits. The results of the model inference are shown in Fig.(5).
As for the star S54, we see that the model actually learned the data, but it failed to learn the physical part and thus to extrapolate the trajectory to angles \(>2\pi\). For the remaining stars, the model learned their orbital parameters well and was able to extrapolate outside the observed data.

Figure 4: A more detailed NN block scheme for the Kepler and GR cases from Fig.2(c).

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Star Name & \(e\) & \(\hat{e}\) & \(p\) [Au] & \(\hat{p}\) [Au] & \(\mathcal{M}_{\text{model-data}}\) & \(\mathcal{M}_{\text{data-physics}}\) & \(\mathcal{M}_{\text{model-physics}}\) \\ \hline S2 & 0.884 & 0.884 & 228 & 223 & 0.9755 & 0.9768 & 0.9789 \\ \hline S13 & 0.425 & 0.418 & 1796 & 1706 & 0.9844 & 0.9829 & 0.9974 \\ \hline S31 & 0.550 & 0.552 & 2601 & 2675 & 0.9803 & 0.9826 & 0.9835 \\ \hline S54 & 0.893 & 1.001 & 2017 & 4.787 & 0.9660 & \(<0\) & \(<0\) \\ \hline \end{tabular} \end{table} Table 3: Results for the GR case: the \((e,p)\) orbital parameters are from [41], \((\hat{e},\hat{p})\) are the network predictions.

Figure 5: Model predictions. Note that the data domain used for the model's physical part training is set to be \([0,4\pi]\), which means that, besides learning the data, the model also extrapolates it, as expected from the PINN approach.

### Precession & \(\Lambda\) constraint

The S2 star data make it possible to check GR based on the precession [1]. The first-order GR (Schwarzschild) correction, i.e. the following perturbation to the Keplerian potential,
\[V(r)=-\frac{M}{r}+\frac{r_{g}L^{2}}{2}\frac{1}{r^{3}}, \tag{13}\]
where \(L=\sqrt{pM}\) is the angular momentum of the test particle, corresponds to the Schwarzschild precession rate of Eq.(1). Statistical analysis of the data within the first-order PPN by the GRAVITY Collaboration [1] reports a deviation from Eq.(1) by a factor of \(f_{\rm SP}=1.10\pm 0.19\):
\[\delta\varphi_{\rm GRAV}=f_{\rm SP}\cdot\delta\varphi_{\rm SP}=3f_{\rm SP}\frac{\pi r_{g}}{a(1-e^{2})}; \tag{14}\]
\[f_{\rm SP}=0,\quad{\rm Keplerian\ case},\]
\[f_{\rm SP}=1,\quad{\rm GR\ case}.\]
To find the contribution of the \(\Lambda\)-term to the precession, based on Eq.(2), we have the following perturbed potential
\[V(r)=-\frac{M}{r}+\delta V_{\rm GR}(r)+\delta V_{\Lambda}(r)=-\frac{M}{r}+\frac{r_{g}L^{2}}{2}\frac{1}{r^{3}}-\frac{\Lambda}{6}r^{2}, \tag{15}\]
which, along with \(\delta\varphi_{\rm SP}\), leads to an additional term \(\delta\varphi_{\Lambda}\)
\[\delta\varphi_{\Lambda}=\frac{2a^{3}(1-e^{2})^{\frac{1}{2}}\pi}{r_{g}}\Lambda. \tag{16}\]
Based on the values reported by [1; 41], we find a constraint on the value of the cosmological constant from
\[\delta\varphi_{\rm SP}+\delta\varphi_{\Lambda}=\delta\varphi_{\rm GRAV}, \tag{17}\]
which yields the following upper constraint on \(\Lambda\):
\[\Lambda\leq 1.0\times 10^{-36}\ {\rm[m]^{-2}}. \tag{18}\]
A small numerical check of this bound is sketched below.
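As a rough check, the bound of Eq.(18) can be reproduced from Eqs.(14), (16) and (17) with the S2 parameters of Tab.(1); the sketch below assumes \(r_{g}\approx 0.085\) Au for Sgr A* (corresponding to \(M\approx 4.3\times 10^{6}\,M_{\odot}\)) and takes the upper end of the GRAVITY interval for \(f_{\rm SP}\).

```python
import numpy as np

AU = 1.496e11                  # meters per astronomical unit
e, p = 0.884, 228.0            # S2 orbital parameters from Tab.(1), p in Au
r_g = 0.085                    # assumed gravitational radius of Sgr A* [Au]
a = p / (1.0 - e**2)           # semi-major axis [Au]

# Schwarzschild precession per revolution, Eq.(1).
dphi_SP = 3.0 * np.pi * r_g / (a * (1.0 - e**2))

# Allowed excess precession for f_SP at the upper end of 1.10 +/- 0.19.
dphi_Lambda = (1.10 + 0.19 - 1.0) * dphi_SP

# Invert Eq.(16) for Lambda and convert Au^-2 -> m^-2.
Lam = dphi_Lambda * r_g / (2.0 * a**3 * np.sqrt(1.0 - e**2) * np.pi)
print(f"Lambda <= {Lam / AU**2:.1e} m^-2")   # of order 1e-36, cf. Eq.(18)
```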
We can also find the constraint on \(\Lambda\) from the PINN. After individual training on the S2 star, we obtain the following values for the parameters, with their confidence intervals for the given NN, together with the metrics:
\[\begin{array}{ll}\hat{e}=0.88512\pm 0.00001,&\hat{p}=219.2\pm 0.2\,{\rm[Au]},\quad\hat{M}=0.04\,{\rm[Au]},\\ \mathcal{M}_{\text{model-data}}=0.9865,&\mathcal{M}_{\text{data-physics}}=0.9881,\quad\mathcal{M}_{\text{model-physics}}=0.9977.\end{array} \tag{19}\]
The predicted trajectory of the motion of the S2 star is shown in Fig.(6). It is important to note that, after a certain point during training, the value of \(\hat{M}\) was fixed to \(0.04\,{\rm[au]}\). Although the model was able to reach this value and to "understand" the physical meaning of \(\hat{M}\), its value was not stable due to the quality of the data. The "understanding" is indicated by the closeness (\(\pm 0.1^{\prime}\)), after that point, of the precession rates calculated from the physical parameters, Eq.(21), and from the regression, Eq.(20); before that point the model had already obtained the values of the eccentricity and the focal parameter together with the regression results, but the value of the physical precession was an order of magnitude higher.

During training, we calculate two values of the precession rate:
\[\delta\varphi_{\rm Reg}=\varphi\left(\min_{1}\hat{u}\right)-\varphi\left(\min_{0}\hat{u}\right)-2\pi,\quad{\rm the\ precession\ rate\ of\ the\ regression\ part}\ \hat{u}(\varphi), \tag{20}\]
\[\delta\varphi_{\rm Phys}=3\frac{\hat{r}_{g}}{\hat{p}}\pi,\quad{\rm the\ precession\ rate\ of\ the\ physical\ part}. \tag{21}\]
Taking the moving average over every 500 epochs, we obtain the following results:
\[\delta\varphi_{\rm Reg}=11.84^{\prime};\quad\sigma_{\rm Reg}=0.03^{\prime}, \tag{22}\]
\[\delta\varphi_{\rm Phys}=11.82^{\prime};\quad\sigma_{\rm Phys}=0.02^{\prime}. \tag{23}\]
During the calculations we took into account that, for significant results, the physical loss must be less than the individual terms that enter into it, and we chose the step in \(\varphi\in[0;4\pi]\) used to calculate \(\delta\varphi_{\rm Reg}\) to be less than the difference \(\delta\varphi_{\rm Reg}-\delta\varphi_{\rm Phys}\). Using Eq.(16) and Eq.(17), and taking \(\delta\varphi_{\rm Reg}(+3\sigma_{\rm Reg})\) as the total precession rate and \(\delta\varphi_{\rm Phys}(-3\sigma_{\rm Phys})\) as the Schwarzschild precession rate, we obtain for \(\Lambda\) the following upper constraint:
\[\Lambda\leq 5.8\times 10^{-38}\ {\rm[m]^{-2}}. \tag{24}\]

## 5 Conclusions

We studied the dynamics of the S-stars using physics-informed neural networks (PINN), aiming first to reveal the values of the orbital parameters of each star. Both Keplerian and General Relativity dynamics were considered to reveal their differences for the given network architecture. It was shown that, with a given physical model, one can obtain the orbital parameters of the stars with good accuracy, and the regression problem can be solved, providing the dependence \(u(\varphi)\). The neural network was able to "see" the Schwarzschild precession for the S2 star, which made it possible to find the precession rate based on both the regression part and the physical part. Since the regressed part is more "flexible" and is directly related to the observational data, the difference in the values of the precession rates can be attributed to an additional precession that occurs due to terms not entering the physical model. Specifically, as such a contribution we considered the gravity modification with the cosmological constant \(\Lambda\) in Eq.(2). Ultimately, this procedure enabled us to find a constraint on the cosmological constant for the current data accuracy, in agreement with its adopted value. The same analysis was carried out using the results for the precession obtained by the GRAVITY collaboration [1]. Our analysis reveals the efficiency of neural networks in the study of the S-star dynamics, and stronger constraints on GR and gravity modifications can be expected from forthcoming observational data.

## 6 Acknowledgment

Sh.K. acknowledges the ANSEF grant 23AN:PS-astroth-2922.
## 7 Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
2306.04994
Ambulance Demand Prediction via Convolutional Neural Networks
Minimizing response times is crucial for emergency medical services to reduce patients' waiting times and to increase their survival rates. Many models exist to optimize operational tasks such as ambulance allocation and dispatching. Including accurate demand forecasts in such models can improve operational decision-making. Against this background, we present a novel convolutional neural network (CNN) architecture that transforms time series data into heatmaps to predict ambulance demand. Applying such predictions requires incorporating external features that influence ambulance demands. We contribute to the existing literature by providing a flexible, generic CNN architecture, allowing for the inclusion of external features with varying dimensions. Additionally, we provide a feature selection and hyperparameter optimization framework utilizing Bayesian optimization. We integrate historical ambulance demand and external information such as weather, events, holidays, and time. To show the superiority of the developed CNN architecture over existing approaches, we conduct a case study for Seattle's 911 call data and include external information. We show that the developed CNN architecture outperforms existing state-of-the-art methods and industry practice by more than 9%.
Maximiliane Rautenstrauß, Maximilian Schiffer
2023-06-08T07:29:42Z
http://arxiv.org/abs/2306.04994v1
# Ambulance Demand Prediction via Convolutional Neural Networks

###### Abstract

Minimizing response times is crucial for emergency medical services to reduce patients' waiting times and to increase their survival rates. Many models exist to optimize operational tasks such as ambulance allocation and dispatching. Including accurate demand forecasts in such models can improve operational decision-making. Against this background, we present a novel convolutional neural network (CNN) architecture that transforms time series data into heatmaps to predict ambulance demand. Applying such predictions requires incorporating external features that influence ambulance demands. We contribute to the existing literature by providing a flexible, generic CNN architecture, allowing for the inclusion of external features with varying dimensions. Additionally, we provide a feature selection and hyperparameter optimization framework utilizing Bayesian optimization. We integrate historical ambulance demand and external information such as weather, events, holidays, and time. To show the superiority of the developed CNN architecture over existing approaches, we conduct a case study for Seattle's 911 call data and include external information. We show that the developed CNN architecture outperforms existing state-of-the-art methods and industry practice by more than 9%.

**Keywords:** AI in health care, spatio-temporal forecasting, convolutional neural networks

## 1 Introduction

Reducing response times is paramount for emergency medical services (EMS) to provide first aid in a timely manner. However, tight budgets pressure EMS systems to minimize operational expenses, resulting in limited availability of resources. Demographic changes such as an aging population can further increase ambulance demands, intensifying the need for an efficient use of resources. Climate change, causing more frequent extreme weather conditions such as heat spells or heavy rainfalls, can additionally challenge EMS systems in the future. To tackle these challenges and to reduce response times, several models exist to optimize operational tasks such as ambulance allocation and dispatching, see, e.g., Brotcorne et al. (2003), Aboueljinane et al. (2013), Belanger et al. (2019), and Farahani et al. (2019) for literature overviews. Embedding accurate demand forecasts in such models can significantly improve operational decision-making.

In the literature, various machine learning approaches such as artificial neural networks (ANNs), tree-based models, or support vector regressions (SVRs) exist to predict ambulance demand. For other prediction tasks, such as traffic data and mobility demand predictions, convolutional neural networks (CNNs) yield highly accurate predictions, see, e.g., Wang et al. (2018) and Guo et al. (2019). Although CNNs can outperform state-of-the-art approaches for such prediction tasks, they have not yet been applied to ambulance demand prediction. Against this background, we present a novel CNN architecture that transforms time series information into heatmaps to forecast ambulance demand.

Our contribution to the existing literature is three-fold: First, we present a flexible, generic CNN architecture, allowing for the inclusion of external features with varying dimensions. This concept differs from most state-of-the-art approaches that divide the given region into subregions and derive local forecasts for each of them separately, neglecting possible spatial correlations.
In contrast, we enable the detection of correlations in space and time by including three-dimensional convolutional layers in a CNN architecture and obtain the predictions for all subregions simultaneously. Most CNN architectures neglect external information as they have mainly been applied to image classification and recognition tasks. Nonetheless, effectively predicting ambulance demand with CNN architectures requires the incorporation of external features. Thus, we integrate historical ambulance demand and external information such as weather, events, holidays, time, weekdays, and months. Second, we show how to jointly perform feature selection and hyperparameter tuning by applying Bayesian optimization (BO). We treat the decision of whether to include a feature or not as an additional hyperparameter of the prediction model and add these parameters to the hyperparameter tuning process. To tackle the high-dimensional search space, we introduce a novel hierarchical BO approach and apply BO with dimension dropout. Third, we analyze the features' importance by calculating the SHapley Additive exPlanation (SHAP) values representing the contribution of each feature to the prediction. Additionally, we explore the CNN's performance for different forecasting horizons by conducting a sensitivity analysis.

We base our results on a case study of Seattle's 911 call data and include external information. Results show that the developed CNN outperforms state-of-the-art methods and industry practice by more than 9%. Incorporating feature selection in the BO reduces the number of parameters by 40.4% while the performance decreases by less than 0.05%. The SHAP values show that the CNN is more robust to changes in the historical ambulance demand compared to its benchmarks. This enables a better handling of demand outliers or data inconsistencies. We further show that the BO with dimension dropout and the hierarchical BO outperform basic BO approaches and random search.

The remainder of this paper is structured as follows. We first present related literature in Section 2. Section 3 describes our problem setting. Section 4 introduces the CNN architecture and our algorithmic framework to conduct feature selection and hyperparameter tuning. In Section 5, we introduce the numerical case study for Seattle's 911 call data and present our results in Section 6. Section 7 concludes this work.

## 2 Related work

Many early but also recent studies apply regression models to predict ambulance demand, investigating the influence of socio-economic variables, e.g., population, age, gender, race, land use, living conditions, education, employment, income, or marital status, see, e.g., Aldrich et al. (1971), Siler (1975), Lowthian et al. (2011), and Steins et al. (2019). Wong & Lin (2020) apply multivariate forward regression and show that EMS demand for specific patient groups, such as elderly or critical patients, is sensitive to weather conditions and changes throughout the day.

Another stream of literature focuses on time series models to predict ambulance demand. Baker & Fitzpatrick (1986) apply Winters' exponential smoothing (Winters, 1960) and present a goal programming approach to optimize the weightings given to different forecasting statistics. Tandberg et al. (1998) implement different time series models to predict hourly emergency incidents in Albuquerque and show that simple time series models can outperform more expensive, complex models. Channouf et al.
(2007) include different day-of-week, month-of-year, and special-day effects in their time series models. They show that considering historic call volumes of previous hours can improve the hourly forecast accuracy. Matteson et al. (2011) make predictions on an hourly level and introduce a factor model combined with time series models. They include time information such as the weekday and week. Vile et al. (2012) present a singular spectrum analysis to forecast emergency calls in Wales and show that their approach outperforms Holt-Winters and autoregressive integrated moving-average (ARIMA) models for long-term forecasts. Wong & Lai (2014) integrate weather data in an ARIMA model and show that the prediction error of the 7-day forecast can be reduced by 10% by incorporating weather data in the model. Gijo & Balakrishna (2016) introduce a Seasonal ARIMA model to predict the emergency call volume for a state in India. Nicoletta et al. (2016) introduce a Bayesian approach, modeling ambulance demand as a generalized linear mixed model. Results show that the time-of-day, holidays, and each zone's population, area, and type heavily influence the prediction's performance. Zhou et al. (2015) present a Gaussian mixture model to predict ambulance demand in Toronto, Canada, tackling data sparsity caused by fine temporal resolutions. Zhou & Matteson (2016) apply kernel density estimation and introduce a kernel warping approach, a form of Laplacian eigenmaps belonging to the field of manifold learning. Setzler et al. (2009) present a multilayer perceptron (MLP) including four categorical variables: time-of-day, day-of-week, month, and season. Chen et al. (2015) apply an ANN, moving average, sinusoidal regression, and support vector regression model for ambulance demand prediction. For each subregion, they select the best-performing model. Wang et al. (2021) include a heterogeneous multi-graph convolutional layer in a neural network to predict ambulance demand. To form the graph, they make use of dispatching areas. Jin et al. (2021) introduce a bipartite graph convolutional network to predict ambulance demand for region-hospital pairs. They include regional features and hospital features, e.g., capacity information.

In summary, many forecasting approaches exist to predict ambulance demand. These approaches include regression analyses, time series models, mixture models, ANNs, graph-based models, SVRs, and tree-based models. We provide a summary including the features considered in each study in Table 1 and refer to Reuter-Oppermann et al. (2017) for an overview. In the field of mobility demand forecasting, few approaches exist that base forecasting on CNNs, which have so far mostly been used for image classification, to implicitly capture spatio-temporal correlations (Wang et al., 2018; Guo et al., 2019). Although CNNs yield highly accurate predictions for such applications, they have not yet been applied to predict ambulance demand. Moreover, a framework that allows generic feature integration, efficient feature selection, and hyperparameter tuning for such architectures has been developed neither for general mobility demand nor for ambulance demand forecasting. To close this research gap, we introduce a novel CNN architecture for predicting ambulance demand that is able to generically incorporate external features, and we embed it in an efficient hyperparameter tuning framework.
## 3 Problem Setting

We divide the region for which we are making the forecast into \(q\times p\) subregions and aim at predicting the ambulance demand for all subregions for a time period \(t\). For each subregion, the ambulance demand can be represented as a discrete time series \(z\in\mathbb{R}^{T}\).

**Definition 3.1** (Time series).: _Let \(z\in\mathbb{R}^{T}\) be a discrete time series containing a sequence of \(T\) observations sampled from a random variable \(\mathcal{Z}\) at equidistant timesteps._

Based on each time series, conventional algorithms, e.g., exponential smoothing (Baker & Fitzpatrick, 1986) or ARIMA models (Tandberg et al., 1998), can predict the future ambulance demand for every subregion in an iterative fashion. However, such iterative approaches neglect spatial correlations between the time series. For this reason, we aim at generating an integrated forecast for all subregions simultaneously. To enable an integrated forecast, we represent the ambulance demand as a spatio-temporal time series, assuming a spatial correlation between the subregions' ambulance demands.

**Definition 3.2** (Spatio-temporal time series).: _Assuming a spatial correlation between the subregions' time series, we can represent the ambulance demand as a \(q\times p\) matrix of spatially correlated time series, each of length \(T\), which we denote by \(Z\in\mathbb{R}^{q\times p\times T}\)._

In addition, effectively predicting ambulance demand requires the incorporation of external features. In the existing literature, various external features have been considered, such as socio-economic or weather data. These external features strongly differ in their properties. For example, socio-economic data mostly differs across city districts but only changes gradually over time. In contrast, weather data can change quickly but does not strongly differ across city districts. To generically include external features, we distinguish four types:

**Three-dimensional input data.**: Let \(X^{\rm 3D}=\{X^{\rm 3D}_{i}\}_{i=1}^{n}\) be the three-dimensional features of our data set, where each data instance \(X^{\rm 3D}_{i}\in\mathbb{R}^{q\times p\times L}\) is a spatio-temporal time series representing the historic ambulance demand of \(L\) periods for all \(q\times p\) subregions.

**Two-dimensional input data.**: Let \(X^{\rm 2D}=\{X^{\rm 2D}_{i}\}_{i=1}^{n}\) be the two-dimensional features of our data set, where each data instance \(X^{\rm 2D}_{i}=\{x^{\rm 2D}_{i_{j}}\}_{j=1}^{m}\) is a set of \(m\) two-dimensional vectors, i.e., \(x^{\rm 2D}_{i_{j}}\in\mathbb{R}^{q\times p}\). We include two-dimensional data in the case that only the spatial dimension is considered, e.g., for representing events taking place in the predicted period.

**One-dimensional input data.**: Let \(X^{\rm 1D}=\{X^{\rm 1D}_{i}\}_{i=1}^{n}\) be the one-dimensional features of our data set, where each data instance \(X^{\rm 1D}_{i}=\{x^{\rm 1D}_{i_{j}}\}_{j=1}^{r}\) is a set of \(r\) one-dimensional vectors, i.e., \(x^{\rm 1D}_{i_{j}}\in\mathbb{R}^{l_{j}}\), where \(l_{j}\) is the length of vector \(j\). One-dimensional features are either time series such that \(l_{j}=L\), where \(L\) is the number of past periods to be considered, or one-hot-encoded information, e.g., for including time information such as the month, weekday, or hour.
**Scalar input data.**: Let \(X^{\rm S}=\{X^{\rm S}_{i}\}_{i=1}^{n}\) be the scalar features of our data set, where each data instance \(X^{\rm S}_{i}=\{x^{S}_{i_{j}}\}_{j=1}^{s}\) is a set of \(s\) scalar features, i.e., \(x^{\rm S}_{i_{j}}\in\mathbb{R}\). We include scalar inputs for data without spatial or temporal distribution which is not one-hot-encoded, e.g., the temperature prediction for period \(t\), for which we neglect the spatial distribution.

Let \((\mathbf{X},\mathbf{y})=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n}\) be our dataset. \(\mathbf{x}_{i}=\{X^{\rm 1D}_{i},X^{\rm 2D}_{i},X^{\rm 3D}_{i},X^{\rm S}_{i}\}\in\mathbb{R}^{v}\) represents a \(v\)-dimensional feature vector and \(y_{i}\in\mathbb{R}\) its respective dependent variable. Each data instance is independently sampled from an unknown distribution \((\mathcal{X},\mathcal{Y})\). Our model \(\mathcal{M}\) aims at learning a function \(F_{\mathcal{M}}:\mathcal{X}\rightarrow\mathcal{Y}\) that maps each input vector \(x_{i}\) to its associated variable \(y_{i}\). The four input types are illustrated by the array shapes sketched below.
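To make the four input types concrete, a short sketch of the array shapes of a single data instance follows; the grid and look-back match the case study of Section 5, while the remaining sizes are hypothetical.

```python
import numpy as np

q, p, L = 11, 6, 6   # grid and look-back from the case study in Section 5

# Three-dimensional input: historic demand for all subregions over L periods.
x_3d = np.zeros((q, p, L))

# Two-dimensional inputs: m spatial maps, e.g. event locations and sizes.
m = 2
x_2d = [np.zeros((q, p)) for _ in range(m)]

# One-dimensional inputs: time series of length L or one-hot vectors,
# e.g. a 7-dimensional weekday encoding (hypothetical sizes).
x_1d = [np.zeros(L), np.zeros(7)]

# Scalar inputs: e.g. the temperature prediction for period t.
x_s = [0.0]
```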
## 4 Methodology

We first introduce our CNN architecture in Section 4.1. Then, we describe our BO approaches to perform hyperparameter optimization and feature selection in Section 4.2.

### Convolutional Neural Network Architecture

To predict ambulance demand, we apply a novel CNN architecture. Equipping our neural network with convolutional layers enables the generation of feature maps, i.e., transformed images highlighting patterns identified in the input image. To form these feature maps, filters, i.e., weight matrices, traverse the input image with a defined stride, i.e., step size, and conduct convolutions to detect patterns that the filters have learned to identify. So far, CNNs have mainly been used to solve image classification tasks, in which convolutional layers detect spatial patterns in the images. We apply this concept to reveal spatio-temporal patterns within our historic ambulance demand. We further integrate external features through concatenation at different stages of the CNN. We visualize our CNN architecture in Figure 1 and provide a more detailed overview in Appendix A.1.

Figure 1: Visualization: CNN Architecture

We apply three-dimensional convolutional layers to detect spatio-temporal patterns in the historic ambulance demand, represented as a spatio-temporal time series \(X^{3D}\in\mathbb{R}^{q\times p\times L}\), where \(L\) is the number of past periods that we take into account. Each convolutional layer \(l\) contains \(M_{l}\) filters. The weight of filter \(h\) in layer \(l\) at position \((w_{1},w_{2},w_{3})\) is \(w_{h}^{l}(w_{1},w_{2},w_{3})\). We define the filter size by \(k^{l}=(k_{1}^{l},k_{2}^{l},k_{3}^{l})\) and apply a stride of \(s^{l}=(s_{1}^{l},s_{2}^{l},s_{3}^{l})\). We denote the activation function in layer \(l\) by \(\vartheta^{l}(\cdot)\) and the bias by \(b_{h}^{l}\). By applying zero padding, i.e., surrounding the input image with additional zero-valued pixels, we maintain the output's size throughout consecutive layers. The coordinates \((x_{1},x_{2},x_{3})\) uniquely define the values within each feature map, assuming zero-based indexing.

We calculate the number of parameters \(\phi_{l}\) learned for layer \(l\) by
\[\phi_{l}=(k_{1}^{l}k_{2}^{l}k_{3}^{l}M_{l-1}+1)M_{l}, \tag{4.1}\]
and derive the output of filter \(h\) in layer \(l\) by
\[\begin{split} o_{h}^{l}\left(x_{1},x_{2},x_{3}\right)=\vartheta^{l}\Bigg{(}\sum_{m=0}^{M_{l-1}-1}&\sum_{\begin{subarray}{c}i_{1}=0,\\ \iota(x_{1},i_{1},s_{1}^{l})\end{subarray}}^{\left\lfloor\frac{x_{1}}{s_{1}^{l}}\right\rfloor}\sum_{\begin{subarray}{c}i_{2}=0,\\ \iota(x_{2},i_{2},s_{2}^{l})\end{subarray}}^{\left\lfloor\frac{x_{2}}{s_{2}^{l}}\right\rfloor}\sum_{\begin{subarray}{c}i_{3}=0,\\ \iota(x_{3},i_{3},s_{3}^{l})\end{subarray}}^{\left\lfloor\frac{x_{3}}{s_{3}^{l}}\right\rfloor}o_{m}^{l-1}\left(\left\lfloor\frac{x_{1}}{s_{1}^{l}}\right\rfloor-i_{1},\left\lfloor\frac{x_{2}}{s_{2}^{l}}\right\rfloor-i_{2},\left\lfloor\frac{x_{3}}{s_{3}^{l}}\right\rfloor-i_{3}\right)\\ &\ast w_{h}^{l}\left(\iota\left(x_{1},i_{1},s_{1}\right),\iota\left(x_{2},i_{2},s_{2}\right),\iota\left(x_{3},i_{3},s_{3}\right)\right)+b_{h}^{l}\Bigg{)},\end{split} \tag{4.3}\]
where
\[\iota\left(x,i,s\right)=x-s\left(\left\lfloor\frac{x}{s}\right\rfloor-i\right). \tag{4.4}\]
After conducting the convolutions (4.2) and (4.3), we concatenate the generated feature maps. To enable this concatenation, we set the stride, padding, and filter sizes such that all feature maps are of shape \(q\times p\times L\). We then fuse the concatenated feature maps along the temporal dimension. We concatenate the temporal fusion's outputs, i.e., two-dimensional feature maps of shape \(q\times p\), with the two-dimensional input data. In the following, we add a locally connected layer to combine the identified patterns with the included inputs. Here, we substitute \(w_{h}^{l}\) by \(w_{hx_{1}x_{2}}^{l}\) and \(b_{h}^{l}\) by \(b_{hx_{1}x_{2}}^{l}\) to represent the weights and bias of filter \(h\) at position \((x_{1},x_{2})\). We compute the outputs as follows:
\[\begin{split} o_{h}^{l}\left(x_{1},x_{2}\right)=\vartheta^{l}\Bigg{(}\sum_{m=0}^{M_{l-1}-1}&\sum_{w_{1}=0}^{k_{1}-1}\sum_{w_{2}=0}^{k_{2}-1}o_{m}^{l-1}\left(x_{1}s_{1}^{l}+w_{1},x_{2}s_{2}^{l}+w_{2}\right)\\ &\ast w_{hx_{1}x_{2}}^{l}\left(w_{1},w_{2}\right)+b_{hx_{1}x_{2}}^{l}\Bigg{)}.\end{split} \tag{4.5}\]
For the locally connected layer \(l\), we train \(\phi_{l}\) parameters calculated by
\[\phi_{l}=(M_{l-1}+1)qp, \tag{4.6}\]
where \(qp\) is the number of output neurons. We then embed one-dimensional data and scalar inputs through concatenation. Finally, we add two dense layers to enable the network to learn from these inputs. Each dense layer \(l\) consists of \(N^{l}\) neurons, each denoted by \(v_{i}^{l}\) where \(i=1,...,N^{l}\). The activation function and bias for layer \(l\) are \(\vartheta^{l}\left(\cdot\right)\) and \(b^{l}\). We denote the weights between neurons \(v_{i}^{l}\) and \(v_{j}^{l-1}\) by \(w_{ij}^{l}\). Thus, we calculate the output \(a_{i}^{l}\) of neuron \(v_{i}^{l}\) as follows:
\[a_{i}^{l}=\vartheta^{l}\Big{(}\sum_{j=1}^{N^{l-1}}w_{ij}^{l}a_{j}^{l-1}+b^{l}\Big{)}. \tag{4.7}\]
We calculate the number of learned parameters for dense layer \(l\) by
\[\phi_{l}=(N^{l-1}+1)N^{l}. \tag{4.8}\]
In the last layer, we set the number of neurons to \(qp\) such that we can reshape its outputs into a \(q\times p\) heatmap, constituting the model's prediction. A compact sketch of this architecture is given below.
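The following Keras sketch illustrates the pipeline. It is a minimal illustration, not the tuned configuration of Section 4.2: the filter counts, feature sizes, and activation choices are hypothetical, and only one exemplary input of each dimensionality is wired in.

```python
import tensorflow as tf
from tensorflow.keras import layers

q, p, L = 11, 6, 6          # grid and look-back from the case study
n_events, n_onehot = 2, 43  # hypothetical external-feature sizes

# 3D input: historic demand heatmaps of shape (q, p, L, 1).
x3d = layers.Input(shape=(q, p, L, 1))
h = layers.Conv3D(16, kernel_size=3, padding="same", activation="relu")(x3d)
h = layers.Conv3D(8, kernel_size=3, padding="same", activation="relu")(h)
# Temporal fusion: collapse the L axis to obtain a q x p feature map.
h = layers.Conv3D(1, kernel_size=(1, 1, L), padding="valid", activation="relu")(h)
h = layers.Reshape((q, p, 1))(h)

# 2D input: e.g. event locations and sizes for the predicted period.
x2d = layers.Input(shape=(q, p, n_events))
h = layers.Concatenate(axis=-1)([h, x2d])
h = layers.LocallyConnected2D(4, kernel_size=1, activation="relu")(h)
h = layers.Flatten()(h)

# 1D / scalar inputs: one-hot time information, weather values, ...
x1d = layers.Input(shape=(n_onehot,))
h = layers.Concatenate()([h, x1d])
h = layers.Dense(128, activation="relu")(h)
out = layers.Dense(q * p)(h)        # one output neuron per subregion
out = layers.Reshape((q, p))(out)   # prediction heatmap

model = tf.keras.Model([x3d, x2d, x1d], out)
model.compile(optimizer="adam", loss="mse")
```

Note that `LocallyConnected2D` is available in TensorFlow 2.10, the version stated in Section 5.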
### Hyperparameter Optimization and Feature Selection

To provide a lean model and to improve the model's performance, we conduct feature selection and tune the model's hyperparameters. Adding or eliminating features may change the network architecture, as some input layers are only added when including certain external information. Thus, optimizing hyperparameters before conducting feature selection may lead to a suboptimal choice of parameters, e.g., the number of filters and their sizes may depend on the features selected. Similarly, conducting feature selection before hyperparameter tuning may lead to suboptimal results, since the selection process may be based on a network with insufficiently tuned hyperparameters. For this reason, we aim at utilizing an intrinsic feature selection method, tuning the CNN's hyperparameters and conducting feature selection simultaneously. Therefore, we treat the decision of whether to include a feature or not as an additional hyperparameter of the prediction model and include these parameters in the hyperparameter tuning process.

Pursuing such an approach entails the following challenges: First, the model has numerical and categorical hyperparameters that may depend on each other, e.g., adding layers requires their hyperparameters to be tuned; vice versa, when removing layers, we can neglect their hyperparameters in the tuning process. Second, conducting feature selection for \(N\) features increases the search space by a factor of \(2^{N}\), resulting in a high-dimensional search space. Third, evaluating the predictive model is computationally expensive. Accordingly, common hyperparameter optimization strategies, such as manual or grid search, are unsuitable for adequately covering the search space in an acceptable amount of computational time. Fourth, the stochastic nature of the training process entails noise, as similar hyperparameter values can result in different function values. Finally, derivatives of the underlying target function are not given, and multiple local optima may exist due to the target function's non-convexity. To tackle these challenges and to learn from the information gained by previously evaluated parameter combinations, we apply BO.

**Bayesian Optimization.** BO is a sequential model-based optimization (SMBO) approach that is used to optimize the hyperparameters of a black-box function \(f\left(\cdot\right)\) which is expensive to evaluate. We leverage a surrogate model that approximates the target function to bypass its expensive evaluation. In each iteration of the BO, we update the surrogate model based on our previous observations. Then, we optimize an acquisition function utilizing the surrogate model, which derives the next promising hyperparameter setting to be evaluated. We denote the \(J\) hyperparameters by \(\theta=(\lambda_{1},\lambda_{2},\lambda_{3},...,\lambda_{J})\) and the search space by \(\Theta=\Lambda_{1}\times\Lambda_{2}\times\Lambda_{3}\times...\times\Lambda_{J}\). Then, we aim to solve the following optimization problem:
\[\theta^{*}=\arg\min_{\theta\in\Theta}f\left(\theta\right). \tag{4.9}\]
As the target function \(f\left(\theta\right)\) is computationally expensive to evaluate, we model this function via the surrogate model \(\mathcal{M}\). As surrogates, we apply Gaussian processes (Snoek et al., 2012), random forests (Hutter et al., 2011), and extremely randomized trees (Geurts et al., 2006).
In each iteration \(\iota\) of the BO, we optimize the acquisition function \(\mathcal{A}\) to derive a new promising parameter setting as follows:
\[\theta_{\iota}\leftarrow\arg\max_{\theta\in\Theta}\mathcal{A}(\theta). \tag{4.10}\]
Here, we apply an _Expected Improvement_ strategy, enabling a trade-off between exploration and exploitation: We define the best (minimal) observed function value of \(n\) iterations by \(f^{*}=\min(f(\theta_{1}),...,f(\theta_{n}))\). When sampling from \(f\) at a new, unknown point, we treat its realization \(Y\) as a normally distributed variable \(Y\sim\mathcal{N}(\mu,\sigma^{2})\). Thus, the expected improvement at point \(\theta\) is
\[\mathbb{E}[\mathbf{I}(\theta)]=\mathbb{E}[\max(f^{*}-Y,0)]=(f^{*}-\mu)\Phi\left(\frac{f^{*}-\mu}{\sigma}\right)+\sigma\phi\left(\frac{f^{*}-\mu}{\sigma}\right), \tag{4.11}\]
where \(\phi(\cdot)\) and \(\Phi(\cdot)\) are the standard normal density and distribution functions, correspondingly (Jones et al., 1998).

We present a basic pseudo-code for BO in Algorithm 1. After initializing the set of observations \(R=\{(\theta_{1},f(\theta_{1})),(\theta_{2},f(\theta_{2})),...,(\theta_{m},f(\theta_{m}))\}\) via \(m\) iterations of random search (l. 3), we conduct \(n\) iterations of BO. In each iteration \(\iota\), we update the surrogate model and optimize the acquisition function \(\mathcal{A}\) (l. 6). We then evaluate the target function \(f\) at the next promising point \(\theta_{\iota}\) and update the set of results \(R\) with our new observation (l. 7). Finally, we return the incumbent parameter setting \(\theta^{*}\) (l. 9). A minimal implementation of the expected improvement criterion is sketched below.
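Eq.(4.11) translates directly into code; a minimal sketch for the minimization setting:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Expected improvement of Eq.(4.11) for minimization:
    EI = (f* - mu) Phi(z) + sigma phi(z), with z = (f* - mu) / sigma."""
    sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```

Here, `mu` and `sigma` are the surrogate model's posterior mean and standard deviation at candidate points, and `f_best` is the incumbent value \(f^{*}\).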
**High-Dimensional Bayesian Optimization.** High-dimensional search spaces challenge existing BO methods. Although classic approaches are still applied, e.g., random search (Bergstra & Bengio, 2012) or BO with random forests (Hutter et al., 2011), more advanced methods have been developed to tackle high-dimensional search spaces. For example, Wang et al. (2013) make use of low effective dimensions, i.e., dimensions which do not significantly change the objective function, to reduce the search space. However, assuming that only a subset of dimensions is effective may not be possible, e.g., if all dimensions highly influence the objective. To handle problems with high effective dimensionality, Li et al. (2018) introduce the concept of random dimension dropout, which optimizes the acquisition function only over a subset of dimensions. In this study, we implement two different approaches. First, we build upon Li et al. (2018) and apply BO with dimension dropout. Second, we present a novel hierarchical BO to reduce the search space and optimize hyperparameters sequentially. For comparison, we refer to the previously introduced BO approaches without dimension dropout as _basic_ BO approaches.

**A. Bayesian Optimization with Dimension Dropout.** Based on Li et al. (2018), we implement BO with dimension dropout as detailed in Algorithm 2. First, we derive \(m\) initial points via random search and add them to the set of observations \(R=\{(\theta_{1},f(\theta_{1})),(\theta_{2},f(\theta_{2})),...,(\theta_{m},f(\theta_{m}))\}\) (l. 3). Then, we conduct \(n\) iterations of BO with dimension dropout. In each iteration \(\iota\), we randomly draw \(d\) dimensions out of the problem's search space dimensions \(\mathcal{D}\) and refer to the subset of drawn dimensions as \(\mathcal{D}^{\prime}\). We ensure that \(d=|\mathcal{D}^{\prime}|<|\mathcal{D}|\). To dynamically adapt \(d\) in accordance with the total number of dimensions, we calculate \(d\) by \(d=\left\lfloor\tilde{d}\,|\mathcal{D}|\right\rfloor\), where \(\tilde{d}\in[0,1]\) (l. 6). We denote the parameters of the drawn dimensions by \(\theta^{[\mathcal{D}^{\prime}]}\) and refer to the remaining parameters by \(\theta^{[\mathcal{D}\setminus\mathcal{D}^{\prime}]}\). Similarly, we denote the search space for the drawn dimensions by \(\Theta^{[\mathcal{D}^{\prime}]}\) and the best-observed values by \(\theta^{*}=(\theta^{*})^{[\mathcal{D}^{\prime}]}\cup(\theta^{*})^{[\mathcal{D}\setminus\mathcal{D}^{\prime}]}\). We apply a Gaussian process surrogate model and optimize the acquisition function considering only the drawn dimensions (l. 7) as follows:
\[\theta_{\iota}^{[\mathcal{D}^{\prime}]}\leftarrow\arg\max_{\theta^{[\mathcal{D}^{\prime}]}\in\Theta^{[\mathcal{D}^{\prime}]}}\mathcal{A}(\theta^{[\mathcal{D}^{\prime}]}). \tag{4.12}\]
We select the values for the remaining parameters, \(\theta_{\iota}^{[\mathcal{D}\setminus\mathcal{D}^{\prime}]}\), by applying a _Dropout-Mix_ strategy (Li et al., 2018): With a probability of \(p\), we randomly draw the values for the remaining parameters from their respective domains. Otherwise, with a probability of \((1-p)\), the parameter values yielding the best observed function value are copied for the left-out dimensions, i.e., \(\theta_{\iota}^{[\mathcal{D}\setminus\mathcal{D}^{\prime}]}=(\theta^{*})^{[\mathcal{D}\setminus\mathcal{D}^{\prime}]}\) (ll. 8-13). We evaluate the target function and update the set of observations \(R\) and the incumbent parameters (ll. 14-16). Finally, we return the incumbent parameter setting \(\theta^{*}\) (l. 18). A sketch of one such dropout-mix iteration follows below, with Algorithm 2 giving the full pseudo-code.
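A minimal sketch of one iteration (ll. 6-14 of Algorithm 2, reproduced next) is given below; `optimize_acq` and `sample` are assumed helper functions for maximizing the acquisition over a sub-space and for drawing a random value from a dimension's domain.

```python
import random

def dropout_mix_step(space, incumbent, d_frac, p_fill, optimize_acq, sample):
    """One iteration of Algorithm 2 (ll. 6-14): optimize the acquisition
    over a random subset of dimensions and fill in the rest.
    `optimize_acq` maximizes the acquisition function over the given
    sub-space and returns a dict of values for those dimensions;
    `sample` draws a random value from one dimension's domain."""
    dims = list(space.keys())
    d = max(1, int(d_frac * len(dims)))
    drawn = set(random.sample(dims, d))                 # subset D'

    theta = optimize_acq({k: space[k] for k in drawn})  # Eq. (4.12)
    for k in dims:
        if k not in drawn:                              # left-out dimensions
            if random.random() < p_fill:
                theta[k] = sample(space[k])             # random fill
            else:
                theta[k] = incumbent[k]                 # copy incumbent value
    return theta
```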
```
1:  Given: target function \(f(\cdot)\), domain \(\Theta\), number of iterations \(n\), number of initial points \(m\), acquisition function \(\mathcal{A}\), observed results \(R=\emptyset\), search dimensions \(\mathcal{D}\), percentage share of search dimensions to be drawn \(\tilde{d}\), probability for fill-up strategy \(p\)
2:  Output: Incumbent parameter setting \(\theta^{*}\)
3:  \(R=\{(\theta_{1},f(\theta_{1})),...,(\theta_{m},f(\theta_{m}))\}\leftarrow\text{RandomSearch}(f(\cdot),\Theta,m)\)
4:  \(\iota\gets m\)
5:  for \(\iota=\iota+1\) to \(\iota+n\) do
6:    Randomly select a subset of dimensions \(\mathcal{D}^{\prime}\) such that \(|\mathcal{D}^{\prime}|=\left\lfloor\tilde{d}*|\mathcal{D}|\right\rfloor\)
7:    \(\theta_{\iota}^{[\mathcal{D}^{\prime}]}\leftarrow\arg\max_{\theta^{[\mathcal{D}^{\prime}]}\in\Theta^{[\mathcal{D}^{\prime}]}}\mathcal{A}(\theta^{[\mathcal{D}^{\prime}]})\)
8:    \(q\leftarrow\) random number between 0 and 1
9:    if \(q<p\) then
10:     \(\theta_{\iota}^{[\mathcal{D}\setminus\mathcal{D}^{\prime}]}\leftarrow\) random values within domain \(\Theta^{[\mathcal{D}\setminus\mathcal{D}^{\prime}]}\)
11:   else
12:     \(\theta_{\iota}^{[\mathcal{D}\setminus\mathcal{D}^{\prime}]}=(\theta^{*})^{[\mathcal{D}\setminus\mathcal{D}^{\prime}]}\)
13:   endif
14:   \(\theta_{\iota}\leftarrow\theta_{\iota}^{[\mathcal{D}^{\prime}]}\cup\theta_{\iota}^{[\mathcal{D}\setminus\mathcal{D}^{\prime}]}\)
15:   \(R=R\cup(\theta_{\iota},f(\theta_{\iota}))\)
16:   \(\theta^{*}=\arg\min_{\iota^{\prime}\leq\iota}f(\theta_{\iota^{\prime}})\)
17:  endfor
18:  return \(\theta^{*}\)
```
**Algorithm 2** Bayesian Optimization with Dimension Dropout

**B. Hierarchical Bayesian Optimization.** We further present a hierarchical approach to apply BO in high-dimensional problem settings by decomposing the search space. Here, we optimize sets of hyperparameters sequentially, enabling the application of distinct surrogate models for each set of hyperparameters. Further, we decompose high-dimensional search spaces which exceed the capabilities of basic BO approaches, such that the resulting sub-problems can be solved via BO methods for low-dimensional search spaces. We show the pseudo-code for the hierarchical approach in Algorithm 3.

As a basis, we assign the search space dimensions to disjoint sets \(k=1,...,K\). We express the assignment of search dimension \(\Lambda_{j}\) to set \(k\) with a binary variable \(\gamma_{\Lambda_{j}k}\):
\[\gamma_{\Lambda_{j}k}=\begin{cases}1&\text{if $\Lambda_{j}$ is assigned to set $k$}\\ 0&\text{otherwise}\end{cases}\quad\forall k=1,...,K,\ \Lambda_{j}\in\Theta,\ j=1,...,J. \tag{4.13}\]
We assign each search dimension \(\Lambda_{j}\) to exactly one set such that
\[\sum_{k=1}^{K}\gamma_{\Lambda_{j}k}=1\ \ \forall\Lambda_{j}\in\Theta,\ j=1,...,J. \tag{4.14}\]
First, we initialize the hyperparameters by applying random search (ll. 3-5). Second, we apply BO for all hyperparameter sets \(k=1,...,K\) sequentially. Thus, we determine the search space of set \(k\) and refer to the parameters optimized in set \(k\) by \(\theta^{[k]}\) (l. 7). We fix the remaining parameters, denoted as \((\theta^{*})^{[K\setminus k]}\), by copying the values of the incumbent parameter setting \(\theta^{*}\) (l. 8). Before conducting BO for set \(k\), we determine \(m^{[k]}\) initial points via random search (l. 9). Before each function evaluation, we merge the varied parameters' values \(\theta^{[k]}\) with the fixed parameters' values \((\theta^{*})^{[K\setminus k]}\). We conduct \(n^{[k]}\) iterations of BO for set \(k\) (ll. 11-15). After optimizing the hyperparameters of set \(k\), we continue with the next set until all sets of parameters have been optimized.

## 5 Case Study: Ambulance Demand Prediction for Seattle

We conduct a numerical case study for Seattle's 911 call data\({}^{1}\), considering incidents of the category _Medic Response_, i.e., incidents requiring paramedical staff qualified for Advanced Life Support, during 2 years (2020-2021). We divide this period into 8-hour intervals, corresponding to commonly applied 8-hour shifts in EMSs. By including the ambulance demand of previous periods, we consider local, short-term dynamics such as local virus outbreaks during the Covid-19 pandemic.
We divide Seattle into a grid of \(11\times 6\) subregions, each with a dimension of approximately \(2.5\times 2.5\) km, and apply a look-back \(L\) of 6 periods, i.e., we take into account the ambulance demand of the two preceding days.

**Features.** We base our feature choice on the literature review in Section 2 and focus on features that can only hardly be learned with our plain CNN architecture. We neglect infrastructural and demographic information, e.g., the mobility infrastructure or the age distribution among the population, as we only conduct short-term forecasts and these variables only change gradually over time. Since the weather changes dynamically, we include the following data from _Weather Underground_\({}^{2}\): temperature, wind speed, humidity, dew point, sea level pressure, and precipitation. Except for the precipitation, we consider the daily minimum, maximum, and average for all weather features. As some of these inputs may be correlated, we conduct a correlation analysis and eliminate features with a correlation of \(>80\%\). We present the results of the correlation analysis in Appendix A.2. We assume that the spatial distribution of weather data is negligible for our case study, since the area is sufficiently small to assume that the measurements of different weather stations within this area are highly correlated. For larger areas, our generic approach enables the inclusion of two- or three-dimensional weather data, e.g., for (spatial) weather predictions or historic (spatio-temporal) weather data, correspondingly.

Footnote 2: [https://www.wunderground.com](https://www.wunderground.com)

However, Wong & Lin (2020) show that temporal temperature changes influence people's health conditions. Also, age, income, sex, and health conditions influence people's sensitivity to weather conditions. Thus, the impact of temporal weather changes may vary between subregions. Therefore, we perform transposed convolution on the historic one-dimensional weather data to learn its spatial impacts, while the distribution of relevant socio-economic factors can be implicitly learned by the model.

Many studies include special-day effects for periods with an anomalous ambulance demand, e.g., during school holidays (Vile et al., 2012) or New Year's Day (Channouf et al., 2007). To consider such effects, we take public holidays, school holidays, and events\({}^{3}\) into account. We represent holidays as binary features. As events take place at different locations across the depicted area, we include their locations on the two-dimensional, spatial level. We additionally consider the number of expected participants at each event to account for the events' sizes.

Footnote 3: [https://data.seattle.gov/](https://data.seattle.gov/)

We further include time information, as Channouf et al. (2007) show that the day-of-week, month-of-year, and hour-of-day impact ambulance demand. We embed this information as one-hot-encoded vectors. To increase the models' performance, we scale the input data such that all values range within \([0,1]\).

**Benchmarks.** The developed CNN serves as a representative example of an integrated spatio-temporal ANN architecture. To evaluate the numerical benefits of incorporating spatio-temporal dependencies in such architectures, we compare this approach to an iterative ANN architecture, applying an MLP as an exemplary model. MLPs are fully connected feedforward ANNs commonly applied for forecasting tasks, also in the domain of ambulance demand prediction (Setzler et al., 2009; Chen et al., 2015).
**Benchmarks.** The developed CNN serves as a representative example of an integrated spatio-temporal ANN architecture. To evaluate the numerical benefits of incorporating spatio-temporal dependencies in such architectures, we compare this approach to an iterative ANN architecture, applying an MLP as an exemplary model. MLPs are fully connected feedforward ANNs commonly applied for forecasting tasks, also in the domain of ambulance demand prediction (Setzler et al., 2009; Chen et al., 2015). We refer to Equation 4.7 for a definition of a fully connected architecture.

We further aim at evaluating the performance of the proposed intrinsic feature selection algorithm, which integrates feature selection in the hyperparameter tuning process of the corresponding ANN architectures. For this evaluation, we compare our approach to other commonly applied intrinsic feature selection methods that are able to solve prediction tasks. Decision trees are one approach widely applied for regression tasks; they provide valuable insights into features' importance, which makes outcomes interpretable for practitioners. However, decision trees are prone to overfitting, especially for high-dimensional search spaces, as there is mostly an insufficient number of samples for each parameter value. To apply a model more robust to overfitting, we additionally apply random forests, i.e., ensembles of multiple decision trees based on randomly selected data samples. The average prediction generated by the individual decision trees constitutes the random forest's prediction.

In line with Setzler et al. (2009), Zhou et al. (2015) and Zhou & Matteson (2016), we further compare the performance of the developed CNN to a common industry practice, i.e., the Medic method. This method takes as prediction the average ambulance demand of similar time periods of the past 4 weeks, i.e., the historic ambulance demand associated with the same weekday and time, of the current and preceding years. To enable a fair comparison to the ANNs trained on data from 2020-2021, we consider the same data range for the Medic method.

**Hyperparameter Tuning.** As the hyperparameter choices for decision trees and random forests are limited, we apply grid search for tuning. However, for the CNN and MLP, we optimize the hyperparameters by applying i) random search, ii) BO with Gaussian process, random forest, and extremely randomized tree surrogate models, iii) BO with dimension dropout, and iv) the novel hierarchical BO with Gaussian process, random forest, and extremely randomized tree surrogate models. First, we perform these approaches without feature selection, and second, including feature selection. We present the hyperparameters and their domains in Appendix A.3. We further show the set assignments for the hierarchical BO.

We determine the order in which we optimize the hyperparameter sets based on two criteria: the degree to which preliminary experiments can yield adequate values, and the size of the search space. The better we can determine adequate hyperparameter values, the later we optimize them. This ensures that we optimize the hyperparameters that are more complex to determine on an appropriate basis, i.e., the hyperparameters that are currently not optimized are set to well-performing values which were either determined by applying random search for initialization, or optimized at a previous level. Later, we can fine-tune the hyperparameters we fixed in the first place. For our application, we distinguish three levels; at each level, one parameter set is tuned. The first level focuses on the main architecture decisions, such as the number of layers and filters per layer. The second level tunes the activation functions. The third level controls the regularization mechanisms to avoid overfitting and decides on the batch size, learning rate, and optimizer. In the case of incorporating feature selection, we include an additional binary hyperparameter for each feature, stating whether the feature is included (1) or not (0).
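A minimal sketch of such a search space with scikit-optimize (which the study later reports using), where each feature receives a binary inclusion flag next to the architectural hyperparameters; the feature names, the architecture dimensions, and the placeholder score are illustrative.

```python
# Encoding per-feature inclusion as binary hyperparameters next to
# architectural ones; the objective body is a stand-in for train-and-validate.
from skopt import gp_minimize
from skopt.space import Categorical, Integer

features = ["temp_min", "humidity_avg", "precipitation", "events", "weekday"]
space = (
    [Integer(1, 4, name="n_conv_layers"), Integer(8, 64, name="n_filters")]
    + [Categorical([0, 1], name=f"use_{f}") for f in features]
)

def objective(params):
    n_layers, n_filters, *mask = params
    selected = [f for f, keep in zip(features, mask) if keep]
    # ... here one would build the CNN on `selected`, train it, and return
    # the validation MSE; the line below is only a placeholder score ...
    return float(n_layers + n_filters / 64 + len(selected))

res = gp_minimize(objective, space, n_calls=20, random_state=0)
print(res.x)  # incumbent architecture plus the chosen feature mask
```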
In the hierarchical BO, we therefore include an additional level for these decisions. We optimize this level first, such that we base the choice of the remaining hyperparameters tuning the model on the derived features.

**Training.** We train our model on 60% of the data and split the remaining data into a test and a validation set, each containing 20% of the data. We apply a mean-squared-error (MSE) loss function and train the model for 300 epochs. In the case that the validation loss does not improve for 20 epochs, we abort the training process and measure the MSE on the test set. We initialize the hyperparameters of the hierarchical BO based on 500 iterations of random search. To enable a comparison between the hierarchical approach and the remaining methods, we initialize them equally. After initialization, we execute 1000 iterations. As the number of parameters optimized at each level of the hierarchical BO is limited, we reduce the number of iterations to 250, applying random search for the first 25 iterations per hyperparameter set. We implement the algorithms in Python 3.8 using TensorFlow 2.10.1. We perform BO using Scikit-Optimize 0.9.0, which we adapt to apply dimension dropout. We train the CNNs on an NVIDIA A100 GPU with 80 GB RAM. The MLPs are trained with an Intel(R) Xeon(R) processor E5-2697 v3 with 15.7 GB RAM.
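The training protocol just described maps onto a few lines of Keras; the toy model and random arrays below are placeholders for the tuned CNN and the prepared input tensors, while the split sizes, epoch budget, and patience mirror the text.

```python
# 60/20/20 split, MSE loss, up to 300 epochs, early stopping after 20
# stagnant validation epochs, then evaluation on the held-out test set.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16)).astype("float32")
y = rng.normal(size=(1000, 1)).astype("float32")
X_tr, X_val, X_te = X[:600], X[600:800], X[800:]
y_tr, y_val, y_te = y[:600], y[600:800], y[800:]

model = tf.keras.Sequential([tf.keras.layers.Dense(32, activation="relu"),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=20,
                                              restore_best_weights=True)
model.fit(X_tr, y_tr, validation_data=(X_val, y_val),
          epochs=300, callbacks=[early_stop], verbose=0)
print("test MSE:", model.evaluate(X_te, y_te, verbose=0))
```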
## 6 Experimental Results

We first compare the performance of the developed CNN architecture against its benchmarks and show the results we obtain for the different hyperparameter tuning approaches. Second, we analyze the selected features when conducting feature selection and calculate the contribution of each feature to the prediction. Third, we investigate the CNN's performance for different forecasting time intervals.

### Model and Hyperparameter Tuning Performances

In Table 2 we compare the MSE of the developed CNN architecture and its benchmark models, i.e., the Medic method, MLPs, decision trees, and random forests. The learning-based approaches were tuned with the introduced hyperparameter optimization approaches and initialized equally with random search. As can be seen, the incumbent CNN achieves a 9.83% lower MSE than the best MLP. The CNN further outperforms the Medic method, decision tree, and random forest by 9.98%, 14.84%, and 11.26%, respectively. As expected, we see that the decision tree generalizes badly to unseen data, leading to an incumbent tree with a depth of five. The incumbent random forest has a depth of eight. We visualize their tuning results in Appendix A.3. Although the random forest performs better on new data, it still remains inferior to the ANNs. Both tree-based approaches are unable to identify relevant patterns due to their limited depth and the resulting negligence of relationships among external features. Although the Medic method bases its prediction solely on historic ambulance demand, neglecting all external features, it still outperforms the tree-based approaches and yields comparable results to the MLP. Contrarily, the CNN benefits from its ability to detect spatio-temporal patterns across subregions within historic ambulance demand and combines these patterns with external information. This ability also encourages the use of the presented feature selection approach, which enables the application of the developed CNN instead of the intrinsic benchmarks, i.e., decision trees and random forests. The advantage of the introduced intrinsic approach is the exchangeability of the model: we are not bound to an intrinsic tree-based approach to make the prediction. Instead, we can apply the developed CNN, resulting in the lowest MSE. We further observe that retaining less than 50% of the features in the CNN reduces the number of trainable parameters by 40.4% and only slightly increases the MSE by less than 0.05%. For practitioners, this performance decrease remains negligible, and a leaner model is easier to maintain, as less input data must be collected, validated, and pre-processed.

**Result 1**.: _The CNN architecture outperforms the incumbent MLP, Medic method, random forest, and decision tree by 9.83%, 9.98%, 11.26%, and 14.84%, correspondingly._

**Result 2**.: _The tree-based approaches badly generalize to unseen data, resulting in shallow trees which are unable to identify underlying patterns._

**Result 3**.: _Incorporating feature selection in the hyperparameter tuning process reduces the number of trainable parameters by 40.4% while the performance decrease of less than 0.05% remains negligible._

When including feature selection decisions in the hyperparameter tuning process, we integrate a binary variable for each feature selection decision. This exponentially increases the search space. However, it enables the model to simultaneously decide on features and make hyperparameter decisions. This is beneficial, as the CNN's architecture, e.g., the number of layers in the model, depends on the selected features. Therefore, we evaluate the influence of including feature selection in the hyperparameter tuning process. We compare the convergence of the different tuning approaches with and without feature selection for the CNN in Figures 2 and 3. Based on the hyperparameters determined by the corresponding tuning approach in each iteration, we show the MSE achieved on the test set after training the model.

Table 2: **Comparison of MSE (*incl. 500 iterations random search for initialization) [\(\times\)100].** We compare the MSE of the convolutional neural networks (CNNs), multilayer perceptrons (MLPs), decision trees (DTs), random forests (RFs) and the Medic method. For hyperparameter tuning, we apply Bayesian optimization (BO) with Gaussian process (GP), RF and extremely randomized tree (ET) surrogate models. Hierarchical BO is applied for sets \(k\in\{1,2,3\}\); we add set k=0 when conducting feature selection. We train the model on 60% of the data and apply a validation and test set, each containing 20% of the data. The incumbent values per model are:

| Model | Incumbent MSE [\(\times\)100] |
| --- | --- |
| CNN | 14.66 |
| MLP | 16.26 |
| RF | 16.52 |
| DT | 17.22 |
| Medic | 16.29 |
In Figure 2 we neglect feature selection in the hyperparameter tuning process. Here, the basic BO can only slightly decrease the MSE in 1000 iterations when utilizing a Gaussian process surrogate model. In contrast, BO with dimension dropout and the hierarchical BO can better improve the MSE. In Figure 3 we include feature selection decisions. Here, the basic BO can only slightly decrease the MSE in 1000 iterations when utilizing a random forest surrogate model. For including feature selection, we add a binary variable for each selection decision, which a tree-based surrogate can better optimize than Gaussian processes. However, for both cases, i.e., with and without feature selection, the BO with dimension dropout and the hierarchical BO are superior in tackling the high-dimensional search space.

**Result 4**.: _The BO with dimension dropout and the hierarchical BO outperform basic BO approaches and random search._

Figure 2: Convergence of the hyperparameter tuning approaches of the CNN without Feature Selection (after applying 500 iterations random search for initialization)

### Features' Importance

To further investigate the reasons for the CNN's superiority, we analyze the importance of the external features for the different models. We base this analysis on the incumbent models derived in Section 6.1 for 8-hour time-intervals. Here, we distinguish two cases: first, when including feature selection in the hyperparameter tuning approach, we present the selected features in Table 3. Second, when excluding feature selection in the hyperparameter tuning approach, we keep all features and present their importance in Figure 4. Since the decision tree always conducts feature selection, we only present its selected features. As the random forest is an ensemble of decision trees with randomly selected features, we only present its features' importance.

**Selected Features.** Table 3 shows that the amount and type of features selected by the ANNs and decision tree strongly vary. The tuning process of the decision tree reveals that the tree overfits for high depths, i.e., we observe that the model's performance decreases on unseen data when increasing the tree's depth beyond five, visualized in Appendix A.3. We further see that the decision nodes' conditions made at depths below four are exclusively based on historic ambulance demand. Only in the final splits can weather features, e.g., the maximum wind speed of past periods, or the time-of-day be decisive. As these external features are only used by decision nodes preceding the leaf nodes, the model cannot identify relevant patterns among these features.
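A minimal sketch of the depth sweep behind this overfitting observation, with synthetic regression data standing in for the ambulance-demand features; the split sizes and depth range are illustrative.

```python
# Validation MSE rises again once the tree depth grows past its sweet spot.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = X[:, 0] * 1.5 + np.sin(X[:, 1]) + rng.normal(scale=0.5, size=2000)
X_tr, X_val, y_tr, y_val = X[:1200], X[1200:], y[:1200], y[1200:]

for depth in range(2, 12):
    tree = DecisionTreeRegressor(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    val_mse = mean_squared_error(y_val, tree.predict(X_val))
    print(f"depth={depth:2d}  validation MSE={val_mse:.3f}")
```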
Figure 3: Convergence of the hyperparameter tuning approaches of the CNN with Feature Selection (after applying 500 iterations random search for initialization)

Table 3: Features selected by each model, distinguishing historic data from periods \([t-L,t-1]\) and data for the predicted period \(t\).

**Result 5**.: _The decision tree only uses few external features in decision nodes preceding the leaf nodes, preventing it from identifying patterns among these features._

In contrast to the decision tree, the ANNs combine information from multiple external features. Table 3 shows that although the features selected by the CNN and MLP differ, they mostly include similar types of information; while one model selects a feature with predicted data, the other model includes the same type of information but includes the feature with the corresponding historic data, and vice versa. For example, both models include information about the sea level pressure and humidity. This is in line with many studies showing the effect of humidity and air pressure on people's health conditions influencing ambulance demand, e.g., the risk for cardiovascular diseases (Ou et al. 2014, Borghei et al. 2020). In addition, both models include the hour and weekday, which confirms their relevance observed in earlier studies (Setzler et al. 2009, Matteson et al. 2011). Simultaneously, some features are neglected by all models. As we include historic ambulance demand of previous periods, some features whose values remain unchanged for several periods can be indirectly learned by the model. For this reason, the month and school holidays, mostly lasting several days, are neglected.

**Result 6**.: _The CNN and MLP select common types of information, such as weather data that has already been identified in medical studies to harm people's health conditions. In addition, in line with earlier studies, our results confirm the relevance of considering time information, i.e., the time-of-day and weekday._

**Features' Importance.** In Figure 4 we present the features' importance derived for the case of retaining all features during the tuning process. To express the importance, we calculate the SHAP values using shap 0.41.0 (Lundberg & Lee 2017); a short code sketch follows at the end of this subsection. We determine these SHAP values for 50 randomly selected time periods covering all subregions from the test set. Each value allows us to quantify the feature's contribution to the corresponding prediction compared to the base value. We calculate the average prediction output over 300 random time periods over all subregions from the training set as a base value.

Figure 4: Features' Importance: SHAP values

While historical data highly contributes to the prediction of the MLP and random forest, the CNN's output is mainly influenced by data characterizing the period to be predicted. This indicates that the CNN is more robust to changes in the upsampled and three-dimensional input data. Contrarily, the MLP and random forest are more sensitive to such changes, since they conduct local forecasts for each subregion separately. Thus, changes in the historical data of the corresponding subregion strongly influence the forecast. In contrast, the CNN considers the historic data of all subregions simultaneously. Thus, it is more robust to changes in the historic data of a single subregion, which is advantageous, e.g., for handling demand outliers or data inconsistencies.

**Result 7**.: _The features' importance strongly differs between the MLP, random forest, and CNN. The CNN is more robust against changes in the upsampled data and historic ambulance demand._
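A minimal sketch of this SHAP computation, with a random-forest regressor on synthetic data standing in for the trained networks; the sample sizes loosely mirror the text (a training-based background for the base value, attributions for 50 held-out periods).

```python
# SHAP attributions: one contribution per (sample, feature) relative to the
# base value implied by the background data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
y = X[:, 0] * 2 + rng.normal(scale=0.1, size=400)
model = RandomForestRegressor(random_state=0).fit(X[:300], y[:300])

explainer = shap.KernelExplainer(model.predict, shap.sample(X[:300], 100))
shap_values = explainer.shap_values(X[300:350])
print(shap_values.shape)  # (50, 6)
```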
### Sensitivity Analysis: Time Granularity

Earlier studies address the challenge arising from sparse ambulance demand, which we face in the case of fine spatio-temporal granularities (Zhou et al. 2015, Zhou & Matteson 2016). For an extensive study of an iterative ANN architecture, we refer to Setzler et al. (2009), who investigate the influence of fine spatio-temporal granularities on the performance of an MLP and compare it to the Medic method. To close this gap for the domain of integrated spatio-temporal ANN architectures, we conduct a sensitivity analysis and investigate the CNN's performance for five different time intervals: 2 hours, 4 hours, 8 hours, 12 hours, and 24 hours. Similar to Setzler et al. (2009), we compare the performance of the developed CNN against the Medic method. Previous results show that the hierarchical BO and BO with dimension dropout yield superior results for high-dimensional search spaces. For this reason, we apply these approaches in the following experiments when tuning the CNN. For both models, we calculate the MSE obtained on the test set for each time interval. In addition, we compute the MSE for instances with zero and non-zero demands individually. Moreover, to take the amplitude of each time interval into account, we include the normalized root-mean-square error (NRMSE) in the analysis, calculated as follows

\[NRMSE=\frac{\sqrt{MSE}}{y_{max}-y_{min}}, \tag{6.1}\]

where \(y_{max}\) and \(y_{min}\) are the maximum and minimum observed ambulance demand across all time intervals and subregions. We show the obtained MSEs and NRMSEs for the incumbent CNN and Medic method in Figure 5. For all analyzed performance measures and time horizons, the CNN outperforms the Medic method. We see that both demand types, i.e., instances with zero and non-zero ambulance demand, can be better predicted by the CNN. The lower the granularity, the better the performance of the CNN for non-zero demands compared to the Medic method. This indicates that, in the case of higher demands, the CNN can better identify underlying patterns within the data. Considering the time interval's amplitude in this analysis, we further see that the NRMSE remains stable across all time intervals for both models. This shows that the prediction performance remains stable if we take into account the time intervals' emergency call volumes. Disregarding these amplitudes, the MSEs and the NRMSEs improve for finer time granularities due to the reduction in ambulance demand and the increase in zero-demands, which can be well predicted.

**Result 8**.: _The CNN outperforms the Medic method for all analyzed time intervals. Both demand types, i.e., instances with zero and non-zero ambulance demand, can be better predicted by the CNN._

Figure 5: Sensitivity Analysis: Time Granularity
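For reference, Eq. (6.1) translates directly into code; the demand arrays below are illustrative, and in the study \(y_{max}\) and \(y_{min}\) would be taken globally across all time intervals and subregions rather than from a single array.

```python
# Direct transcription of Eq. (6.1).
import numpy as np

def nrmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    mse = float(np.mean((y_true - y_pred) ** 2))
    return float(np.sqrt(mse) / (y_true.max() - y_true.min()))

y_true = np.array([0.0, 2.0, 5.0, 1.0])
y_pred = np.array([0.5, 1.5, 4.0, 1.0])
print(nrmse(y_true, y_pred))
```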
## 7 Conclusion

We presented a novel CNN architecture that transforms time series information into heatmaps to predict ambulance demand. It can incorporate historical ambulance demand and external features of varying dimensions, and it uses three-dimensional convolutional layers to detect correlations in space and time. We further introduced a hyperparameter tuning framework utilizing BO that intrinsically tunes the hyperparameters while selecting features. To tackle the high-dimensional search space, we applied BO with dimension dropout and introduced a novel hierarchical BO. We calculated SHAP values to investigate the contribution of each feature to the prediction, and we analyzed the CNN's performance for different forecasting horizons.

To apply the developed CNN to real data, we conducted a numerical case study for Seattle's 911 call data. Results show that the CNN outperforms all benchmarks by more than 9%. Although the ANNs select different features, they include similar types of information, confirming the relevance observed in earlier studies in the domain of ambulance demand prediction and in medical studies investigating the influence of weather data on people's health conditions. We further showed that the CNN is more robust against changes in the three-dimensional and upsampled input data. When including feature selection in the hyperparameter tuning process, we observed that excluding more than 50% of the features reduces the trainable parameters by 40.4%, while the performance loss of \(<0.05\%\) is negligible. Moreover, the CNN is superior to the current industry practice for all analyzed time horizons, ranging between 2-hour and 24-hour time-intervals.

In future work, the CNN architecture could be further optimized for fine space and time granularities, e.g., by giving non-zero demands more weight. Moreover, the derived features' importance could be used to improve the feature selection process. In addition, further investigating the convergence behavior of the BO when including feature selection decisions remains an interesting subject for future research.
2301.05393
Interaction-Aware Trajectory Planning for Autonomous Vehicles with Analytic Integration of Neural Networks into Model Predictive Control
Autonomous vehicles (AVs) must share the driving space with other drivers and often employ conservative motion planning strategies to ensure safety. These conservative strategies can negatively impact AV's performance and significantly slow traffic throughput. Therefore, to avoid conservatism, we design an interaction-aware motion planner for the ego vehicle (AV) that interacts with surrounding vehicles to perform complex maneuvers in a locally optimal manner. Our planner uses a neural network-based interactive trajectory predictor and analytically integrates it with model predictive control (MPC). We solve the MPC optimization using the alternating direction method of multipliers (ADMM) and prove the algorithm's convergence. We provide an empirical study and compare our method with a baseline heuristic method.
Piyush Gupta, David Isele, Donggun Lee, Sangjae Bae
2023-01-13T05:32:20Z
http://arxiv.org/abs/2301.05393v2
Interaction-Aware Trajectory Planning for Autonomous Vehicles with Analytic Integration of Neural Networks into Model Predictive Control

###### Abstract

Autonomous vehicles (AVs) must share the driving space with other drivers and often employ conservative motion planning strategies to ensure safety. These conservative strategies can negatively impact AV's performance and significantly slow traffic throughput. Therefore, to avoid conservatism, we design an interaction-aware motion planner for the ego vehicle (AV) that interacts with surrounding vehicles to perform complex maneuvers in a locally optimal manner. Our planner uses a neural network-based interactive trajectory predictor and analytically integrates it with model predictive control (MPC). We solve the MPC optimization using the alternating direction method of multipliers (ADMM) and prove the algorithm's convergence. We provide an empirical study and compare our method with a baseline heuristic method.

## I Introduction

Motion planning for autonomous vehicles (AVs) is a daunting task, where AVs must share the driving space with other drivers. Driving in shared spaces is inherently an interactive task, i.e., AV's actions affect other nearby vehicles and vice versa [1]. This interaction is evident in dense traffic scenarios where all goal-directed behavior relies on the cooperation of other drivers to achieve the desired goal. To predict the nearby vehicles' trajectories, AVs often rely on simple predictive models such as assuming constant speed for other vehicles [2], treating them as bounded disturbances [3], or approximating their trajectories using a set of known trajectories [4]. These models do not capture the inter-vehicle interactions in their predictions. As a result, AVs equipped with such models struggle under challenging scenarios that require interaction with other vehicles [5, 6].

AVs can be overly defensive and opaque when interacting with other drivers [7], as they often rely on decoupled prediction and planning techniques [8]. The prediction module anticipates the trajectories of other vehicles, and the planning module uses this information to find a collision-free path. As a result of this decoupling, AVs tend to be conservative and treat other vehicles as dynamic obstacles, resulting in a lack of cooperation [9]. Figure 1 shows two scenarios in which the ego vehicle intends to merge into the left lane, but the inter-vehicle gaps are too narrow. In such scenarios, conservative AVs with decoupled prediction and planning are forced to wait for a long duration. In contrast, we propose an interaction-aware AV that can open up a gap for itself by negotiating with other agents, i.e., by nudging them to either switch lanes (Fig. 1(a)) or change speeds (Fig. 1(b)).

Reinforcement learning (RL) techniques [10] have been used to learn control policies under interactive or unknown environments. For example, adversarial RL is designed to reach the desired goal under uncertain conditions [11], and a model-free RL agent is developed for lane-changing control in dense traffic environments [12]. However, these RL methods are not yet appropriate for safety-critical AVs due to their low interpretability and reliability. Designing interaction-aware planners presents a significant challenge, as predicting the reactions of surrounding vehicles to the ego vehicle's actions is complex and non-trivial.
Data-driven approaches, such as those using recurrent neural network architectures, have been effective in capturing the complex interactive behaviors of agents [13, 14], especially in predicting driver behavior with high accuracy and computational efficiency [14]. Therefore, it is desirable to utilize these data-driven methods to predict other vehicles' interactive behavior while maintaining safety through rigorous control theory and established vehicle dynamics models.

We propose a model predictive control (MPC) based motion planner that incorporates the AV's decision and surrounding vehicles' interactive behaviors into safety constraints to perform complex maneuvers. In particular, we provide a mathematical formulation for integrating the neural network's predictions in the MPC controller and provide methods to obtain a (locally) optimal solution. However, the neural network integration and non-linear system dynamics make the optimization highly non-convex and challenging to solve analytically. Thus, prior efforts [6, 15, 9] that integrate neural network prediction into MPC are numerical in nature and rely on heuristic algorithms to generate a finite set of trajectory candidates. In [6] and [15], the authors generate these candidates by random sampling of control trajectories, and by generating spiral curves from source to target lane, respectively. In [9], the authors utilize a predefined set of reference trajectory candidates. Instead of solving the optimization, these approaches evaluate the cost of each candidate and choose the minimum-cost trajectory that satisfies the safety constraint. Optimality is therefore restricted to trajectory candidates only, and the planner's performance depends on the heuristic algorithm design.

Fig. 1: Dense traffic scenarios where the ego vehicle (green) intends to merge to the left lane. The red and green trajectories show the nominal (conservative) and interaction-aware trajectories for the ego vehicles, respectively, and correspondingly, their impact on the other vehicles. Due to interaction with the ego vehicle (green trajectory), in scenario (a), the blue vehicle switches lanes, and in scenario (b), the blue vehicle slows down to create space for the ego vehicle to perform a safe lane-change maneuver.

In contrast to these prior efforts, we avoid heuristics, detail a proper formalization, and solve the optimization with provable optimality. The optimal solution provides key insights to design better planners and can be leveraged to compare trajectories obtained by other heuristic methods. The major contributions of this work are twofold: (i) we reformulate a highly complex MPC problem with a non-convex neural network and non-linear system dynamics, and systematically solve it using the Alternating Direction Method of Multipliers (ADMM) [16] with generic assumptions (Section III), and (ii) we investigate the mathematical properties of the ADMM algorithm for solving the MPC with an integrated neural network. Specifically, we provide sufficient conditions on the neural network such that the ADMM algorithm in non-convex optimization converges to a local optimum (Section IV). It is one of the first attempts in the literature toward provable mathematical guarantees for a neural network-integrated MPC.

## II Problem Formulation and Controller Design

We design an MPC controller that leverages interactive behaviors of surrounding \(N\in\mathbb{N}\) vehicles conditioned on the ego vehicle's future actions.
The key to leveraging interactions is to integrate a neural network and interactively update controls with step-size \(\Delta t\in\mathbb{R}_{>0}\) based on its inference (i.e., predicted positions during updates). This section further details the mathematical formulation of the MPC with the neural network. Motivated by [17], we use bicycle kinematics. The corresponding states are [\(\mathrm{xy}\)-coordinates, heading angle, speed], denoted by \(z(\tau)=[x(\tau),y(\tau),\psi(\tau),v(\tau)]^{\top}\) for all \(\tau\in\{0,\dots,T_{p}\}\), and the control inputs are [acceleration, steering angle], denoted by \([a(\tau),\delta(\tau)]\) for all \(\tau\in\{0,\dots,T_{p}-1\}\), with the planning horizon \(T_{p}\in\mathbb{N}\). For brevity, let \(g(\tau)\) denote any general function \(g(\cdot)\) at discrete time-step \(\tau\in\mathbb{Z}_{\geq 0}\) with respect to (w.r.t.) time \(t\), i.e., \(g(\tau)\equiv g(t+\tau\Delta t)\). Then, at any time \(t\), we solve the MPC to obtain the optimal control trajectories \(\mathbf{\Delta}^{*}(t)\in\mathcal{D}\subset\mathbb{R}^{T_{p}}\) and \(\mathbf{\alpha}^{*}(t)\in\mathcal{A}\subset\mathbb{R}^{T_{p}}\), and the corresponding optimal state trajectory \(\mathbf{Z}^{*}(t)\in\mathcal{Z}\subset\mathbb{R}^{4T_{p}}\), where:

\[\mathbf{\Delta}^{*}(t)=\begin{bmatrix}\delta^{*}(0),\dots,\delta^{*}(T_{p}-1)\end{bmatrix}^{\top},\qquad\mathcal{D}=[\delta_{min},\delta_{max}],\]
\[\mathbf{\alpha}^{*}(t)=\begin{bmatrix}a^{*}(0),\dots,a^{*}(T_{p}-1)\end{bmatrix}^{\top},\qquad\mathcal{A}=[a_{min},a_{max}],\]
\[\mathbf{Z}^{*}(t)=\begin{bmatrix}z^{*}(1),\dots,z^{*}(T_{p})\end{bmatrix}^{\top},\qquad\mathcal{Z}=[z_{min},z_{max}].\]

### _Objective function_

The controller's objective is to move from the current lane to the desired lane as soon as possible while minimizing control effort and ensuring safety and smoothness. Let \(x^{\text{ref}}\) denote the maximum longitudinal coordinate by which the ego vehicle must transition to the target lane. Let \(\|\cdot\|\) denote the Euclidean norm. For \(x<x^{\text{ref}}\), we utilize the following objective (cost) function \(J(\mathbf{\Delta}(t),\mathbf{\alpha}(t),\mathbf{Z}(t))\), similar to [15]:

\[J=\sum_{\tau=1}^{T_{p}}\lambda_{div}\|y(\tau)-y^{\text{ref}}\|^{2}+\sum_{\tau=1}^{T_{p}}\lambda_{v}\|v(\tau)-v^{\text{ref}}\|^{2}\quad\text{(error)}\]
\[+\sum_{\tau=0}^{T_{p}-1}\lambda_{\delta}\|\delta(\tau)\|^{2}+\sum_{\tau=0}^{T_{p}-1}\lambda_{a}\|a(\tau)\|^{2}\quad\text{(control effort)}\]
\[+\sum_{\tau=0}^{T_{p}-1}\lambda_{\Delta\delta}\|\delta(\tau)-\delta(\tau-1)\|^{2}\quad\text{(steering rate)}\]
\[+\sum_{\tau=0}^{T_{p}-1}\lambda_{\Delta a}\|a(\tau)-a(\tau-1)\|^{2},\quad\text{(jerk)}\]

where \(\mathbf{\Delta}(t)\in\mathcal{D}\), \(\mathbf{\alpha}(t)\in\mathcal{A}\), and \(\mathbf{Z}(t)\in\mathcal{Z}\) are the planned steering, acceleration, and state trajectories, respectively. \(y^{\text{ref}}\in\mathbb{R}\) and \(v^{\text{ref}}\in\mathbb{R}_{>0}\) are the reference lateral coordinate of the desired lane and the desired velocity, respectively, provided by a high-level planner [18]. For a detailed description of each term, we refer the interested readers to [15].
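For illustration, the objective above can be transcribed directly into numpy; the default weights follow Table I later in the paper, while the array and argument names (including the convention of passing the last applied inputs for the \(\tau=-1\) terms) are our own.

```python
# A minimal numpy transcription of the cost J; delta, a, y, v hold the
# planned trajectories over the horizon, prev_delta/prev_a are the last
# applied inputs used by the first steering-rate and jerk differences.
import numpy as np

def cost_J(delta, a, y, v, y_ref, v_ref, prev_delta, prev_a,
           lam_div=1.0, lam_v=1.0, lam_d=0.6, lam_a=0.4,
           lam_dd=0.4, lam_da=0.2):
    err = lam_div * np.sum((y - y_ref) ** 2) + lam_v * np.sum((v - v_ref) ** 2)
    effort = lam_d * np.sum(delta ** 2) + lam_a * np.sum(a ** 2)
    rate = lam_dd * np.sum(np.diff(np.concatenate([[prev_delta], delta])) ** 2)
    jerk = lam_da * np.sum(np.diff(np.concatenate([[prev_a], a])) ** 2)
    return err + effort + rate + jerk

# Toy usage over a horizon of T_p = 8 steps.
Tp = 8
print(cost_J(np.zeros(Tp), np.ones(Tp) * 0.1, np.linspace(0, 3.5, Tp),
             np.full(Tp, 10.0), y_ref=3.5, v_ref=12.0,
             prev_delta=0.0, prev_a=0.0))
```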
### _State Dynamics_

Let \(\tilde{\delta},\tilde{a}\), and \(\tilde{z}\) be the last observed steering input, acceleration input, and state of the ego vehicle, respectively. At any time \(t\), we linearly approximate the discrete-time kinematic bicycle model [17] of the form \(z(\tau+1)=f(\delta(\tau),a(\tau),z(\tau))\) about \((\tilde{\delta},\tilde{a},\tilde{z})\) to obtain the equality constraints for the optimization problem. We have

\[f(\delta(\tau),a(\tau),z(\tau))\approx\tilde{A}\delta(\tau)+\tilde{B}a(\tau)+\tilde{C}z(\tau)+\tilde{D}, \tag{1}\]

where \(\tilde{A}\in\mathbb{R}^{4},\tilde{B}\in\mathbb{R}^{4},\tilde{C}\in\mathbb{R}^{4\times 4}\), and \(\tilde{D}\in\mathbb{R}^{4}\) are constant matrices given by \(\tilde{A}:=\frac{\partial f}{\partial\delta}\big|_{(\tilde{\delta},\tilde{a},\tilde{z})}\), \(\tilde{B}:=\frac{\partial f}{\partial a}\big|_{(\tilde{\delta},\tilde{a},\tilde{z})}\), \(\tilde{C}:=\frac{\partial f}{\partial z}\big|_{(\tilde{\delta},\tilde{a},\tilde{z})}\), and \(\tilde{D}:=f(\tilde{\delta},\tilde{a},\tilde{z})-\tilde{A}\tilde{\delta}-\tilde{B}\tilde{a}-\tilde{C}\tilde{z}\), respectively. Hence, the linearized system dynamics is given by:

\[z(\tau+1)=\tilde{A}\delta(\tau)+\tilde{B}a(\tau)+\tilde{C}z(\tau)+\tilde{D}\]
\[\implies\tilde{A}\delta(\tau)+\tilde{B}a(\tau)+\tilde{C}z(\tau)-z(\tau+1)+\tilde{D}=0. \tag{2}\]

The equality constraints based on the system dynamics over the \(T_{p}\) planning time-steps can be written as:

\[F(\mathbf{\Delta},\mathbf{\alpha},\mathbf{Z}):=A\mathbf{\Delta}+B\mathbf{\alpha}+C\mathbf{Z}+D=0, \tag{3}\]

where \(A\in\mathbb{R}^{4T_{p}\times T_{p}},B\in\mathbb{R}^{4T_{p}\times T_{p}},C\in\mathbb{R}^{4T_{p}\times 4T_{p}}\), and \(D\in\mathbb{R}^{4T_{p}}\) are constant matrices given by:

\[A=\begin{bmatrix}\tilde{A}&\mathbf{0}&\mathbf{0}&\cdots\\ \mathbf{0}&\tilde{A}&\mathbf{0}&\cdots\\ \vdots&\vdots&\vdots&\ddots\end{bmatrix},\ B=\begin{bmatrix}\tilde{B}&\mathbf{0}&\mathbf{0}&\cdots\\ \mathbf{0}&\tilde{B}&\mathbf{0}&\cdots\\ \vdots&\vdots&\vdots&\ddots\end{bmatrix},\]
\[C=\begin{bmatrix}-\mathbf{I}&\mathbf{0}&\mathbf{0}&\cdots\\ \tilde{C}&-\mathbf{I}&\mathbf{0}&\cdots\\ \mathbf{0}&\tilde{C}&-\mathbf{I}&\cdots\\ \vdots&\vdots&\vdots&\ddots\end{bmatrix},\ D=\begin{bmatrix}\tilde{D}+\tilde{C}z(0)\\ \tilde{D}\\ \tilde{D}\\ \vdots\end{bmatrix}, \tag{4}\]

\(\mathbf{0}\) and \(\mathbf{I}\) denote the zero and identity matrix, respectively.

**Remark 1**: _To simplify the optimization, we linearly approximate the system dynamics before solving the MPC. This is possible because the control inputs obtained through the MPC are only applied for a single time-step, using a receding horizon control approach [19]. As a result, any linearization errors from previous time-steps do not affect the MPC optimization._

### _Safety Constraints_

The safety constraints for collision avoidance depend on the nearby vehicles' trajectory prediction and the vehicle shape model. Let \(\mathcal{V}\) denote the set of nearby vehicles surrounding the ego vehicle. Let \(\phi(\tau)\) be a trained neural network that jointly predicts the future trajectories of the ego vehicle and its surrounding vehicles for \(T_{pred}\) time-steps into the future based on their trajectories for \(T_{obs}\) time-steps in the past. With \(T_{pred}=1\), \(\phi(\tau)\) is given by:

\[\phi(\tau):\begin{bmatrix}(x(\tau),y(\tau))&\cdots&(x_{N}(\tau),y_{N}(\tau))\\ \vdots&\ddots&\vdots\\ (x(\tau-T_{obs}+1),y(\tau-T_{obs}+1))&\cdots&(x_{N}(\tau-T_{obs}+1),y_{N}(\tau-T_{obs}+1))\end{bmatrix}\mapsto\big[(\hat{x}(\tau+1),\hat{y}(\tau+1))\ \cdots\ (\hat{x}_{N}(\tau+1),\hat{y}_{N}(\tau+1))\big],\]

where the first column represents the positions of the ego vehicle, followed by the positions of the \(N\) surrounding vehicles.
Given the buffer of \(T_{obs}\) past observations until time-step \(\tau\), the coordinates of vehicle \(i\in\mathcal{V}\) at time-step \(\tau+1\) are represented as:

\[\hat{x}_{i}(\tau+1)=\phi_{i,x}(\tau),\qquad\hat{y}_{i}(\tau+1)=\phi_{i,y}(\tau). \tag{5}\]

Some examples of the neural network \(\phi(\tau)\) include the social generative adversarial network (SGAN) [14] and the graph-based spatial-temporal convolutional network (GSTCN) [20].

**Remark 2**: _Interactive predictions over the planning horizon \(T_{p}\) are computed recursively using \(\phi(t)\) with \(T_{pred}=1\), based on the latest reactive predictions and ego vehicle positions from the MPC's candidate solution trajectory._

We model the vehicle shape using a single circle to obtain a smooth and continuously differentiable distance measure, enabling gradient-based optimization methods. Let \((x,y)\) and \((\hat{x}_{i},\hat{y}_{i})\) be the position of the ego vehicle and the predicted positions of the surrounding vehicles \(i\in\mathcal{V}\) (obtained using \(\phi(\tau)\)), respectively. Let \(r,r_{i}\in\mathbb{R}_{>0}\) be the radii of the circles modeling the ego vehicle and vehicle \(i\), respectively. The safety constraint for the ego vehicle w.r.t. vehicle \(i\) then reads:

\[d_{i}(x,y,\hat{x}_{i},\hat{y}_{i})=(x-\hat{x}_{i})^{2}+(y-\hat{y}_{i})^{2}-(r+r_{i}+\epsilon)^{2}>0, \tag{6}\]

where \(\epsilon\in\mathbb{R}_{>0}\) is a safety bound.

**Remark 3**: _Using the single circle model, the safety constraints can be conservative, and consequently, the feasible solutions could be restrictive in some situations. We use it for its simplicity and to reduce the number of safety constraints. Some other alternatives for modeling the vehicle shape include the ellipsoid model [21] and the three-circle model [6]._

### _Formulation of the Optimization problem_

We now present the complete optimization problem for the receding horizon control in a compact form:

\[\min_{\mathbf{\Delta},\mathbf{\alpha},\mathbf{Z}}\ J=\Phi_{1}(\mathbf{\Delta})+\Phi_{2}(\mathbf{\alpha})+\Phi_{3}(\mathbf{Z}), \tag{7}\]
subject to
\[F(\mathbf{\Delta},\mathbf{\alpha},\mathbf{Z})=0,\quad b_{i}(\mathbf{Z})>0,\ i\in\mathcal{V}, \tag{8}\]
\[\mathbf{\Delta}\in\mathcal{D},\ \mathbf{\alpha}\in\mathcal{A},\ \mathbf{Z}\in\mathcal{Z},\ \text{where} \tag{9}\]
\[\Phi_{1}(\mathbf{\Delta})=\sum_{\tau=0}^{T_{p}-1}\lambda_{\delta}\|\delta(\tau)\|^{2}+\sum_{\tau=0}^{T_{p}-1}\lambda_{\Delta\delta}\|\delta(\tau)-\delta(\tau-1)\|^{2},\]
\[\Phi_{2}(\mathbf{\alpha})=\sum_{\tau=0}^{T_{p}-1}\lambda_{a}\|a(\tau)\|^{2}+\sum_{\tau=0}^{T_{p}-1}\lambda_{\Delta a}\|a(\tau)-a(\tau-1)\|^{2},\]
\[\Phi_{3}(\mathbf{Z})=\sum_{\tau=1}^{T_{p}}\lambda_{div}\|y(\tau)-y^{\text{ref}}\|^{2}+\sum_{\tau=1}^{T_{p}}\lambda_{v}\|v(\tau)-v^{\text{ref}}\|^{2},\]
\[b_{i}(\mathbf{Z})=\begin{bmatrix}d_{i}(x(1),y(1),\phi_{i,x}(0),\phi_{i,y}(0))\\ \vdots\\ d_{i}(x(T_{p}),y(T_{p}),\phi_{i,x}(T_{p}-1),\phi_{i,y}(T_{p}-1))\end{bmatrix}.\]

In the next section, we solve the optimization using ADMM to determine a safe and interactive ego vehicle's trajectory.
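A hedged sketch of how Eq. (6) and the recursion of Remark 2 fit together; `predictor` is a stand-in for a trained one-step model such as SGAN, the array shapes are our own convention, and the constant-velocity lambda merely emulates the network for the toy run.

```python
# Roll a one-step predictor over the planning horizon, overwriting the ego
# row with the MPC candidate, then evaluate the circle-based safety margin.
import numpy as np

def safety_margin(ego_xy, other_xy, r=1.0, r_i=1.0, eps=0.5):
    # Eq. (6): a positive value means the constraint d_i > 0 holds.
    return np.sum((ego_xy - other_xy) ** 2) - (r + r_i + eps) ** 2

def rollout(predictor, history, ego_plan):
    """history: (T_obs, N+1, 2) past positions (ego first); ego_plan: (T_p, 2)."""
    preds, buf = [], history.copy()
    for tau in range(len(ego_plan)):
        nxt = predictor(buf)                        # (N+1, 2) one-step prediction
        nxt[0] = ego_plan[tau]                      # ego follows the MPC candidate
        preds.append(nxt)
        buf = np.concatenate([buf[1:], nxt[None]])  # slide the T_obs window
    return np.stack(preds)                          # (T_p, N+1, 2)

# Toy usage with a constant-velocity stand-in for the neural network.
cv = lambda b: 2 * b[-1] - b[-2]
hist = np.cumsum(np.ones((5, 3, 2)) * 0.5, axis=0)
plan = np.linspace([2.5, 0.0], [6.0, 3.5], 8)
traj = rollout(cv, hist, plan)
print(safety_margin(traj[0, 0], traj[0, 1]))
```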
## III Solving MPC with ADMM

There are many mathematical challenges associated with the MPC problem in Section II: namely, the nonlinear system dynamics, the non-convex safety constraints, and the dependence of the neural network predictions on its predictions at previous time steps (\(T_{obs}\neq 1\)). We now detail the systematic steps to solve this complex problem using ADMM, addressing the aforementioned mathematical challenges. First, we construct a Lagrangian by moving the safety constraints, \(b_{i}(\mathbf{Z})>0,\ i\in\mathcal{V}\), into the optimization objective:

\[\min_{\mathbf{\Delta},\mathbf{\alpha},\mathbf{Z}}\ J=\Phi_{1}(\mathbf{\Delta})+\Phi_{2}(\mathbf{\alpha})+\Phi_{3}(\mathbf{Z})-\sum_{i=1}^{N}\lambda_{s}^{\top}b_{i}(\mathbf{Z}), \tag{10}\]
subject to
\[F(\mathbf{\Delta},\mathbf{\alpha},\mathbf{Z})=0, \tag{11}\]
\[\mathbf{\Delta}\in\mathcal{D},\ \mathbf{\alpha}\in\mathcal{A},\ \mathbf{Z}\in\mathcal{Z}, \tag{12}\]

where \(\lambda_{s}\in\mathbb{R}_{>0}^{T_{p}}\) is the vector of Lagrange multipliers.

**Remark 4**: _For theoretical analysis, we incorporate safety constraints into the optimization objective, but for our simulation study, we enforce them as hard constraints._

The optimization problem (10)-(12) is separable, and the optimization variables \(\mathbf{\Delta},\mathbf{\alpha},\mathbf{Z}\) are decoupled in the objective function. Following the convention in [22], the augmented Lagrangian is given by:

\[\mathcal{L}_{\rho}(\mathbf{\Delta},\mathbf{\alpha},\mathbf{Z})=\Phi_{1}(\mathbf{\Delta})+\Phi_{2}(\mathbf{\alpha})+\Phi_{3}(\mathbf{Z})-\sum_{i=1}^{N}\lambda_{s}^{\top}b_{i}(\mathbf{Z})+\mu^{\top}F(\mathbf{\Delta},\mathbf{\alpha},\mathbf{Z})+\left(\frac{\rho}{2}\right)\|F(\mathbf{\Delta},\mathbf{\alpha},\mathbf{Z})\|^{2}, \tag{13}\]

where \(\rho>0\) is the ADMM Lagrangian parameter and \(\mu\) is the dual variable associated with the constraint (11). The complete algorithm is given by Algorithm 1; a schematic sketch of the full loop follows at the end of this section. Next, we provide details for solving each of the local optimization problems at iteration \(k\).

_Update \(\mathbf{\Delta}^{(k+1)}=\operatorname*{argmin}_{\mathbf{\Delta}\in\mathcal{D}}\ \mathcal{L}_{\rho}(\mathbf{\Delta},\mathbf{\alpha}^{(k)},\mathbf{Z}^{(k)})\)_

The sub-optimization problem for \(\mathbf{\Delta}^{(k+1)}\) is given by

\[\operatorname*{argmin}_{\mathbf{\Delta}}\ \sum_{\tau=0}^{T_{p}-1}\lambda_{\delta}\|\delta(\tau)\|^{2}+\sum_{\tau=0}^{T_{p}-1}\lambda_{\Delta\delta}\|\delta(\tau)-\delta(\tau-1)\|^{2}+\mu^{(k)\top}A\mathbf{\Delta}+\left(\frac{\rho}{2}\right)\|A\mathbf{\Delta}-c_{\mathbf{\Delta}}^{(k)}\|^{2}, \tag{14}\]
\[\text{subject to}\quad\delta(\tau)\in[\delta_{\min},\delta_{\max}],\]

where \(c_{\mathbf{\Delta}}^{(k)}=A\mathbf{\Delta}^{(k)}-F(\mathbf{\Delta}^{(k)},\mathbf{\alpha}^{(k)},\mathbf{Z}^{(k)})\). It is a convex problem; hence, we can use a canonical convex optimization algorithm [22] to find the optimal solution.

_Update \(\mathbf{\alpha}^{(k+1)}=\operatorname*{argmin}_{\mathbf{\alpha}\in\mathcal{A}}\ \mathcal{L}_{\rho}(\mathbf{\Delta}^{(k+1)},\mathbf{\alpha},\mathbf{Z}^{(k)})\)_

The sub-optimization problem for \(\mathbf{\alpha}^{(k+1)}\) is given by

\[\operatorname*{argmin}_{\mathbf{\alpha}}\ \sum_{\tau=0}^{T_{p}-1}\lambda_{a}\|a(\tau)\|^{2}+\sum_{\tau=0}^{T_{p}-1}\lambda_{\Delta a}\|a(\tau)-a(\tau-1)\|^{2}+\mu^{(k)\top}B\mathbf{\alpha}+\left(\frac{\rho}{2}\right)\|B\mathbf{\alpha}-c_{\mathbf{\alpha}}^{(k)}\|^{2}, \tag{15}\]
\[\text{subject to}\quad a(\tau)\in[a_{\min},a_{\max}],\]

where \(c_{\mathbf{\alpha}}^{(k)}=B\mathbf{\alpha}^{(k)}-F(\mathbf{\Delta}^{(k+1)},\mathbf{\alpha}^{(k)},\mathbf{Z}^{(k)})\). It is a convex problem; hence, we can use a canonical convex optimization algorithm [22] to find the optimal solution.
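For concreteness, each of the two convex updates is a small box-constrained quadratic program; a possible minimal solve of the \(\mathbf{\Delta}\)-update (14) with scipy, using random stand-ins for \(A\), \(\mu^{(k)}\), and \(c_{\mathbf{\Delta}}^{(k)}\) and the steering bounds as box constraints, is sketched below.

```python
# Solve one Delta-update as a bound-constrained minimization; the toy
# dimensions, matrices, and weights are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

Tp, rho, lam_d, lam_dd = 8, 100.0, 0.6, 0.4
rng = np.random.default_rng(0)
A_mat = rng.normal(size=(4 * Tp, Tp))
mu = rng.normal(size=4 * Tp)
c_k = rng.normal(size=4 * Tp)
prev_delta = 0.0

def sub_objective(d):
    diffs = np.diff(np.concatenate([[prev_delta], d]))
    return (lam_d * np.sum(d ** 2) + lam_dd * np.sum(diffs ** 2)
            + mu @ (A_mat @ d) + 0.5 * rho * np.sum((A_mat @ d - c_k) ** 2))

res = minimize(sub_objective, np.zeros(Tp), method="L-BFGS-B",
               bounds=[(-0.5, 0.5)] * Tp)   # steering limits as box bounds
print(res.x)
```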
_Update \(\mathbf{Z}^{(k+1)}=\operatorname*{argmin}_{\mathbf{Z}\in\mathcal{Z}}\ \mathcal{L}_{\rho}(\mathbf{\Delta}^{(k+1)},\mathbf{\alpha}^{(k+1)},\mathbf{Z})\)_

The sub-optimization problem for \(\mathbf{Z}^{(k+1)}\) is given by

\[\operatorname*{argmin}_{\mathbf{Z}}\ \sum_{\tau=1}^{T_{p}}\lambda_{div}\|y(\tau)-y^{\text{ref}}\|^{2}+\sum_{\tau=1}^{T_{p}}\lambda_{v}\|v(\tau)-v^{\text{ref}}\|^{2}-\sum_{i=1}^{N}\lambda_{s}^{\top}b_{i}(\mathbf{Z})+\mu^{(k)\top}C\mathbf{Z}+\left(\frac{\rho}{2}\right)\|C\mathbf{Z}-c_{\mathbf{Z}}^{(k)}\|^{2}, \tag{16}\]
\[\text{subject to}\quad z(\tau)\in[z_{\min},z_{\max}], \tag{17}\]

where \(c_{\mathbf{Z}}^{(k)}=C\mathbf{Z}^{(k)}-F(\mathbf{\Delta}^{(k+1)},\mathbf{\alpha}^{(k+1)},\mathbf{Z}^{(k)})\). Due to the non-convexity of the neural network in \(b_{i}(\mathbf{Z})\), the objective function (16) is non-convex. We prefer a quasi-Newton method for this optimization to avoid expensive Hessian computations at each step. Hence, we utilize the BFGS-SQP method [23], which employs BFGS Hessian approximations within a sequential quadratic optimization and does not assume any special structure in the objective or constraints. As a solver, we use PyGranso [24], a PyTorch-enabled port of GRANSO, which enables gradient computation by back-propagating the neural network's gradients at each iteration.

**Remark 5**: _The state trajectory \(\mathbf{Z}\) update has the largest complexity in the problem due to the presence of the non-convex neural network predictions. To expedite the \(\mathbf{Z}\) update, an offline-trained function approximator such as a neural network can be utilized to estimate the gradients of the original neural network. The training dataset for the gradient approximator can be generated using automatic differentiation or central-differences approximations with the original network._

Henceforth, we refer to our method as ADMM-NNMPC.
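To make the structure of Algorithm 1's loop concrete, the following self-contained toy mirrors the three block updates and the dual ascent step on a miniature equality-constrained quadratic instance; the random matrices and the closed-form ridge updates are illustrative stand-ins for the paper's constrained sub-solvers, not the actual MPC problem.

```python
# Three-block ADMM skeleton: Delta-, alpha-, and Z-style updates followed by
# a dual ascent step on the dynamics residual F.
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 4  # toy sizes: constraint rows and per-block variables
A, B, C = rng.normal(size=(n, m)), rng.normal(size=(n, m)), rng.normal(size=(n, m))
Dvec = rng.normal(size=n)
F = lambda d, a, z: A @ d + B @ a + C @ z + Dvec

def block_update(M, c, mu, rho):
    # argmin_x ||x||^2 + mu^T (M x) + (rho/2) ||M x - c||^2, in closed form.
    return np.linalg.solve(2 * np.eye(M.shape[1]) + rho * M.T @ M,
                           M.T @ (rho * c - mu))

delta = alpha = Z = np.zeros(m)
mu, rho = np.zeros(n), 100.0
for _ in range(200):
    delta = block_update(A, A @ delta - F(delta, alpha, Z), mu, rho)
    alpha = block_update(B, B @ alpha - F(delta, alpha, Z), mu, rho)
    Z = block_update(C, C @ Z - F(delta, alpha, Z), mu, rho)
    mu = mu + rho * F(delta, alpha, Z)  # dual ascent on the residual
print(np.linalg.norm(F(delta, alpha, Z)))  # remaining dynamics residual
```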
## IV Convergence of MPC with ADMM

Due to the inherent non-convexity of the neural network, the rigorous convergence analysis of ADMM in [16] is not readily applicable. Thus, we extend the convergence analysis of ADMM with an integrated neural network, i.e., the convergence of the inner while loop in Algorithm 1. We first make the following assumptions on the neural network:

(A1) At any time-step \(\tau\in[0,T_{p}]\), the neural network's outputs are bounded, i.e., \(|\phi_{i,x}(\tau)|\leq s_{x}\) and \(|\phi_{i,y}(\tau)|\leq s_{y}\), \(i\in\mathcal{V}\), where \(s_{x},s_{y}\in\mathbb{R}_{>0}\) are constants.

(A2) At any time-step \(\tau\in[0,T_{p}]\), the gradients of the neural network's outputs w.r.t. the input ego trajectory exist and are bounded, i.e., \(\|\frac{\partial\phi_{i,x}(t)}{\partial\mathbf{Z}}\|_{\infty}\leq\theta_{x}\) and \(\|\frac{\partial\phi_{i,y}(t)}{\partial\mathbf{Z}}\|_{\infty}\leq\theta_{y}\) for all \(i\in\mathcal{V}\), where \(\theta_{x},\theta_{y}\in\mathbb{R}_{>0}\) are constants and \(\|\cdot\|_{\infty}\) is the max norm of a vector.

(A3) At any time-step \(\tau\in[0,T_{p}]\), the neural network's outputs are Lipschitz differentiable, i.e., \(\|\nabla\phi_{i,x}(\mathbf{Z}_{1})-\nabla\phi_{i,x}(\mathbf{Z}_{2})\|\leq L_{\nabla\phi}\|\mathbf{Z}_{1}-\mathbf{Z}_{2}\|\) and \(\|\nabla\phi_{i,y}(\mathbf{Z}_{1})-\nabla\phi_{i,y}(\mathbf{Z}_{2})\|\leq L_{\nabla\phi}\|\mathbf{Z}_{1}-\mathbf{Z}_{2}\|\) for all \(i\in\mathcal{V}\), \(\mathbf{Z}_{1},\mathbf{Z}_{2}\in\mathcal{Z}\), where \(L_{\nabla\phi}\in\mathbb{R}_{>0}\) is the Lipschitz constant for the neural network's gradient.

Assumptions (A1)-(A3) are sufficient conditions under which the objective function (10) is Lipschitz differentiable, i.e., it is differentiable and its gradient is Lipschitz continuous. This allows us to establish the convergence of Algorithm 1. Assumption (A1) is satisfied for a trained neural network with a bounded input space. Furthermore, neural network outputs can be clipped based on the feasible region. Lastly, neural networks with \(C^{2}\) activation functions such as the Gaussian Error Linear Unit (GELU) [25] and the Smooth Maximum Unit (SMU) [26] satisfy assumptions (A2)-(A3).

**Remark 6**: _Assumptions (A1)-(A3) are sufficient conditions and not necessary conditions. If the neural network architecture is unknown or it doesn't satisfy the assumptions, knowledge distillation [27] can be used to train a smaller (student) network that satisfies the assumptions from the large (teacher) pre-trained network._

**Theorem 1**: _[**Convergence of MPC with ADMM**] Under the assumptions (A1)-(A3), the inner while loop in Algorithm 1 converges subsequently for any sufficiently large \(\rho>\max\{1,(1+2\sigma_{\min}(C))L_{J}M\}\), where \(\sigma_{\min}(C)\) is the smallest positive singular value of \(C\) in (4), \(L_{J}\) is the Lipschitz constant for \(J\) in (10), and \(M\) is the Lipschitz constant for sub-minimization paths as defined in Lemma 2. Therefore, starting from any \(\mathbf{\Delta}^{(0)},\mathbf{\alpha}^{(0)},\mathbf{Z}^{(0)},\mu^{(0)}\), it generates a sequence that is bounded, has at least one limit point, and each limit point \(\mathbf{\Delta}^{*},\mathbf{\alpha}^{*},\mathbf{Z}^{*},\mu^{*}\) is a stationary point of \(\mathcal{L}_{\rho}\) satisfying \(\nabla\mathcal{L}_{\rho}(\mathbf{\Delta}^{*},\mathbf{\alpha}^{*},\mathbf{Z}^{*},\mu^{*})=0\)._

We prove Theorem 1 using Lemmas 1-3.

**Lemma 1**: **[Feasibility]** Let \(Q:=[A,B]\). Then \(\text{Im}(Q)\subseteq\text{Im}(C)\), where \(\text{Im}(\cdot)\) returns the image of a matrix, and \(A,B,\) and \(C\) are defined in (4). See Appendix A for the proof.

**Lemma 2**: **[Lipschitz sub-minimization paths]** The following statements hold for the optimization problem:

1. For any fixed \(\boldsymbol{\alpha},\boldsymbol{Z}\), \(H_{1}:Im(A)\rightarrow\mathbb{R}^{T_{p}}\) defined by \(H_{1}(u)\triangleq\operatorname*{argmin}_{\boldsymbol{\Delta}}\{J(\boldsymbol{\Delta},\boldsymbol{\alpha},\boldsymbol{Z}):A\boldsymbol{\Delta}=u\}\) is unique and a Lipschitz continuous map.
2. For any fixed \(\boldsymbol{\Delta},\boldsymbol{Z}\), \(H_{2}:Im(B)\rightarrow\mathbb{R}^{T_{p}}\) defined by \(H_{2}(u)\triangleq\operatorname*{argmin}_{\boldsymbol{\alpha}}\{J(\boldsymbol{\Delta},\boldsymbol{\alpha},\boldsymbol{Z}):B\boldsymbol{\alpha}=u\}\) is unique and a Lipschitz continuous map.
3. For any fixed \(\boldsymbol{\Delta},\boldsymbol{\alpha}\), \(H_{3}:Im(C)\rightarrow\mathbb{R}^{4T_{p}}\) defined by \(H_{3}(u)\triangleq\operatorname*{argmin}_{\boldsymbol{Z}}\{J(\boldsymbol{\Delta},\boldsymbol{\alpha},\boldsymbol{Z}):C\boldsymbol{Z}=u\}\) is unique and a Lipschitz continuous map,

where \(A,B,\) and \(C\) are defined in (4). Moreover, \(H_{1},H_{2},H_{3}\) have a universal Lipschitz constant \(M>0\). See Appendix B for the proof.

**Lemma 3**: **[Lipschitz Differentiability]** Under the assumptions (A1)-(A3), the objective function \(J(\boldsymbol{\Delta},\boldsymbol{\alpha},\boldsymbol{Z})\) in (10) is Lipschitz differentiable. See Appendix C for the proof.

## V Simulation Study

We now present the simulation results for ADMM-NNMPC. Figures 2 and 3 show the vehicles' positions at different time steps in two scenarios in which the ego vehicle (red) intends to merge into the left lane, which is occupied by four other vehicles (blue) with a narrow inter-vehicle gap. In the two-lane scenario (Fig. 2), other vehicles can only change their speeds, while in the three-lane scenario (Fig. 3), other vehicles can also move laterally to transition into the leftmost lane. The other vehicles' positions at different time steps match the neural network's predictions, and hence, the ego vehicle's actions affect the trajectories of the other vehicles. In both scenarios, the ego vehicle is able to interact with the other agents and open a gap for itself to merge into.

We compare ADMM-NNMPC with a baseline method called NNMPC [15] in the two-lane and three-lane scenarios by utilizing the same cost function (cost function coefficients listed in Table I) and \(T_{p}=8\) time steps. NNMPC generates trajectory candidates by computing a finite set of spiral curves from the source lane to the target lane and selects the candidate with minimum cost. In both methods, we use a trained SGAN neural network [14] for interactive motion prediction of the other vehicles. Table II compares the simulation results for the baseline NNMPC and ADMM-NNMPC in the two-lane and three-lane scenarios. In the two-lane scenario, while ADMM-NNMPC successfully merges into the left lane, the NNMPC method fails to make a lane change due to limited trajectory candidates. In the three-lane scenario, ADMM-NNMPC successfully switches lanes much faster than NNMPC. Furthermore, ADMM-NNMPC outperforms NNMPC in terms of maximum cost and minimum distance from other vehicles in both scenarios.

Figure 4(a) compares the trajectory (top) and cost (bottom) of the ADMM-NNMPC and NNMPC solutions in the two-lane (left) and three-lane (right) scenarios until \(x_{ref}=25\). In both scenarios, while ADMM-NNMPC successfully merges into the left lane, NNMPC fails to switch lanes before \(x_{ref}\) due to limited trajectory candidates. Furthermore, ADMM-NNMPC's cost is lower than the NNMPC solution's at every time step, since ADMM-NNMPC solves the optimization. Figure 4(b) compares the steering (top) and acceleration (bottom) trajectories for the ADMM-NNMPC and NNMPC solutions in the two-lane (left) and three-lane (right) scenarios.
Since ADMM-NNMPC solves for the optimal solution, it actively interacts with the other vehicles to open a gap for itself to merge into. Therefore, the steering trajectory in ADMM-NNMPC is more aggressive than that of NNMPC. Lastly, the acceleration changes gradually in ADMM-NNMPC to reach the desired speed while minimizing jerk.

Fig. 2: **Two-lane scenario:** (a)-(d) show the ADMM-NNMPC solution in a two-lane scenario after \(0,5,7,\) and \(13\) time steps, respectively. The ego vehicle (red) opens a gap by nudging the vehicles to change their speeds.

Fig. 3: **Three-lane scenario:** (a)-(d) show the ADMM-NNMPC solution in a three-lane scenario after \(0,3,5,\) and \(9\) time steps, respectively. The ego vehicle (red) opens a gap for itself by nudging the vehicles to transition into the left-most lane.

Fig. 4: (a) compares the trajectory (top) and cost (bottom) of the ADMM-NNMPC and NNMPC solutions in the two-lane (left) and three-lane (right) scenarios until \(x_{ref}=25\). (b) compares the steering (top) and acceleration (bottom) trajectories for ADMM-NNMPC and NNMPC solutions in the two-lane (left) and three-lane (right) scenarios.

TABLE I: Cost function coefficients and optimization parameter.

| Param | Description | Value |
| --- | --- | --- |
| \(\lambda_{div}\) | Weight on divergence from target lane | 1.0 |
| \(\lambda_{v}\) | Weight on divergence from target speed | 1.0 |
| \(\lambda_{\delta}\) | Weight on steering angle | 0.6 |
| \(\lambda_{a}\) | Weight on acceleration | 0.4 |
| \(\lambda_{\Delta\delta}\) | Weight on steering rate | 0.4 |
| \(\lambda_{\Delta a}\) | Weight on jerk | 0.2 |
| \(\rho\) | ADMM Lagrangian parameter | 100 |

TABLE II: Simulation results for ADMM-NNMPC and NNMPC in the two-lane and three-lane scenarios. \(t_{merge}\) is the number of time steps taken by the ego vehicle to merge into the target lane. \(C_{\max}\) and \(d_{\min}\) are the maximum cost and the minimum distance between the ego vehicle and other vehicles at any point of the simulation, respectively.

### _Limitations and Future Works_

Although we reduce the problem complexity by decomposing it into smaller sub-problems, these sub-problems are still complex, which makes the approach non-scalable. Furthermore, due to the large neural network size and the re-computation of gradients at each iteration, our current implementation runs slower than real-time. Nevertheless, having a slow offline optimization is useful, as it can serve as a benchmark when developing faster heuristic methods; ideally, however, we would like to increase the efficiency.
Our approach can be made faster by training another neural network to estimate the original neural network's gradients and by developing faster optimization libraries. Thus, future works include: (i) designing a smaller network trained with knowledge distillation [27], or (ii) expediting the neural network's gradient estimation using an offline-trained function approximator such as a neural network.

## VI Conclusions

Given the importance of interaction-aware motion planning strategies, e.g., for lane changing in dense traffic for autonomous vehicles, this paper investigates mathematical solutions of a model predictive control with a neural network that estimates interactive behaviors. The problem is highly complex due to the non-convexity of the neural network, and we show that it can be effectively solved by decomposing it into sub-problems by leveraging the alternating direction method of multipliers (ADMM). This paper further examines the convergence of ADMM in the presence of the neural network, which is one of the first attempts in the literature. The simple numerical study supports the effectiveness of the provably optimal solutions. The computational burden due to the complexity is still a limitation, and improving the computational efficiency remains for future work. That said, having a provably optimal solution is valuable as a benchmark when developing heuristic methods.

### _Proof of Lemma 1_

\(C\) in (4) is a lower triangular matrix with diagonal entries \(-1\). Hence, \(C\) is a full-rank matrix of rank \(4T_{p}\), and \(\text{Im}(C)=\mathbb{R}^{4T_{p}}\). We have \(\text{Im}(Q)=\{y\in\mathbb{R}^{4T_{p}}|\ y=Qx=[A,B]x\text{ such that }x\in\mathbb{R}^{2T_{p}}\}\subseteq\mathbb{R}^{4T_{p}}=\text{Im}(C)\).

### _Proof of Lemma 2_

\(A\) and \(B\) are full column rank matrices of column rank \(T_{p}\). Furthermore, \(C\) is a full-rank matrix of rank \(4T_{p}\). Therefore, their null spaces are trivial, and hence, \(H_{1},H_{2},H_{3}\) reduce to linear operators and satisfy the Lemma.

### _Proof of Lemma 3_

\(\Phi_{1}(\mathbf{\Delta})\), \(\Phi_{2}(\mathbf{\alpha})\), and \(\Phi_{3}(\mathbf{Z})\) are \(C^{2}\) functions, and hence, Lipschitz differentiable. Therefore, to show the Lipschitz differentiability of \(J\), it is sufficient to show that \(b_{i}(\mathbf{Z})\), \(i\in\mathcal{V}\), is Lipschitz differentiable for any \(\tau\in\{1,\dots,T_{p}\}\). For brevity, we define our notation in terms of \(w\in\{x,y\}\), where \(w\) can either be \(x\) or \(y\). Let \(q_{w}(\tau):=2(w(\tau)-\phi_{i,w}(\tau-1))\). We have

\[\frac{\partial b_{i}(\mathbf{Z})}{\partial x(k)}=\begin{cases}-q_{x}(\tau)\frac{\partial\phi_{i,x}(\tau-1)}{\partial x(k)}-q_{y}(\tau)\frac{\partial\phi_{i,y}(\tau-1)}{\partial x(k)},&\text{for }k\leq\tau-1\\ q_{x}(\tau),&\text{for }k=\tau\\ 0,&\text{for }k\in\{\tau+1,\dots,T_{p}\}.\end{cases}\]

Let \(T_{k}^{w}:=\left|\frac{\partial b_{i}(\mathbf{Z}_{1})}{\partial w(k)}-\frac{\partial b_{i}(\mathbf{Z}_{2})}{\partial w(k)}\right|\) for some \(\mathbf{Z}_{1},\mathbf{Z}_{2}\in\mathcal{Z}\), and let \((x^{m}(\tau),y^{m}(\tau))\) denote the ego vehicle positions in \(\mathbf{Z}_{m}\), where \(m\in\{1,2\}\). Let \(\phi_{i,w}^{\mathbf{Z}_{m}}\) denote \(\phi_{i,w}\) corresponding to \(\mathbf{Z}_{m}\). Using assumption (A2) and the mean-value theorem [28], the neural network's outputs are Lipschitz continuous, i.e., \(\|\phi_{i,w}^{\mathbf{Z}_{1}}-\phi_{i,w}^{\mathbf{Z}_{2}}\|\leq\theta_{w}\|\mathbf{Z}_{1}-\mathbf{Z}_{2}\|\).
Let \(\Delta w(\tau)=|w^{1}(\tau)-w^{2}(\tau)|\), \(\varphi_{w}(\tau-1)=|\phi_{i,w}^{\mathbf{Z}_{2}}(\tau-1)-\phi_{i,w}^{\mathbf{Z}_{1}}(\tau-1)|\), and \(\nu_{x}^{w}(\tau-1)=\left|\frac{\partial\phi_{i,w}^{\mathbf{Z}_{1}}(\tau-1)}{\partial x(k)}-\frac{\partial\phi_{i,w}^{\mathbf{Z}_{2}}(\tau-1)}{\partial x(k)}\right|\). For any \(k\in\{1,\dots,\tau-1\}\): \[T_{k}^{x}\leq 2\Delta x(\tau)\Bigg{|}\frac{\partial\phi_{i,x}^{\mathbf{Z}_{1}}(\tau-1)}{\partial x(k)}\Bigg{|}+2\Delta y(\tau)\Bigg{|}\frac{\partial\phi_{i,y}^{\mathbf{Z}_{1}}(\tau-1)}{\partial x(k)}\Bigg{|}+2|x^{2}(\tau)|\nu_{x}^{x}(\tau-1)+2\varphi_{x}(\tau-1)\Bigg{|}\frac{\partial\phi_{i,x}^{\mathbf{Z}_{2}}(\tau-1)}{\partial x(k)}\Bigg{|}+2|y^{2}(\tau)|\nu_{x}^{y}(\tau-1)+2\varphi_{y}(\tau-1)\Bigg{|}\frac{\partial\phi_{i,y}^{\mathbf{Z}_{2}}(\tau-1)}{\partial x(k)}\Bigg{|}+2|\phi_{i,x}^{\mathbf{Z}_{1}}(\tau-1)|\nu_{x}^{x}(\tau-1)+2|\phi_{i,y}^{\mathbf{Z}_{1}}(\tau-1)|\nu_{x}^{y}(\tau-1)\] \[\leq 2\theta_{x}\Delta x(\tau)+2x_{max}\nu_{x}^{x}(\tau-1)+2\theta_{x}\varphi_{x}(\tau-1)+2s_{x}\nu_{x}^{x}(\tau-1)+2\theta_{y}\Delta y(\tau)+2y_{max}\nu_{x}^{y}(\tau-1)+2\theta_{y}\varphi_{y}(\tau-1)+2s_{y}\nu_{x}^{y}(\tau-1)=L_{1}||\mathbf{Z}_{1}-\mathbf{Z}_{2}||,\] where \(L_{1}:=2(\theta_{x}(1+\theta_{x})+\theta_{y}(1+\theta_{y})+(x_{max}+y_{max}+s_{x}+s_{y})L_{\nabla\phi})\), and \(x_{\max}\) and \(y_{\max}\) are the bounds on the ego vehicle's \(x\) and \(y\) coordinates, respectively. Similarly, for \(k=\tau\), we have: \[T_{k}^{x}\leq 2|x^{2}(\tau)-x^{1}(\tau)|+2|\phi_{i,x}^{\mathbf{Z}_{1}}(\tau-1)-\phi_{i,x}^{\mathbf{Z}_{2}}(\tau-1)|\leq L_{2}||\mathbf{Z}_{1}-\mathbf{Z}_{2}||,\] where \(L_{2}=2(1+\theta_{x})\). Similarly, \(T_{k}^{y}\leq L_{1}||\mathbf{Z}_{1}-\mathbf{Z}_{2}||\) for any \(k\in\{1,\dots,\tau-1\}\), and \(T_{k}^{y}\leq L_{3}||\mathbf{Z}_{1}-\mathbf{Z}_{2}||\), where \(L_{3}=2(1+\theta_{y})\), for \(k=\tau\). Therefore, \(||\nabla b_{i}(\mathbf{Z}_{1})-\nabla b_{i}(\mathbf{Z}_{2})||\leq L_{g}||\mathbf{Z}_{1}-\mathbf{Z}_{2}||\), where \(L_{g}=T_{p}(\max\{L_{1},L_{2}\}+\max\{L_{1},L_{3}\})\). Hence, \(J(\mathbf{\Delta},\mathbf{\alpha},\mathbf{Z})\) in (10) is Lipschitz differentiable. ### _Proof of Theorem 1_ Since \(C\) is a full rank matrix, \(\text{Im}(C)=\mathbb{R}^{4T_{p}}\), and hence \(D\in\text{Im}(C)\). Recall that the feasible sets for \(\mathbf{\Delta}\), \(\mathbf{\alpha}\), and \(\mathbf{Z}\) are bounded, i.e., \(\mathbf{\Delta}\in\mathcal{D},\mathbf{\alpha}\in\mathcal{A}\), and \(\mathbf{Z}\in\mathcal{Z}\). Using these results and Lemmas 1-3, the optimization problem satisfies all the assumptions required for convergence of ADMM in non-convex and non-smooth optimization [29]. Utilizing [29, Theorem 2] proves the convergence of Algorithm 1 for any sufficiently large \(\rho>\max\{1,(1+2\sigma_{\min}(C))L_{J}M\}\).
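To make the structure of the decomposition concrete, the following is a minimal Python sketch of a three-block, scaled-form ADMM loop of the kind Algorithm 1 builds on. The coupling constraint \(A\mathbf{\Delta}+B\mathbf{\alpha}+C\mathbf{Z}=D\) is inferred from Lemmas 1 and 2 and Theorem 1, and the `prox_*` sub-problem solvers (the \(\mathbf{Z}\) block being the one that contains the neural network) are assumed placeholders, not the paper's implementation.

```
import numpy as np

def admm_three_block(prox_delta, prox_alpha, prox_z, A, B, C, D,
                     rho=100.0, iters=50):
    """Scaled-form ADMM skeleton. Each prox_* is assumed to minimize the
    augmented Lagrangian over its own block with the other two held fixed
    (e.g., by gradient steps through the neural network for the Z block)."""
    delta = np.zeros(A.shape[1])   # steering trajectory block
    alpha = np.zeros(B.shape[1])   # acceleration trajectory block
    z = np.zeros(C.shape[1])       # state trajectory block (contains the NN)
    u = np.zeros(C.shape[0])       # scaled dual variable of the coupling constraint
    for _ in range(iters):
        delta = prox_delta(alpha, z, u, rho)
        alpha = prox_alpha(delta, z, u, rho)
        z = prox_z(delta, alpha, u, rho)
        # dual ascent on the (assumed) constraint A*Delta + B*alpha + C*Z = D
        u = u + (A @ delta + B @ alpha + C @ z - D)
    return delta, alpha, z
```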
2304.06044
Learning solution of nonlinear constitutive material models using physics-informed neural networks: COMM-PINN
We applied physics-informed neural networks to solve the constitutive relations for nonlinear, path-dependent material behavior. As a result, the trained network not only satisfies all thermodynamic constraints but also instantly provides information about the current material state (i.e., free energy, stress, and the evolution of internal variables) under any given loading scenario without requiring initial data. One advantage of this work is that it bypasses the repetitive Newton iterations needed to solve nonlinear equations in complex material models. Additionally, strategies are provided to reduce the required order of derivative for obtaining the tangent operator. The trained model can be directly used in any finite element package (or other numerical methods) as a user-defined material model. However, challenges remain in the proper definition of collocation points and in integrating several non-equality constraints that become active or non-active simultaneously. We tested this methodology on rate-independent processes such as the classical von Mises plasticity model with a nonlinear hardening law, as well as local damage models for interface cracking behavior with a nonlinear softening law. In order to demonstrate the applicability of the methodology in handling complex path dependency in a three-dimensional (3D) scenario, we tested the approach using the equations governing a damage model for a three-dimensional interface model. Such models are frequently employed for intergranular fracture at grain boundaries. We have observed a perfect agreement between the results obtained through the proposed methodology and those obtained using the classical approach. Furthermore, the proposed approach requires significantly less effort in terms of implementation and computing time compared to the traditional methods.
Shahed Rezaei, Ahmad Moeineddin, Ali Harandi
2023-04-10T19:58:49Z
http://arxiv.org/abs/2304.06044v2
Learning solution of nonlinear constitutive material models using physics-informed neural networks: COMM-PINN ###### Abstract We applied physics-informed neural networks to solve the constitutive relations for nonlinear, path-dependent material behavior. As a result, the trained network not only satisfies all thermodynamic constraints but also instantly provides information about the current material state (i.e., free energy, stress, and the evolution of internal variables) under any given loading scenario without requiring initial data. One advantage of this work is that it bypasses the repetitive Newton iterations needed to solve nonlinear equations in complex material models. Additionally, strategies are provided to reduce the required order of derivation for obtaining the tangent operator. The trained model can be directly used in any finite element package (or other numerical methods) as a user-defined material model. However, challenges remain in the proper definition of collocation points and in integrating several non-equality constraints that become active or non-active simultaneously. We tested this methodology on rate-independent processes such as the classical von Mises plasticity model with a nonlinear hardening law, as well as local damage models for interface cracking behavior with a nonlinear softening law. Finally, we discuss the potential and remaining challenges for future developments of this new approach. keywords: Physics-informed neural networks, Constitutive relations, Nonlinear material behavior, Path-dependent material models, Finite element analysis ## 1 Introduction Implementing and evaluating nonlinear material models is a challenging yet essential task for achieving reliable predictions in many applications. For example, advanced models may be needed to represent the complex plastic behavior of metals, soils, and polymers, or to account for the softening of materials as damage and microcracks evolve [1; 2; 3; 4; 5]. Typically, modelers use a thermodynamic framework to derive these models, which helps them to set up all the necessary equations and evolution laws for the internal state of the materials. Once this is done, complex codes must be programmed in a specific programming language, which can be a difficult and time-consuming process. To solve material constitutive equations, the iterative Newton method is often used, which can be highly effective. However, this method can also become computationally expensive, as it requires repeating the algorithm for each integration point and nonlinear iteration within the finite element framework. In summary, developing and implementing nonlinear material models require significant expertise and effort, and access to appropriate computational resources. In addition to the previous point, material behavior is strongly influenced by lower scales. Multiscale material modeling involves solving a boundary value problem at the microscale. Once the microscale solution is obtained, the homogenized results are transferred to the structural problem at the macroscale [6]. However, the transformation of information between scales is a bottleneck in multiscale analysis, as it involves computationally intensive tasks that can be time-consuming. A recent trend in computational material mechanics is to use deep learning (DL) methods such as neural networks (NN) to bypass expensive calculations.
The concept involves training a NN on a sufficient amount of data and using the trained NN as a surrogate model for a specific material system [7; 8; 9]. The success of this approach relies heavily on the quality and quantity of the available data as well as the machine learning optimizer used. For more information, readers can refer to review articles [10; 11; 12], as well as research and review papers in [13; 14; 15; 16; 17]. In the previous approach, the predictions of the neural network (NN) are solely based on the available data, which raises concerns about the reliability of the NN for cases outside the training range. However, recent studies have shown that it is possible to improve the NN's predictive accuracy for unseen situations by incorporating thermodynamic constraints into the training process [18]. Liu et al. [19] developed a data-driven multiscale material modeling method that integrates principles of homogenization theory and essential physics with machine learning techniques. See also [20] for a further extension of the method to reproduce the creep behavior of fiber-reinforced materials. Heider et al. [21] discussed the application of informed graph-based neural networks to anisotropic elasto-plastic behavior and proposed remedies to generate a frame-invariant machine-learning constitutive model. Flaschel et al. [22] developed an approach for the unsupervised discovery of material laws belonging to an unknown class of constitutive behavior (such as viscoelasticity and elastoplasticity). The main idea is to obtain the Helmholtz free energy and the dissipation potential from data on displacements and reaction forces. Fuhg et al. [23] discussed a modular elastoplasticity formulation where different components of the model can be chosen to be either phenomenological or data-driven, depending on the amount of available information. Weber et al. [24] improved NN training with physical constraints for hyperelastic materials and showed that the introduced enhancements lead to better learning behavior. Kalina et al. [25] presented a data-driven multiscale framework utilizing physics-constrained NN for finite strain hyperelasticity problems. The idea is to set the Helmholtz free energy density as the output and train the NN by using a set of invariants as the input. As a result, several physical principles are fulfilled automatically. See also investigations by [26] on polyconvex neural network constitutive models. Masi et al. [27] proposed a thermodynamics-based NN for constitutive modeling. The authors proposed to learn the free energy of the material as a function of strain and internal state variables. See also investigations by [28] on learning the effective strain energy of elastic cellular metamaterials. Masi and Stefanou [29] also extended their methodology to allow for identifying, from data and first principles, admissible sets of internal variables in complex materials. In addition to feed-forward NNs, other DL methods such as recurrent neural networks (RNN) are also suitable choices for path-dependent material behavior [11; 30; 31]. He and Chen [32] proposed a data-driven constitutive modeling approach with the consideration of thermodynamics principles for path-dependent materials. Bonatti and Mohr [33] considered an elastoplastic example and proposed an RNN architecture that respects self-consistency and performs best when applied to long sequences of small increments. Koeppe et al. [34] proposed a physics-explaining approach, which interprets RNNs.
The authors investigated case studies on elastoplasticity and viscoelasticity constitutive models. Danoun et al. [35] presented a hybrid physics-AI-based model to predict non-linear mechanical behaviors via RNN where specific thermodynamical constraints are considered during the training phase. See also developments by [36], [37] and [38]. In the reviewed articles, one requires enough data to train the NN. Following the idea of physics-informed neural networks (PINN) [39], one can employ the physical laws as loss functions to enhance the network outcome [40; 41]. Once the underlying physics of the problem is completely known, the PINN can be trained without any initial data and solely based on the given set of equations [42; 43; 44]. Wei et al. [45] established a consistent neural network approach to model the constitutive behavior of interfaces by integrating physical conditions, such as positive energy dissipation, as additional training constraints in the loss function. Haghighat et al. [46] presented a physics-informed neural network formulation for constitutive modeling where inequality constraints of elastoplasticity theory are embedded in the PINN loss functions. In [47], the authors illustrated an extension of PINN in solid mechanics for the case of von Mises elastoplasticity and showed that the PINN model can accurately predict the solution for a wide range of parameters. See also investigations by [48] for the extension to finite strain elastoplasticity. Eghbalian et al. [49] presented a NN architecture that is enriched with the physics of elastoplasticity and showed that embedding this aspect enhanced the extrapolation capability for loading regimes outside the training data. The reviewed works make significant contributions in two main categories. In the first group, NNs are trained on sufficient data obtained from numerical simulations or experimental measurements. To improve predictive accuracy, physical constraints are incorporated into these methods. In the second group, authors applied PINN techniques to study nonlinear material behavior for specific boundary value problems or for the characterization of constitutive models. While there has been previous research on using machine learning methods in material modeling, the specific combination of physics-informed neural networks and the ability to satisfy thermodynamic constraints and predict real-time material states without requiring initial data is a novel contribution to the field (see Fig. 1). It is worth mentioning that a simple FFNN will be used in this work, which can be integrated into any FE computation. Moreover, automatic differentiation will be used to bypass the differentiation steps related to the tangent operator. In the following sections, we will summarize the steps for deriving a consistent nonlinear material model. We will also present case studies that involve elastoplastic behavior with a nonlinear isotropic hardening law, as well as a local damage model with a nonlinear softening law applied to model interface cracking. Finally, we will discuss the results, draw conclusions, and suggest possible future developments. Figure 1: Utilizing neural networks to substitute the return mapping algorithm in numerical material modeling. ## 2 Obtaining consistent constitutive relations and solving them with PINN We start with the second law of thermodynamics in terms of the Clausius-Duhem inequality.
The local form of dissipation inequality for the case of small deformation, pure mechanical case, and an isothermal process reads [4] \[-\dot{\psi}(\boldsymbol{\varepsilon},\boldsymbol{\xi}_{k})+\boldsymbol{\sigma} :\dot{\boldsymbol{\varepsilon}}\stackrel{{!}}{{\geq}}0. \tag{1}\] Here, \(\psi=\psi(\boldsymbol{\varepsilon},\boldsymbol{\xi}_{k})\) is the free energy of the material, \(\boldsymbol{\sigma}\) is the stress tensor, \(\boldsymbol{\xi}_{k}\) represent a set of internal (state) variables and \(\boldsymbol{\varepsilon}\) is the strain tensor. The above inequality is simplified as \[(\underbrace{\boldsymbol{\sigma}-\partial_{\boldsymbol{\varepsilon}}\psi}_{= \boldsymbol{0}})\ \dot{\boldsymbol{\varepsilon}}-\Sigma_{k}\underbrace{\partial_{\boldsymbol{ \xi}_{k}}\psi}_{=\boldsymbol{q}_{k}}\ \dot{\boldsymbol{\xi}}_{k}\stackrel{{!}}{{\geq}}0. \tag{2}\] Next, the thermodynamic conjugate forces follow automatically as \[\boldsymbol{\sigma}=\partial_{\boldsymbol{\varepsilon}}\psi,\quad\boldsymbol{q }_{k}=\partial_{\boldsymbol{\xi}_{k}}\psi. \tag{3}\] The remaining part of the dissipation inequality (i.e. \(\mathcal{D}=-\Sigma_{k}\boldsymbol{q}_{k}\ \dot{\boldsymbol{\xi}}_{k} \stackrel{{!}}{{\geq}}0\)), still has to be satisfied for an arbitrary process. To assure a positive dissipation rate and model the nonlinear material response, usually one introduces proper yield criterion \(YLD(\boldsymbol{\xi}_{k},...)\) as well as evolution equations \(EVL(\boldsymbol{\xi}_{k},...)\) for the state variables. In the following sections, we will investigate this part in more detail for two practical examples of plasticity and damage models. At this point, we have all the ingredients to set up a neural network for predicting nonlinear material behavior. In Fig. 2, three architectures are proposed for solving equations in an arbitrary constitutive material model with physics informed neural networks (COMM-PINN). Next, we will examine the pros and cons of each suggested network. In all cases, the user defines the free energy function and the required state variables. Furthermore, the inputs of the network are the current strain and the history of the material. For case A, only the state variables are the outputs. All the thermodynamic conjugate forces (\(\boldsymbol{\sigma}^{i+1}\) and \(\boldsymbol{q}_{k}^{i+1}\)) and the consistent tangent operator \(\mathbb{C}^{i+1}=\mathrm{d}\boldsymbol{\sigma}^{i+1}/\mathrm{d}\boldsymbol{ \varepsilon}^{i+1}\) are calculated via automatic differentiation (AD). One possible drawback is the requirement for second-order differentiation, for which proper activation functions must be used [27]. Loss functions in this case are based on the yield function and evolution laws. For case B, the thermodynamic conjugate forces are defined as an additional output, and their definitions are used as additional loss functions (see also [50] and references therein for ideas on breaking the order of derivation in PINN). One advantage of the second architecture is that we only require first-order derivations to obtain the tangent operator. However, the network in this case might be denser to handle multiple outputs. In both of the above cases, the only requirement from the user is to define the free energy function, and all the thermodynamic forces, as well as updated state variables, are natural outcomes of the network. Finally, it should be noted that training the NN in the above cases might be very challenging due to the entangled nature of the equations. 
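As an illustration of how the conjugate forces in Eq. (3) and a tangent follow from a scalar free-energy function by automatic differentiation, the snippet below is a minimal sketch assuming a PyTorch-style autograd (the paper's own implementation uses SciANN); the free energy shown anticipates the 1D elastoplastic example of the next section, and the constants are illustrative.

```
import torch

def free_energy(eps, eps_p, xi_p, E=1.0, h1=2.0, h2=100.0):
    # Elastic part plus plastic (Voce-type) hardening part of psi
    psi_e = 0.5 * E * (eps - eps_p) ** 2
    psi_p = h1 * (xi_p + (torch.exp(-h2 * xi_p) - 1.0) / h2)
    return psi_e + psi_p

eps = torch.tensor(0.10, requires_grad=True)
eps_p = torch.tensor(0.02, requires_grad=True)
xi_p = torch.tensor(0.02, requires_grad=True)

psi = free_energy(eps, eps_p, xi_p)
# Conjugate forces via AD: sigma = d(psi)/d(eps), q_p = d(psi)/d(xi_p)
sigma, q_p = torch.autograd.grad(psi, (eps, xi_p), create_graph=True)
# Partial tangent d(sigma)/d(eps) at frozen internal state; the consistent
# tangent additionally requires differentiating the internal variables
# (i.e., the network outputs) with respect to the strain.
C_partial = torch.autograd.grad(sigma, eps)[0]
```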
For case C, we make training easier by taking advantage of the analytical derivatives of the free energy function and feeding the network with the analytical expressions for the thermodynamic conjugate forces, such as the stress tensor. Obtaining such derivatives can be a drawback for complicated energy functions, but it is usually straightforward to get such partial derivatives. The advantage is fewer loss terms, less entanglement of equations, and first-order differentiation for obtaining the tangent operator. Figure 2: Three suggestions as potential neural network architectures and loss functions for solving constitutive material equations via physics-informed neural networks (COMM-PINN). ## 3 A case study on von Mises plasticity model ### Model derivation We limit ourselves to a 1D setting for small deformation. Nevertheless, the current investigations can be extended to higher dimensions and also to case studies where more nonlinear behavior is expected. The strain variable \(\varepsilon\) is defined as the derivative of the displacement \(u\) with respect to the coordinate \(x\) (i.e. \(\varepsilon=\mathrm{d}u/\mathrm{d}x\)). In the case of elastoplastic behavior, the strain variable is additively decomposed into an elastic and a plastic part as \[\varepsilon=\varepsilon_{e}+\varepsilon_{p}. \tag{4}\] The Helmholtz free energy \(\psi\) is a contribution of the elastic part (\(\psi_{e}\)) and the plastic part (\(\psi_{p}\)). Therefore, we write the following expressions \[\psi(\varepsilon,\varepsilon_{p},\xi_{p})=\psi_{e}(\varepsilon,\varepsilon_{p})+\psi_{p}(\xi_{p}), \tag{5}\] \[\psi_{e}\left(\varepsilon,\varepsilon_{p}\right)=\frac{1}{2}E(\varepsilon-\varepsilon_{p})^{2}, \tag{6}\] \[\psi_{p}\left(\xi_{p}\right)=h_{1}\left(\xi_{p}+\frac{e^{-h_{2}\xi_{p}}-1}{h_{2}}\right). \tag{7}\] In the elastic part of the energy (\(\psi_{e}\)), we have the material Young's modulus \(E\). Moreover, in the plastic part of the energy we introduce the parameter \(\xi_{p}\), which represents the accumulative plastic strain. Only isotropic hardening is considered, and the constants \(h_{1}\) and \(h_{2}\) are the corresponding plasticity hardening parameters. Using the chain rule of differentiation and considering the additive elastoplastic split of the strain rate, the Clausius-Duhem inequality in its isothermal local form reads \[\left(\sigma-\partial_{\varepsilon}\psi\right)\,\dot{\varepsilon}-\partial_{\varepsilon_{p}}\psi\,\,\dot{\varepsilon}_{p}-\partial_{\xi_{p}}\psi\,\,\dot{\xi}_{p}\stackrel{{!}}{{\geq}}0. \tag{8}\] The above expression must hold for arbitrary thermomechanical processes. A common choice is to set the expressions in the brackets to zero. The state relations of the model as well as the thermodynamic conjugate forces follow automatically as \[\sigma=\partial_{\varepsilon}\psi=\partial_{\varepsilon}\psi_{e}=E(\varepsilon-\varepsilon_{p}), \tag{9}\] \[q_{p}=\partial_{\xi_{p}}\psi=\partial_{\xi_{p}}\psi_{p}=h_{1}\,\,(1-e^{-h_{2}\xi_{p}}). \tag{10}\] The remaining part of the dissipation inequality can now be written as \[\sigma\,\,\dot{\varepsilon}_{p}-q_{p}\,\,\dot{\xi}_{p}\stackrel{{!}}{{\geq}}0. \tag{11}\] The plasticity yield criterion (based on the von Mises model) is written as \[\phi_{p}=|\sigma|-\left(\sigma_{y0}+q_{p}\right).
\tag{12}\] Moreover, the following evolution laws for the plastic internal variables are used: \[\dot{\varepsilon}_{p}=\dot{\lambda}_{p}\,\,\frac{\partial\phi_{p}}{\partial\sigma}=\dot{\lambda}_{p}\,\,\mathrm{sgn}(\sigma), \tag{13}\] \[\dot{\xi}_{p}=-\dot{\lambda}_{p}\,\,\frac{\partial\phi_{p}}{\partial q_{p}}=\dot{\lambda}_{p}. \tag{14}\] Finally, the plastic loading/unloading conditions of the model are taken into account: \[\dot{\lambda}_{p}\geq 0,\quad\phi_{p}\leq 0,\quad\dot{\lambda}_{p}\,\,\phi_{p}=0. \tag{15}\] By considering the relations (13) and (14) as well as the constraints in (15), the inequality (11) is satisfied. ### Implementation and algorithmic aspects for the plasticity model Algorithm 1 summarizes the return mapping algorithm commonly used to solve the plasticity equations. In order to solve for the unknowns, the equations for plasticity (or any other nonlinear material routine) must be linearized, which is accomplished using a local Newton solver at each integration point of the finite element method. Additionally, the user must derive equations for the tangent operator to be used in finite element programs. These tasks must be repeated for each global iteration step at the structural level and for each integration point in the domain, making them computationally expensive. ``` Require: material parameters, current strain: \(\varepsilon^{i+1}\), history variables: \(\varepsilon^{i}_{p}\), \(\xi^{i}_{p}\) Ensure: current stress: \(\sigma^{i+1}\), current history variables: \(\varepsilon^{i+1}_{p}\), \(\xi^{i+1}_{p}\), tangent: \(C^{i+1}=\mathrm{d}\sigma^{i+1}/\mathrm{d}\varepsilon^{i+1}\) 1: \(k=1\) 2: Set trial values: \(\varepsilon^{i+1,k}_{p}=\varepsilon^{i}_{p}\), \(\xi^{i+1,k}_{p}=\xi^{i}_{p}\) 3: loop plasticity solver do 4: if \(\phi^{tr}_{p}=|E(\varepsilon^{i+1}-\varepsilon^{i}_{p})|-\left(\sigma_{y0}+q_{p}(\xi^{i}_{p})\right)\leq 0\) 5: \(\varepsilon^{i+1}_{p}=\varepsilon^{i}_{p}\) and \(\xi^{i+1}_{p}=\xi^{i}_{p}\) and \(C^{i+1}=E\) 6: else solve for \(\varepsilon^{i+1,k+1}_{p}\) and \(\xi^{i+1,k+1}_{p}=\xi^{i}_{p}+\Delta\lambda^{k+1}_{p}\) (see Eqs. 12, 13) 7: \(r^{(1)}_{p}=\varepsilon^{i+1,k+1}_{p}-\varepsilon^{i}_{p}-(\xi^{i+1,k+1}_{p}-\xi^{i}_{p})\frac{\partial\phi_{p}}{\partial\sigma}\stackrel{{!}}{{=}}0\) 8: \(r^{(2)}_{p}=\phi^{i+1}_{p}=|\sigma^{i+1,k+1}|-\left(\sigma_{y0}+q_{p}\left(\xi^{i+1}_{p}\right)\right)\stackrel{{!}}{{=}}0\) 9: \([\varepsilon^{i+1,k+1}_{p}\quad\xi^{i+1,k+1}_{p}]^{T}=[\varepsilon^{i+1,k}_{p}\quad\xi^{i+1,k}_{p}]^{T}-\mathbf{K}^{-1}_{p}\mathbf{r}_{p}\) 10: end if 11: if \(\left|\varepsilon^{i+1,k+1}_{p}-\varepsilon^{i+1,k}_{p}\right|>\mathrm{tol}\) OR \(\left|\xi^{i+1,k+1}_{p}-\xi^{i+1,k}_{p}\right|>\mathrm{tol}\) 12: \(k++\) 13: CYCLE loop plasticity solver 14: else 15: \(\varepsilon^{i+1}_{p}=\varepsilon^{i+1,k+1}_{p}\), \(\xi^{i+1}_{p}=\xi^{i+1,k+1}_{p}\) 16: EXIT loop plasticity solver 17: end if 18: end loop 19: Compute stress \(\sigma^{i+1}=E(\varepsilon^{i+1}-\varepsilon^{i+1}_{p})\) (see Eq. 9) 20: Compute tangent \(C^{i+1}=E(1-\frac{E}{E+h_{1}h_{2}\ \exp(-h_{2}\xi^{i+1}_{p})})\) ``` **Algorithm 1** Solving the elastoplastic governing equations at pseudo time \(t^{i+1}\) The superindex \(k\) in Algorithm 1 denotes the \(k\)-th iteration in the internal loop _plasticity solver_. Furthermore, we used the notation \(\Delta\lambda_{p}=\Delta t\ \dot{\lambda}_{p}\), where \(\Delta t\) is the pseudo time step.
Once the plasticity is active (\(\phi_{p}>0\)), the plastic residuals (\(r^{(1)}_{p}\) and \(r^{(2)}_{p}\)) are linearized according to the Newton-Raphson method to find the solution of the unknowns (\(\varepsilon^{i+1,k+1}_{p}\) and \(\xi^{i+1,k+1}_{p}\)). The matrix \(\mathbf{K}_{p}=\partial\mathbf{r}_{p}/\partial\mathbf{U}_{p}\) includes the derivatives of the plastic residual vector \(\mathbf{r}_{p}\) with respect to the unknowns (\(\mathbf{U}_{p}=\{\varepsilon^{i+1}_{p},\xi^{i+1}_{p}\}\)). If the changes in the solution of the unknown internal variables are smaller than a certain tolerance (\(\mathrm{tol}=10^{-10}\)), the obtained results are considered as the converged values and will be used to compute the stress value and the tangent operator. Finally, we are able to set up the global residuals at the finite element level [4; 51]. ### PINN to solve for elasto-plastic behavior with nonlinear isotropic hardening law Based on the methodologies proposed by Raissi et al. [39], the input for the network is the location of the collocation points. In the context of material modeling, the collocation points include the information on the current loading state (applied strain/gap) as well as admissible values for the history variables (i.e. plastic strain, damage, etc.). Here we propose to use architecture C according to Fig. 2. Therefore, the output layer includes the current (updated) history variables (i.e. new plastic strain, damage, etc.). The other quantities, such as stress, energy, and the tangent operator, are then calculated based on the known free energy equation of the material. In other words, we do not need any additional NN for computing those quantities. The structure of each neural network takes the standard form, where it can be split into a single input layer, several possible hidden layers, and the output layer. Each layer is connected to the next layer for transferring information [52]. In every single layer, there is no connection among its neurons. Therefore, we represent the information passed from the \(l-1\) layer to \(l\) via the vector \(\mathbf{z}^{l}\). Every component of vector \(\mathbf{z}^{l}\) is computed by \[z_{m}^{l}=a(\sum_{n=1}^{N_{l}}w_{mn}^{l}z_{n}^{l-1}+b_{m}^{l}),\quad l=1,\dots,L. \tag{16}\] In Eq. (16), \(z_{n}^{l-1}\) is the \(n\)-th neuron within the \(l-1\)-th layer. The component \(w_{mn}\) shows the connection weight between the \(n\)-th neuron of the layer \(l-1\) and the \(m\)-th neuron of the layer \(l\). Every individual neuron in the \(l\)-th hidden layer has a bias variable \(b_{m}^{l}\). The number \(N_{l}\) corresponds to the number of neurons in the \(l\)-th hidden layer. The total number of hidden layers is \(L\). The letter \(a\) stands for the activation function in each neuron. The activation function \(a(\cdot)\) is usually a non-linear function. The proper choice of the activation function is problem dependent and shall be obtained based on hyperparameter studies [42], [53]. For the case of plasticity, the input layer contains \(\varepsilon^{i+1}\), \(\varepsilon_{p}^{i}\) and \(\xi_{p}^{i}\). We denote the input layer via the vector \(\mathbf{X}=\{\varepsilon^{i+1},\varepsilon_{p}^{i},\xi_{p}^{i}\}\). The output layer is written as \(\mathbf{Y}=\{\varepsilon_{p}^{i+1},\xi_{p}^{i+1}\}\). We use a separate fully connected FFNN for each output variable (see also Fig. 3).
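A minimal sketch of such a per-output feed-forward network, Eq. (16), is given below, assuming a PyTorch implementation; the class name is illustrative, and the default layer and neuron counts follow the values used later in the hyperparameter studies.

```
import torch.nn as nn

class SubNet(nn.Module):
    """One fully connected FFNN per output variable, cf. Eq. (16)."""
    def __init__(self, n_in=3, n_neurons=100, n_layers=5):
        super().__init__()
        layers, width = [], n_in
        for _ in range(n_layers):
            # z^l = a(W^l z^{l-1} + b^l), here with a = Relu
            layers += [nn.Linear(width, n_neurons), nn.ReLU()]
            width = n_neurons
        layers.append(nn.Linear(width, 1))  # e.g. eps_p^{i+1} or xi_p^{i+1}
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```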
The trainable set of parameters of the network is represented by \(\mathbf{\theta}=\{\mathbf{W},\mathbf{b}\}\), which are the weights and biases of the neural network, respectively. Considering each neural network structure as \(\mathcal{N}\), the outcome of the constitutive material modeling via the neural network reads \[\varepsilon_{p}^{i+1}=\mathcal{N}_{\varepsilon_{p}}(\mathbf{X};\mathbf{\theta}),\quad\quad\xi_{p}^{i+1}=\mathcal{N}_{\xi_{p}}(\mathbf{X};\mathbf{\theta}). \tag{17}\] The neural network outputs are functions of the trainable parameters, and the training is done by minimizing physical loss functions. Next, we build the residuals and conditions for the introduced plasticity model in terms of the defined input and output layers. To do so, one needs to integrate the yield function as well as the evolution equations for the internal variables into loss functions for the neural networks. By denoting the summation of the total loss terms for plasticity by \(\mathcal{L}_{pt}\), and based on Alg. 1, one defines \(\mathcal{L}_{pt}\) as \[\mathcal{L}_{pt}=\underbrace{\mathcal{L}_{uep}+\mathcal{L}_{uxp}}_{\text{elastic response}}+\underbrace{\mathcal{L}_{evp}+\mathcal{L}_{ylp}}_{\text{plastic evolution}}+\underbrace{\mathcal{L}_{kep}+\mathcal{L}_{kyp}}_{\text{KKT conditions}}. \tag{18}\] In Eq. (18), the different loss terms cover all the possible loading, unloading, and reloading scenarios. The first two terms (\(\mathcal{L}_{uep}\) and \(\mathcal{L}_{uxp}\)) guarantee that there is no evolution of the plastic strain and the accumulative plastic strain when the trial yield function (\(\phi_{p}^{tr}\)) is negative. Once \(\phi_{p}^{tr}>0\), the main plastic residuals for the evolution law as well as the current yield function become active, which are denoted by \(\mathcal{L}_{evp}\) and \(\mathcal{L}_{ylp}\), respectively. Finally, to make sure that the KKT conditions are always satisfied, we have the last two loss terms (\(\mathcal{L}_{kep}\) and \(\mathcal{L}_{kyp}\)). All these loss terms are summarized in what follows \[\mathcal{L}_{uep}=\text{MSE}\left((\varepsilon_{p}^{i+1}-\varepsilon_{p}^{i})\text{Relu}(-\phi_{p}^{tr})\right), \tag{19}\] \[\mathcal{L}_{uxp}=\text{MSE}\left((\xi_{p}^{i+1}-\xi_{p}^{i})\text{Relu}(-\phi_{p}^{tr})\right), \tag{20}\] \[\mathcal{L}_{evp}=\text{MSE}\left((\varepsilon_{p}^{i+1}-\varepsilon_{p}^{i}-(\xi_{p}^{i+1}-\xi_{p}^{i})\text{sgn}(E(\varepsilon^{i+1}-\varepsilon_{p}^{i+1})))\text{Relu}(\phi_{p}^{tr})\right), \tag{21}\] \[\mathcal{L}_{ylp}=\text{MSE}\left(\left(\text{abs}(E(\varepsilon^{i+1}-\varepsilon_{p}^{i+1}))-(\sigma_{y0}+h_{1}(1-\exp(-h_{2}\xi_{p}^{i+1})))\right)\text{Relu}(\phi_{p}^{tr})\right), \tag{22}\] \[\mathcal{L}_{kep}=\text{MSE}\left(\text{Relu}(\phi_{p}^{i+1})\right), \tag{23}\] \[\mathcal{L}_{kyp}=\text{MSE}\left(\text{Relu}(-\xi_{p}^{i+1}+\xi_{p}^{i})\right). \tag{24}\] In the above relations, \(\phi_{p}^{tr}\) is the so-called trial yield function, which is evaluated by means of the (known) quantities at the input layer (see also Alg. 1): \[\phi_{p}^{tr}=\text{abs}(E(\varepsilon^{i+1}-\varepsilon_{p}^{i}))-\left(\sigma_{y0}+h_{1}(1-\exp(-h_{2}\xi_{p}^{i}))\right). \tag{25}\] For completeness, the expressions "abs" and "sgn" represent the absolute value and the sign function, respectively. The mean squared error is defined as \[\text{MSE}(\bullet)=\frac{1}{n}\sum_{i=1}^{n}(\bullet)^{2}, \tag{26}\] where \(n\) is the number of observation points.
The final loss term is minimized at every single collocation point. The mathematical optimization problem is written as \[\mathbf{\theta}^{*}=\arg\min_{\mathbf{\theta}}\mathcal{L}_{pt}(\mathbf{X};\mathbf{\theta}), \tag{27}\] where \(\mathbf{\theta}^{*}\) are the optimal trainable parameters (weights and biases) of the network. **Remark 1** In the above loss terms, the "Relu" function acts as a switch (if condition). In other words, we differentiate between elastic loading/unloading and the evolution of the plastic variables via the sign of \(\phi_{p}^{tr}\). See also the same procedure in classical iterative solvers (i.e. Alg. 1). One can also use other options, such as the "sgn" function, for this purpose. Based on our studies, we realized that utilizing "Relu" functions is more beneficial. In Fig. 3, we summarize all loss terms and the idea behind constitutive material modeling via physics-informed neural networks. For the sake of clarity, all the input variables are denoted in blue color while the output (unknown) variables are presented in red color. Note that the material properties are, for the time being, kept constant and represented in green color. After the training is completed, the network predicts the unknown internal variables (i.e. \(\varepsilon_{p}^{i+1}\) and \(\xi_{p}^{i+1}\) for the case of elastoplasticity). Having the updated internal variables as well as the current strain (\(\varepsilon^{i+1}\)), one can construct the free energy function (\(\psi^{i+1}\)) as well as the stress value (\(\sigma^{i+1}\)). Note that the functional form of the stress can be obtained by analytical differentiation of the energy function with respect to the strain, in accordance with the thermodynamical framework [27; 25]; see also architecture C in Fig. 2. The final important parameter for the FE calculations is the tangent operator, which is defined as the full derivative of the current stress value with respect to the current strain value. This parameter is obtained via automatic differentiation of the trained network. The main difference of the current work compared to other studies (especially those in [27]) is the fact that here we do not provide any data to the FFNN. **Remark 2** Although the material properties are kept constant in this study, one can also include them as additional input parameters. As a result, the NN learns the solution to the elastoplastic problem for a vast range of given material properties. See investigations by [42; 54] in the direction of using transfer learning to include the influence of material properties. **Remark 3** In the above loss functions, one can add the contribution of data in the following form: \(\mathcal{L}_{data}=\text{MSE}\left(\varepsilon_{p}^{i+1}-\varepsilon_{alg}^{i+1}\right)+\text{MSE}\left(\xi_{p}^{i+1}-\xi_{alg}^{i+1}\right)\), where the subindex \(alg\) represents the data coming from the return-mapping algorithm. We examine and compare such network designs in the results section. **Remark 4** It is possible to derive the terms for the tangent operator analytically. The advantage is that one can insert the current values of strain and other state variables to find these quantities without running into the problem of vanishing gradients. The disadvantage is the possible difficulty in obtaining the analytical derivatives. In the case of complicated differentiation, one can also use recent advances in parametric differentiation to avoid tedious derivations.
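For concreteness, the following is a minimal sketch of how the loss terms in Eqs. (19)-(24) could be assembled, assuming a PyTorch-style implementation; the function and tensor names are illustrative, the material constants are placeholders, and the switching follows the trial yield function of Eq. (25).

```
import torch

def plasticity_loss(eps, eps_p_old, xi_p_old, eps_p_new, xi_p_new,
                    E=1.0, sig_y0=0.2, h1=2.0, h2=100.0):
    relu = torch.relu
    # Trial yield function, Eq. (25), from the known input quantities
    phi_tr = torch.abs(E * (eps - eps_p_old)) \
        - (sig_y0 + h1 * (1.0 - torch.exp(-h2 * xi_p_old)))
    sig_new = E * (eps - eps_p_new)
    phi_new = torch.abs(sig_new) \
        - (sig_y0 + h1 * (1.0 - torch.exp(-h2 * xi_p_new)))
    L_uep = ((eps_p_new - eps_p_old) * relu(-phi_tr)).pow(2).mean()  # Eq. (19)
    L_uxp = ((xi_p_new - xi_p_old) * relu(-phi_tr)).pow(2).mean()    # Eq. (20)
    L_evp = ((eps_p_new - eps_p_old
              - (xi_p_new - xi_p_old) * torch.sign(sig_new))
             * relu(phi_tr)).pow(2).mean()                           # Eq. (21)
    L_ylp = (phi_new * relu(phi_tr)).pow(2).mean()                   # Eq. (22)
    L_kep = relu(phi_new).pow(2).mean()                              # Eq. (23)
    L_kyp = relu(xi_p_old - xi_p_new).pow(2).mean()                  # Eq. (24)
    return L_uep + L_uxp + L_evp + L_ylp + L_kep + L_kyp
```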
Figure 3: Network architecture and loss functions for the COMM-PINN applied to elasto-plastic material models with an isotropic Voce-type hardening law. ## 4 A case study on damage model for interface cracking ### Model derivation The gap value \(g\) is utilized to formulate the interface energy and mechanics. The gap variable is related to the displacement jump \(\left\langle\mathbf{u}\right\rangle\), i.e., the distance between two opposite sides of the interface. Under mode I opening, we write the normal gap variable as \[g=\left\langle\mathbf{u}\right\rangle\cdot\mathbf{n}, \tag{28}\] where \(\mathbf{n}\) is the normal vector to the interface. Damage at the interface is represented via a scalar parameter \(d\), which is an internal variable and describes the loss of bonding forces. The (normal) traction parameter \(T\) is related to the gap value \(g\) and other internal variables such as the damage \(d\). Similar to the stress-strain behavior for the bulk, we have the traction-separation relation for the mechanics of a sharp interface. The Helmholtz free energy \(\psi\) at the interface is a contribution of the elastic part (\(\psi_{e}\)) and the damage hardening part (\(\psi_{d}\)), which leads in total to the following expressions: \[\psi(g,d,\xi_{d})=\psi_{e}(g,d)+\psi_{d}(\xi_{d}), \tag{29}\] \[\psi_{e}\left(g,d\right)=\frac{1}{2}f_{\mathrm{d}}(d)\ K\ g^{2}, \tag{30}\] \[\psi_{d}\left(\xi_{d}\right)=h_{1}\left(\xi_{d}+\frac{e^{-h_{2}\xi_{d}}-1}{h_{2}}\right). \tag{31}\] Here, \(\xi_{d}\) is the damage hardening variable. Moreover, \(h_{1}\) and \(h_{2}\) are interface damage hardening parameters and control the softening behavior [51]. The parameter \(K\) is the initial stiffness of the interface. Furthermore, \(f_{\mathrm{d}}(d)\) is a positive scalar value that varies between 1 for an undamaged interface and a critical value close to 0 for a completely damaged interface. A common choice for this function is \[f_{\mathrm{d}}(d)=(1-d)^{2}. \tag{32}\] By applying the second law of thermodynamics in terms of the Clausius-Duhem inequality, the consistent state relations of the model are derived. The conjugate forces are as follows \[T=\partial_{g}\psi=\partial_{g}\psi_{e}=f_{\mathrm{d}}(d)K\ g, \tag{33}\] \[Y=-\partial_{d}\psi=-\partial_{d}\psi_{e}=-\frac{\mathrm{d}f_{d}}{\mathrm{d}d}\frac{1}{2}Kg^{2}, \tag{34}\] \[q_{d}=\partial_{\xi_{d}}\psi=\partial_{\xi_{d}}\psi_{d}=h_{1}(1-e^{-h_{2}\xi_{d}}). \tag{35}\] Similar to plasticity, the damage yield criterion is defined as \[\phi_{d}(Y,q_{d})=Y-\left(Y_{0}+q_{d}\right). \tag{36}\] In this model, the damage initiation point is controlled by the interface parameter \(Y_{0}\), whereas \(q_{d}\) accounts for nonlinear damage hardening. The evolution laws for the damage internal variables read: \[\dot{d}=\dot{\lambda}_{d}\ \frac{\partial\phi_{d}}{\partial Y}=\dot{\lambda}_{d}, \tag{37}\] \[\dot{\xi}_{d}=-\dot{\lambda}_{d}\ \frac{\partial\phi_{d}}{\partial q_{d}}=\dot{\lambda}_{d}. \tag{38}\] Finally, the loading/unloading conditions of the damage model are taken into account: \[\dot{\lambda}_{d}\geq 0,\quad\phi_{d}\leq 0,\quad\dot{\lambda}_{d}\ \phi_{d}=0. \tag{39}\] ### Implementation and algorithmic aspects for the damage model Similar to the case of plasticity, the superindex \(k\) in Algorithm 2 indicates the \(k\)-th iteration in the internal loop of the damage solver. In this algorithm, \(\Delta\lambda_{d}=\Delta t\ \dot{\lambda}_{d}\), where \(\Delta t\) is the pseudo time step.
When the damage is active (\(\phi_{d}>0\)), the damage residuals (\(r_{d}^{(1)}\) and \(r_{d}^{(2)}\)) are linearized using the Newton-Raphson method to solve for the unknowns (\(d^{i+1,k+1}\) and \(\xi_{d}^{i+1,k+1}\)). The matrix \(\mathbf{K}_{d}=\partial\mathbf{r}_{d}/\partial\mathbf{U}_{d}\) includes the derivatives of the damage residual vector \(\mathbf{r}_{d}\) with respect to the unknowns (\(\mathbf{U}_{d}=\{d^{i+1},\xi_{d}^{i+1}\}\)). If the changes in the solution of the unknown internal variables are smaller than a certain tolerance (\(\mathrm{tol}=10^{-10}\)), the results are considered converged and are used to compute the traction value at the interface and the tangent operator. These quantities are used to set up the global residual vector at the finite element level [51]. ``` Require: interface parameters, current gap: \(g^{i+1}\), history variables: \(d^{i}\), \(\xi_{d}^{i}\) Ensure: current traction: \(T^{i+1}\), current history variables: \(d^{i+1}\), \(\xi_{d}^{i+1}\), tangent: \(C_{d}=\mathrm{d}T^{i+1}/\mathrm{d}g^{i+1}\) 1: \(k=1\) 2: Set trial values: \(d^{i+1,k}=d^{i}\), \(\xi_{d}^{i+1,k}=\xi_{d}^{i}\) 3: loop damage solver do 4: if \(\phi_{d}^{tr}=(1-d^{i})K(g^{i+1})^{2}-(Y_{0}+q_{d}(\xi_{d}^{i}))\leq 0\) 5: \(d^{i+1}=d^{i}\), \(\xi_{d}^{i+1}=\xi_{d}^{i}\) and \(C_{d}=(1-d^{i+1})^{2}K\) 6: else solve for \(d^{i+1,k+1}\) and \(\xi_{d}^{i+1,k+1}=\xi_{d}^{i}+\Delta\lambda_{d}^{k+1}\) (see Eqs. 36, 37) 7: \(r_{d}^{(1)}=d^{i+1,k+1}-d^{i}-(\xi_{d}^{i+1,k+1}-\xi_{d}^{i})\frac{\partial\phi_{d}}{\partial Y}\stackrel{{!}}{{=}}0\) 8: \(r_{d}^{(2)}=\phi_{d}^{i+1}=Y-\left(Y_{0}+q_{d}\left(\xi_{d}^{i+1}\right)\right)\stackrel{{!}}{{=}}0\) 9: \([d^{i+1,k+1}\quad\xi_{d}^{i+1,k+1}]^{T}=[d^{i+1,k}\quad\xi_{d}^{i+1,k}]^{T}-\mathbf{K}_{d}^{-1}\mathbf{r}_{d}\) 10: end if 11: if \(\left|d^{i+1,k+1}-d^{i+1,k}\right|>\mathrm{tol}\) OR \(\left|\xi_{d}^{i+1,k+1}-\xi_{d}^{i+1,k}\right|>\mathrm{tol}\) 12: \(k++\) 13: CYCLE loop damage solver 14: else 15: \(d^{i+1}=d^{i+1,k+1}\), \(\xi_{d}^{i+1}=\xi_{d}^{i+1,k+1}\) 16: EXIT loop damage solver 17: end if 18: end loop 19: Compute traction \(T^{i+1}=(1-d^{i+1})^{2}Kg^{i+1}\) (see Eq. 33) 20: Compute tangent \(C_{d}\) ``` **Algorithm 2** Solving the damage governing equations at pseudo time \(t^{i+1}\) ### PINN to solve for the local damage model with nonlinear softening behavior at the interface For the case of local damage evolution (interface fracture), the input layer includes the gap value \(g^{i+1}\), the previous damage value \(d^{i}\), and the damage hardening variable \(\xi_{d}^{i}\). We denote the input layer via the vector \(\mathbf{X}=\{g^{i+1},d^{i},\xi_{d}^{i}\}\). The output layer includes \(\mathbf{Y}=\{d^{i+1},\xi_{d}^{i+1}\}\). Similar to the previous case, we use separate fully connected feed-forward neural networks for each output variable (see also Fig. 4). The outcome of the constitutive material modeling via the neural network reads \[d^{i+1}=\mathcal{N}_{d}(\mathbf{X};\mathbf{\theta}),\ \ \ \ \ \xi_{d}^{i+1}=\mathcal{N}_{\xi_{d}}(\mathbf{X};\mathbf{\theta}). \tag{40}\] Next, we build the residuals for the introduced damage model. Here, one needs to build the damage yield function as well as the evolution equations for the internal variables. These expressions will be used to construct the loss functions for the neural networks. Denoting the summation of the total loss terms for damage by \(\mathcal{L}_{dt}\), it is defined based on Alg.
2, as \[\mathcal{L}_{dt}=\underbrace{w_{ued}\mathcal{L}_{ued}+w_{uxd}\mathcal{L}_{uxd}}_{\text{elastic response}}+\underbrace{w_{evd}\mathcal{L}_{evd}+w_{yld}\mathcal{L}_{yld}}_{\text{damage evolution}}+\underbrace{w_{ked}\mathcal{L}_{ked}+w_{kyd}\mathcal{L}_{kyd}}_{\text{KKT conditions}} \tag{41}\] In Eq. (41), the different loss terms cover all the possible loading, unloading, and reloading scenarios. The first two terms (\(\mathcal{L}_{ued}\) and \(\mathcal{L}_{uxd}\)) guarantee that there is no evolution of damage when the trial yield function (\(\phi_{d}^{tr}\)) is negative. Once \(\phi_{d}^{tr}>0\), the main damage residuals for the evolution law as well as the current yield function become active, which are denoted by \(\mathcal{L}_{evd}\) and \(\mathcal{L}_{yld}\), respectively. Finally, to make sure that the KKT conditions are always satisfied, we have the last two loss terms (\(\mathcal{L}_{ked}\) and \(\mathcal{L}_{kyd}\)). All these loss terms are summarized in what follows \[\mathcal{L}_{ued}=\text{MSE}\left((d^{i+1}-d^{i})\text{Relu}(-\phi_{d}^{tr})\right), \tag{42}\] \[\mathcal{L}_{uxd}=\text{MSE}\left((\xi_{d}^{i+1}-\xi_{d}^{i})\text{Relu}(-\phi_{d}^{tr})\right), \tag{43}\] \[\mathcal{L}_{evd}=\text{MSE}\left((d^{i+1}-d^{i}-(\xi_{d}^{i+1}-\xi_{d}^{i}))\text{Relu}(\phi_{d}^{tr})\right), \tag{44}\] \[\mathcal{L}_{yld}=\text{MSE}\left(\left((1-d^{i+1})K(g^{i+1})^{2}-(Y_{0}+h_{1}(1-\exp(-h_{2}\xi_{d}^{i+1})))\right)\text{Relu}(\phi_{d}^{tr})\right), \tag{45}\] \[\mathcal{L}_{ked}=\text{MSE}\left(\text{Relu}(\phi_{d}^{i+1})\right), \tag{46}\] \[\mathcal{L}_{kyd}=\text{MSE}\left(\text{Relu}(-\xi_{d}^{i+1}+\xi_{d}^{i})\right). \tag{47}\] The so-called trial yield function \(\phi_{d}^{tr}\) is evaluated by means of the quantities at the input layer (see also Alg. 2): \[\phi_{d}^{tr}=(1-d^{i})K(g^{i+1})^{2}-\left(Y_{0}+h_{1}(1-\exp(-h_{2}\xi_{d}^{i}))\right). \tag{48}\] In Fig. 4, we summarize the main loss terms for the interface fracture model with nonlinear softening behavior. For the sake of clarity, all the input variables are denoted in blue color while the output (unknown) variables are presented in red color. Note that the material (interface) properties are kept constant and represented in green color. After the training is completed, the network predicts the unknown internal variables (i.e. \(d^{i+1}\) and \(\xi_{d}^{i+1}\)). Having the updated internal variables as well as the current gap value (\(g^{i+1}\)), one can construct the free energy function as well as the traction value (\(T^{i+1}\)). Note that the functional form of the traction is obtained by analytically differentiating the energy function with respect to the gap. Finally, the tangent operator, defined as the full derivative of the current traction value with respect to the current gap value, is obtained via automatic differentiation. The final loss term is minimized at every single collocation point. The mathematical optimization problem is written as \[\mathbf{\theta}^{*}=\arg\min_{\mathbf{\theta}}\mathcal{L}_{dt}(\mathbf{X};\mathbf{\theta}), \tag{49}\] where \(\mathbf{\theta}^{*}\) are the optimal trainable parameters (weights and biases) of the network. ## 5 Generation of collocation points The location of the collocation points plays a crucial role in training [17; 42]. _Collocation points_ are initial inputs for which the governing equations are going to be satisfied (collocated). The location of collocation points can be different from that of _data points_, for which we know the solution in advance.
The inputs of the NN are the given strain or gap at step \(i+1\) (i.e. \(\varepsilon^{i+1}\) or \(g^{i+1}\)) as well as the state variables from the previous time step \(i\) (i.e. \(\varepsilon^{i}_{p}\) and \(\xi^{i}_{p}\), or \(d^{i}\) and \(\xi^{i}_{d}\)). In this approach, we propose to assign admissible values to the set of inputs. However, two aspects should be considered when generating these points: 1) the desired range of the inputs, and 2) avoiding any violation of important physical aspects when assigning random values. For instance, in the case of elastoplasticity, the accumulative plastic strain is always positive and greater than the plastic strain. Similarly, in the case of interface damage and the local damage model, the damage variable and the damage hardening variable are always positive, below one, and identical. These restrictions are typically well known and straightforward to account for. However, for more complex material models, simpler strategies will be required in future developments. It is essential to ensure that the generated values are physically consistent and within the desired range to prevent issues during training and to improve the accuracy of the model. Another idea is to generate random loading paths, gather the data from the material models as initial values, and substitute these models with the trained NN. For the current work, we tried both of these options and concluded that with either method one is able to solve the constitutive material equations. Figure 4: Network architecture and loss functions for the COMM-PINN applied to interface failure with nonlinear softening law. For the sake of simplicity, we report the more straightforward approach for the generation of collocation points, as explained in the following algorithms. For the case of elastoplastic behavior, readers are referred to Alg. 3. Here, we are using the following intervals for collocation point generation: \(bg_{\varepsilon}=0\), \(st_{\varepsilon}=0.01\), \(en_{\varepsilon}=+1\), \(bg_{\varepsilon_{p}}=0\), \(st_{\varepsilon_{p}}=0.01\), \(en_{\varepsilon_{p}}=+1\), \(bg_{\xi_{p}}=0\), \(st_{\xi_{p}}=0.01\), and \(en_{\xi_{p}}=+1\). The main outcome of Alg. 3 is the location of the collocation points, stored in the matrix \(\mathbf{IN}_{p}\), which has three columns for the three main inputs of the NN. The number of rows depends on the user's choice of intervals and steps. ``` Require: step size \(st\), begin \(bg\) and end \(en\) of interval Ensure: admissible set of inputs for training \(\mathbf{IN}_{p}\) 1: \(n=1\) 2: Initializing: \(\boldsymbol{\varepsilon}=[bg_{\varepsilon}:st_{\varepsilon}:en_{\varepsilon}]\), \(\boldsymbol{\varepsilon}_{p}=[bg_{\varepsilon_{p}}:st_{\varepsilon_{p}}:en_{\varepsilon_{p}}]\), \(\boldsymbol{\xi}_{p}=[bg_{\xi_{p}}:st_{\xi_{p}}:en_{\xi_{p}}]\) 3: for l in \(\boldsymbol{\varepsilon}\) do 4: for j in \(\boldsymbol{\varepsilon}_{p}\) do 5: for k in \(\boldsymbol{\xi}_{p}\) do 6: \(\mathbf{IN}_{p}[n,1]=\boldsymbol{\varepsilon}[l]\) 7: \(\mathbf{IN}_{p}[n,2]=\boldsymbol{\varepsilon}_{p}[j]\) 8: if \(\boldsymbol{\xi}_{p}[k]\geq\boldsymbol{\varepsilon}_{p}[j]\) 9: \(\mathbf{IN}_{p}[n,3]=\boldsymbol{\xi}_{p}[k]\) 10: \(n++\) 11: end if 12: end for 13: end for 14: end for ``` **Algorithm 3** Collocation point generation for solving elasto-plastic constitutive relations For the case of the interface damage model, readers are referred to Alg. 4.
We are using the following intervals for collocation point generation: \(bg_{g}=0\), \(st_{g}=0.02\), \(en_{g}=+1\), \(bg_{d}=0\), \(st_{d}=0.02\), \(en_{d}=+1\), \(bg_{\xi_{d}}=0\), \(st_{\xi_{d}}=0.02\), and \(en_{\xi_{d}}=+1\). The main outcome of Alg. 4 is the location of the collocation points for training the damage model, stored in the matrix \(\mathbf{IN}_{d}\). ``` Require: step size \(st\), begin \(bg\) and end \(en\) of interval Ensure: admissible set of inputs for training \(\mathbf{IN}_{d}\) 1: \(n=1\) 2: Initializing: \(\boldsymbol{g}=[bg_{g}:st_{g}:en_{g}]\), \(\boldsymbol{d}=[bg_{d}:st_{d}:en_{d}]\), \(\boldsymbol{\xi}_{d}=[bg_{\xi_{d}}:st_{\xi_{d}}:en_{\xi_{d}}]\) 3: for l in \(\boldsymbol{g}\) do 4: for j in \(\boldsymbol{d}\) do 5: for k in \(\boldsymbol{\xi}_{d}\) do 6: \(\mathbf{IN}_{d}[n,1]=\boldsymbol{g}[l]\) 7: \(\mathbf{IN}_{d}[n,2]=\boldsymbol{d}[j]\) 8: if \(\boldsymbol{\xi}_{d}[k]=\boldsymbol{d}[j]\) 9: \(\mathbf{IN}_{d}[n,3]=\boldsymbol{\xi}_{d}[k]\) 10: \(n++\) 11: end if 12: end for 13: end for 14: end for ``` **Algorithm 4** Collocation point generation for solving the interface damage model
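A compact, vectorized Python sketch of Alg. 3 is given below (the damage variant, Alg. 4, only changes the step size and replaces the inequality by the equality \(\boldsymbol{\xi}_{d}=\boldsymbol{d}\)); the function name is illustrative.

```
import numpy as np

def collocation_points_plasticity(step=0.01, end=1.0):
    """Alg. 3: admissible (eps, eps_p, xi_p) triples with xi_p >= eps_p."""
    grid = np.arange(0.0, end + step, step)
    eps, eps_p, xi_p = np.meshgrid(grid, grid, grid, indexing="ij")
    mask = xi_p >= eps_p  # physical admissibility of the history variables
    # Each row is one collocation point; this is the matrix IN_p
    return np.stack([eps[mask], eps_p[mask], xi_p[mask]], axis=1)

IN_p = collocation_points_plasticity()
```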
## 6 Results The algorithms developed in the current work are implemented in the SciANN package [55], and the methodology can be easily transferred to other programming platforms. For all the reported results, the Adam optimizer is employed. We start with the results of the damage model due to its simplicity compared to the case of plasticity. Since the two models share some similarities, the majority of the hyperparameter studies are only reported for one of these models to avoid unnecessary repetition. ### Case studies on the interface damage model The relevant network parameters for learning the damage model at the interface are summarized in Table 1. Some of these parameters are obtained after extensive hyperparameter studies. To make their influence clear to the readers, we will change these values and functions to investigate their impact on the obtained results. The material parameters reported in Table 2 are chosen in a way to keep the stress values below one, and they do not necessarily represent any realistic material at this point. Nevertheless, one can calibrate the model based on other measurements (see [51]) and normalize the model variables to keep them in the range between \(-1\) and \(1\). As we checked, the performance of the proposed methodology did not change for other chosen material properties. The evolution of each loss term is shown in Fig. 5 as a function of epochs. The collocation points for the training are based on the last section. We used equal weightings for all the loss terms at the beginning, and it turned out that the prediction of the NN model is acceptable by doing so (i.e. we have \(w_{ued}=w_{uxd}=w_{evd}=w_{yld}=w_{ked}=w_{kyd}=1.0\)). Furthermore, for the results provided in Fig. 5, we utilize Relu as the activation function and 5 hidden layers with 100 neurons in each. The rest of the parameters are according to Table 1. All the loss functions decay simultaneously, and we did not notice any significant improvement after 1000 epochs. We also observed the same response for all the loss terms across different hyperparameter studies. \begin{table} \begin{tabular}{l l} \hline Parameter & Value \\ \hline Input, Output & \(\{g^{i+1},d^{i},\xi_{d}^{i}\}\), \(\{d^{i+1},\xi_{d}^{i+1}\}\) \\ Activation function & Relu, Tanh, Sigmoid \\ Number of layers and neurons per layer (\(L\), \(N_{l}\)) & \((5,\,10),\quad(5,\,50),\quad(5,\,100)\) \\ Batch size & \(500\) \\ Learning rate \(\alpha\), number of epochs & \((10^{-4},10^{3})\) \\ \hline \end{tabular} \end{table} Table 1: Summary of the COMM-PINN network parameters for the damage model.
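Before the comparisons reported below, the following is a minimal sketch of how the trained one-step network is meant to be exercised at test time: it is called repeatedly along a prescribed gap history, with the predicted internal variables fed back as the next step's inputs, and the traction recovered from Eq. (33). Here `predict_step` is an assumed placeholder for the trained model, and the loading path anticipates the one used in the studies below.

```
import numpy as np

def rollout(predict_step, n_steps=50, K=5.0):
    """Drive the trained one-step model along g(t) = 2|t sin(3 pi t)|."""
    d, xi_d, history = 0.0, 0.0, []
    for t in np.linspace(0.0, 1.0, n_steps):
        g = 2.0 * abs(t * np.sin(3.0 * np.pi * t))
        # NN mapping: (g^{i+1}, d^i, xi_d^i) -> (d^{i+1}, xi_d^{i+1})
        d, xi_d = predict_step(g, d, xi_d)
        T = (1.0 - d) ** 2 * K * g  # traction, Eq. (33), with K from Table 2
        history.append((t, g, T, d, xi_d))
    return history
```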
\begin{table} \begin{tabular}{l l l} \hline & Unit & Value \\ \hline \hline Normal interface stiffness \(K\) & [MPa/mm] & \(5.0\) \\ Damage initiation criterion \(Y_{0}\) & [MPa mm] & \(0.1\) \\ Damage hardening parameter \(h_{1}\) & [MPa mm] & \(2.0\) \\ Damage hardening parameter \(h_{2}\) & [1/mm] & \(1.0\times 10^{2}\) \\ \hline \end{tabular} \end{table} Table 2: Material parameters for the interface damage model described in section 4.3. In what follows, we report the performance of the trained NN versus the response obtained from the classical material model routine (i.e. the return mapping algorithm), which is our reference solution. Since the NN has not been trained on any given data (i.e. a known and available solution), any arbitrary loading path we define for the gap value can be seen as a test case to evaluate its performance. Nevertheless, as we go through the results section, we examine the performance of the NN beyond the range of the collocation points. Some early predictions for various loading paths are shown in Fig. 6. Here, the first column represents the input gap value in time (i.e. \(g(t)\)), and the second column represents the obtained traction in time (i.e. \(T(t)\)). For the first and second rows, we applied \(g(t)=t\) and \(g(t)=t^{3}\), respectively. For the third and fourth rows, we have \(g(t)=0.5|t\,\sin(5\pi t)|+0.5|\sin(2\pi t)|\) and \(g(t)=1.0|t\,\sin(3\pi t)|\), respectively. To evaluate the NN performance, we also over-plot the relative error for the predicted traction values. The error \(err\) is calculated via a point-wise comparison of the traction from the NN, \(T_{N}\), and the calculated traction from the standard return mapping algorithm, \(T_{M}\). Therefore, we have \(err=\frac{|T_{N}-T_{M}|}{T_{M}}\times 100\). The average error for the different loading scenarios is about 1%, and in some extreme cases the maximum error goes up to 2.5%. We discuss possible suggestions to improve the NN predictions throughout this section. To avoid repetition, in the rest of the paper we chose a rather complicated loading path for all the studies, which includes several loading, unloading, and reloading scenarios. Consequently, we can make sure that all the possible scenarios become active. For the following studies, we force the gap to change through time via the equation \(g(t)=2.0|t\,\sin(3\pi t)|\). In this loading case, the gap goes beyond the value of 1.5 mm, while we only train the NN with collocation points up to a gap value of 1.0 mm. Therefore, toward the end of the loading path, we are checking the extrapolation capabilities of the trained NN. Note that even in the case of PINN without any initial data, the location of the collocation points has a big impact on the final performance. In other words, by going beyond the range of the collocation points, there is no guarantee about the acceptable predictions of the NN. Figure 5: Prominent loss terms for the interface damage model with nonlinear softening law utilizing the proposed COMM-PINN algorithm. #### 6.1.1 Influence of the number of neurons in the NN architecture In Fig. 7, we study the influence of various NN architectures. Here, the collocation points are generated with step size \(0.02\), and the Relu activation function is utilized. By comparing different NNs, we conclude that one requires enough neurons to capture all the complicated nonlinear behavior. At some point, adding extra neurons or layers is not beneficial anymore and might even result in the overfitting of the model.
Figure 6: Predictions of the trained COMM-PINN for the interface damage model with nonlinear softening law.

#### 6.1.1 Influence of the number of neurons in the NN architecture

In Fig. 7, we study the influence of various NN architectures. Here, the collocation points are generated with a step size of \(0.02\), and the Relu activation function is utilized. By comparing the different NNs, we conclude that enough neurons are required to capture all the complicated nonlinear behavior. At some point, adding extra neurons or layers is no longer beneficial and might even result in overfitting of the model. Similar behavior was observed for the number of layers: upon choosing a very small number of layers, the performance is poor. Interestingly, for the red curve, where a proper architecture is used, the NN can accurately capture the nonlinear response for all the loading, unloading, and reloading test cases. Toward the end of the loading, where we go beyond the range of the collocation points provided for training, we observe some deviations in the predicted response. In other words, even thermodynamical constraints can be violated when evaluating the NN response beyond the range of the collocation points. On the positive side, one can always easily increase the range and density of the collocation points, unlike purely data-driven methods, for which new data must be obtained for the new range. It should be noted that using more collocation points will increase the training time. A fair comparison between purely physics-driven and purely data-driven NN training is not trivial, as they differ in the nature of their loss functions, initial data sets, and computational costs. Nevertheless, we provide some initial comparisons later on to observe the deviation of a purely data-driven approach when it comes to extrapolation.

Figure 7: Influence of the NN layers on the prediction of the network. (a) The loading (gap over time) is based on \(g(t)=2.0|t\,\sin(3\pi t)|\). (b) Predicted traction \(T(t)\) via the trained NN over time. (c) Predicted damage variable \(d(t)\) via the trained NN over time. (d) Predicted damage hardening variable \(\xi_{d}(t)\) via the trained NN over time. (e) Predicted free energy function \(\psi_{d}(t)\) over time. (f) Predicted traction versus the gap value.

#### 6.1.2 Influence of the activation function

In Fig. 8, we look into the influence of various choices of activation function on the predictions. The training setup is the same as before. Here, 5 layers with 100 neurons each are utilized for all the case studies. We conclude that the Relu function shows the best performance compared to the others. Although not reported, our conclusion remains the same when using other activation functions such as Swish and Softplus. The better performance of the Relu function is perhaps due to its sharp transition, which can capture the sudden change of behavior at the unloading, reloading, and yield conditions.

Figure 8: Influence of the choice of activation function on the prediction of the network. (a) The loading (gap over time) is based on \(g(t)=2.0|t\,\sin(3\pi t)|\). (b) Predicted traction \(T(t)\) via the trained NN over time. (c) Predicted damage variable \(d(t)\) via the trained NN over time. (d) Predicted damage hardening variable \(\xi_{d}(t)\) via the trained NN over time. (e) Predicted free energy function \(\psi_{d}(t)\) over time. (f) Predicted traction-separation relation.

#### 6.1.3 Influence of collocation point density

In Fig. 9, we look into the influence of the density of the collocation points on the predictions. The chosen activation function for this study is the Relu function, and the network has 5 layers with 100 neurons. The step size for the collocation points is varied from 0.1 to 0.02. We conclude that even with a very low number of points, the NN can capture the overall behavior of such a nonlinear loading path.
We also observe that the performance of the NN model improves as more collocation points are fed to the NN during the training process. Furthermore, having more points improves the extrapolation capabilities of the NN (see the obtained response after time 0.8). On the other hand, more collocation points significantly increase the computational cost of the training: training with step size 0.02 takes twice the time compared to step size 0.1. One may have to adapt the architecture of the NN based on the number of collocation points. It should be noted that, in all the cases, the test loading path is generated with 50 points in time (i.e., the time step size is 0.02). As we will discuss further in the next section, one can choose different step sizes and call the same NN and still obtain acceptable performance.

Figure 9: Influence of the step size for the generation of collocation points on the predictions. (a) The loading (gap over time) is based on \(g(t)=2.0|t\,\sin(3\pi t)|\). (b) Predicted traction \(T(t)\) via the trained NN over time. (c) Predicted damage variable \(d(t)\) via the trained NN over time. (d) Predicted damage hardening variable \(\xi_{d}(t)\) via the trained NN over time. (e) Predicted free energy function \(\psi_{d}(t)\) over time. (f) Predicted traction versus the gap value.

### Case studies on the plasticity model

The relevant network parameters for the case of elastoplasticity are summarized in Table 3. There are four material parameters for the plasticity model: \(E\), \(\sigma_{y0}\), \(h_{1}\), and \(h_{2}\). The chosen values for the elastoplastic model are summarized in Table 4. The current approach may also be used for reverse analysis, i.e., calibrating these material parameters based on given data [46]. The location of the collocation points is according to the explanations in the previous section.

Before we report the results, we introduce two modifications to enhance the performance of the NN model. The first modification concerns balancing the values of the different loss terms. With the second modification, we smoothen the hard switching conditions in the plasticity model.

Initially, we used equal weightings for all the loss terms, which resulted in acceptable performance within the training range but did not perform well when testing the NN beyond the range of the collocation points. After some numerical studies, we concluded that this issue might be due to the unbalanced nature of the different loss terms in the training process. We therefore performed a systematic parameter study and changed the weighting of the different loss terms to bring them closer to each other. The process is as follows: we start with equal weightings and then raise the weighting of those loss terms that are very low in magnitude. By doing so, we observed a huge improvement in the obtained results. This idea is in line with the adaptive weighting strategies available in some programming packages. Other adaptive weighting techniques should be investigated in future developments to find more easy-to-use strategies. One of the final optimal results is achieved by selecting \(w_{uep}=100\), \(w_{uxp}=100\), \(w_{evp}=1\), \(w_{ylp}=10\), \(w_{kep}=100\), \(w_{kyp}=10\).

With the second modification, we relax the hard Relu and sgn switches in Eqs. 19 to 24. Therefore, we replace \(\text{Relu}(x)\) with \(\text{Swish}(x)=x\ \text{Sigmoid}(Rx)\), where \(R\) determines how sharp the transition zone is. Finally, we replace \(\text{sgn}(x)\) with \(\text{Sigmoid}(Rx)\).
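A minimal sketch of these two smoothed switch functions, as they would enter the loss terms of Eqs. 19 to 24:

```python
from scipy.special import expit  # numerically stable sigmoid

def smooth_relu(x, R=300.0):
    """Swish replacement for Relu(x): x * Sigmoid(R x)."""
    return x * expit(R * x)

def smooth_sign(x, R=300.0):
    """Smooth replacement for sgn(x): Sigmoid(R x)."""
    return expit(R * x)
```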
In the current study, we use \(R=300\). To make the influence of the introduced modifications clear to the reader, we also perform studies without these modifications. The evolution of each loss term is shown in Fig. 10, where we use 5 hidden layers with 80 neurons each. For the results provided in Fig. 10, we considered the above modifications; the rest of the parameters are according to Table 3. Most of the loss terms decay simultaneously, and we did not notice any significant improvement after 1000 epochs.

\begin{table} \begin{tabular}{l l l} \hline & Unit & Value \\ \hline \hline Elastic stiffness \(E\) & [MPa] & 3.0 \\ Yield stress \(\sigma_{y0}\) & [MPa] & 0.6 \\ Plasticity hardening parameter \(h_{1}\) & [MPa] & 0.4 \\ Plasticity hardening parameter \(h_{2}\) & [\(-\)] & 10.0 \\ \hline \end{tabular} \end{table} Table 4: Material parameters for the plasticity model described in section 3.3.

\begin{table} \begin{tabular}{l l} \hline Parameter & Value \\ \hline Input, Output & \(\{\varepsilon^{i+1},\varepsilon_{p}^{i},\xi_{p}^{i}\}\), \(\{\varepsilon_{p}^{i+1},\xi_{p}^{i+1}\}\) \\ Activation function & Relu \\ Number of layers and neurons per layer (\(L\), \(N_{l}\)) & (5, 80) \\ Batch size & 100 \\ Learning rate \(\alpha\), number of epochs & \((10^{-4},10^{3})\) \\ \hline \end{tabular} \end{table} Table 3: Summary of the COMM-PINN network parameters for the plasticity model.

#### 6.2.1 Influence of weightings in the loss function

The studies presented in Fig. 11 were conducted using the loading path \(\varepsilon(t)=2.0|t\,\sin(3\pi t)|\). The time step size used to generate this loading path is 0.01. The material is initially at rest, without any prior history. The results show that the yield point, the nonlinear hardening path, and the stress saturation for large plastic strains are accurately captured by the NN model. One should note that the NN's response for an accumulative plastic strain larger than one can be considered as extrapolation. For the case with equal weights, the performance of the NN is not reliable beyond the range of the collocation points. However, we observed that by tuning and balancing the loss terms, the NN is able to accurately capture the material behavior even beyond the range of the collocation points.

#### 6.2.2 Modifying the switch option in the loss functions

In Fig. 12, we study the case where the Relu function is used for switching between the elastic and plastic conditions (see also Eqs. 19 to 24). We find that using a smoother version of this function (i.e., replacing \(\text{Relu}(x)\) with \(x\ \text{Sigmoid}(Rx)\)) is more beneficial. Our findings indicate that this is particularly important for the condition involving \(\text{sgn}(x)\), which is replaced by \(\text{Sigmoid}(Rx)\), where we use \(R=300\). See also the investigations by Haghighat et al. [46].

#### 6.2.3 Influence of the time step size

Here, we examine the output by resolving the loading path with different time step sizes. These time steps are chosen such that they do not overlap with the collocation points. In the first row of Fig. 13, the NN is evaluated using a time step size of 0.1. In addition to the stress-strain curve, we also plot the tangent operator, which is computed solely via automatic differentiation. For the second row, we evaluate the model using a step size of 1/80 to ensure that there is no overlap with the collocation points (which are generated using a step size of 1/100).
Again, the predictions are acceptable. However, as we use finer time steps (see the third row, with a step size of 1/200), we observe some deviation from the correct solution, and the tangent operator shows slight oscillations. As a remedy, one can always utilize finer steps in the generation of the collocation points to improve the accuracy of the NN predictions.

Figure 10: Prominent loss terms for the plasticity model utilizing the proposed COMM-PINN algorithm.

Figure 11: Influence of the adapted weightings for the different loss functions. (a) Stress versus strain curve. (b) Predicted stress \(\sigma(t)\). (c) Predicted plastic strain \(\epsilon_{p}(t)\). (d) Predicted accumulative plastic strain \(\xi_{p}(t)\).

Figure 12: Effect of modifying the switch option in the loss functions. (a) Stress versus strain curve. (b) Predicted stress \(\sigma(t)\). (c) Predicted plastic strain \(\epsilon_{p}(t)\). (d) Predicted accumulative plastic strain \(\xi_{p}(t)\).

Figure 13: Obtained stress-strain curves and the tangent operator via the trained NN and the classical material model routine. The time step size for creating the loading path is refined to evaluate the performance of the NN for a strain path that does not fully overlap with the collocation points.

#### 6.2.4 Comparison with the pure data-driven approach

Remark 3 suggests that a purely data-driven approach can be used for the same analysis. In this approach, the data on the incremental evolution of the state variables is obtained from a solution of the governing equations on some random loading paths using the return mapping algorithm (see Alg. 1). Once the data is obtained, we use the same neural network architecture described in Table 3. In this case, the loss function is based on the difference between the predicted and actual values of the plastic and accumulative plastic strain, i.e., \(\mathcal{L}_{data}=\text{MSE}\left(\varepsilon_{p}^{i+1}-\varepsilon_{alg}^{i+1}\right)+\text{MSE}\left(\xi_{p}^{i+1}-\xi_{alg}^{i+1}\right)\). We generated 625 loading paths for data generation, which are shown in gray in part (a) of Fig. 14. All loading paths are limited to a strain of 1, which is similar to the range of the collocation points. Furthermore, each loading path is discretized with a time step size of 0.01, which gives a total of 62500 data points. Although the initial data points and collocation points are similar, it is not straightforward to set up fully identical initial conditions for these two neural networks.

In Fig. 14, we compare the two methodologies and their ability to predict the material behavior for complex unseen loading paths. In parts (e) and (f) of Fig. 14, we examine the response of these methods in more detail during the first unloading cycle. While the data-driven method shows acceptable predictions, it violates some fundamental physical aspects, such as showing a decay of the plastic strain during elastic unloading, which is unacceptable. Conversely, the results of the COMM-PINN method follow the expected straight line. For the final loading cycle, we evaluate the extrapolation capabilities of the NN models from both the physics-based and the purely data-driven methods. Both methods may fail when extrapolating to extreme conditions; however, the physics-based method with physical constraints performs slightly better. In future developments, it would be interesting to explore the combination of these methods. By adding more collocation points outside the training region, one could enhance the prediction of such a hybrid NN model, particularly in cases where there are limited data points for a limited strain interval.
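For illustration, the data-driven objective \(\mathcal{L}_{data}\) of Section 6.2.4 can be written as the following minimal PyTorch-style sketch; the `model` callable and the tensor layout are assumptions for this sketch only.

```python
import torch
import torch.nn.functional as F

def data_driven_loss(model, inputs, targets):
    """L_data = MSE(eps_p - eps_p_alg) + MSE(xi_p - xi_p_alg).
    `inputs` stacks (eps^{i+1}, eps_p^i, xi_p^i) per sample; `targets`
    holds the return-mapping results (eps_p^{i+1}, xi_p^{i+1})."""
    pred = model(inputs)  # assumed output shape: (batch, 2)
    return F.mse_loss(pred[:, 0], targets[:, 0]) + F.mse_loss(pred[:, 1], targets[:, 1])
```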
Figure 14: Comparison of the proposed approach against pure data-driven methods. (a) The applied strain path is shown in blue, and all the loading paths used for training the data-driven method are shown in gray. (b) Stress versus strain curve. (c) Predicted plastic strain \(\epsilon_{p}(t)\). (d) Predicted accumulative plastic strain \(\xi_{p}(t)\).

#### 6.2.5 Integration of the COMM-PINN model with the finite element method

We tested the performance of the COMM-PINN model by integrating it into a finite element package. Since the model is trained based on a 1D analysis, we examined a truss-based structure (or a simplified meta-material), where each rod is represented by a single 1D finite element with standard linear shape functions. For this purpose, we utilized the package _trusspy_ [56]. The geometry and applied boundary conditions are shown in the upper part of Fig. 15: the whole structure is fixed on the left-hand side and loaded from the right vertical edge. After the loading phase, the whole system is unloaded. The obtained reaction force from this process is shown in the lower part of Fig. 15. We first performed finite element calculations based on the standard return mapping algorithm with a Voce-type plasticity model; the results are shown in blue. The same boundary value problem was then calculated using the trained COMM-PINN model instead of the standard user-defined material model. By comparing the red and blue curves, we conclude that the new method has the potential to replace the classical method in a finite element calculation without any changes to the available codes. Finally, on the right-hand side of Fig. 15, we compare the stress distribution from the two methodologies at the end of the loading path and observe identical results.

Figure 15: Comparison of the proposed approach against the classical return mapping algorithm when integrated into the finite element package (trusspy [56]). Top left: the geometry and boundary conditions for a simple truss-based structure. Bottom: obtained reaction force from the two methodologies. Right: distribution of stress in the truss-based system, obtained from the two methodologies.
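Conceptually, replacing the return mapping routine with the trained network amounts to swapping the local stress update inside the element loop. The sketch below illustrates this idea under the assumption of a 1D elastoplastic stress relation \(\sigma=E(\varepsilon-\varepsilon_{p})\) and a placeholder `comm_pinn` network; it is a schematic material hook, not the actual trusspy interface.

```python
import torch

E = 3.0  # elastic stiffness [MPa], Table 4

def nn_material_update(comm_pinn, eps_new, eps_p_old, xi_p_old):
    """Schematic drop-in replacement for a user-defined material routine:
    the trained NN updates the internal variables, the stress follows the
    assumed 1D elastic relation, and the consistent tangent d(sigma)/d(eps)
    is obtained by automatic differentiation."""
    eps = torch.tensor(float(eps_new), requires_grad=True)
    state = torch.tensor([eps_p_old, xi_p_old])
    out = comm_pinn(torch.cat([eps.reshape(1), state]))  # -> (eps_p_new, xi_p_new)
    eps_p_new, xi_p_new = out[0], out[1]
    sigma = E * (eps - eps_p_new)          # assumed: sigma = E (eps - eps_p)
    (tangent,) = torch.autograd.grad(sigma, eps)
    return sigma.item(), eps_p_new.item(), xi_p_new.item(), tangent.item()
```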
## 7 Conclusion and outlooks

This work demonstrates the potential of physics-informed neural networks in solving constitutive relations for material mechanics. The approach is applicable when the free energy function and the internal variables of the model are known beforehand. We proposed strategies to construct the loss terms for training the network, utilizing expressions for the thermodynamic forces, evolution laws, and yield functions. This eliminates the need for labeled data during training. We also suggested techniques to simplify the process and reduce the order of differentiation required for evaluating the tangent operator.

Compared to classical return mapping algorithms and other purely data-driven methods, the proposed COMM-PINN method offers several advantages. The model can instantly predict the material response for any loading path within the range of the collocation points. By tuning the weighting of the loss terms and performing hyperparameter studies, the model's extrapolation capabilities can be enhanced. The proposed method avoids complex linearizations and yields the tangent operator naturally from the neural network. Labeled data is not necessary for model training, and everything is based on the governing material equations derived from standard thermodynamic principles. Interestingly, the proposed approach showed better performance than purely data-driven methods for a complex test loading path. Finally, the trained material routine can be easily integrated into any finite element package.

In future studies, it will be necessary to extend the framework and train the system for other material properties. Investigations into multi-dimensional cases and more complex material routines are also required to better understand the advantages of this approach. Ideally, the NN can be trained once for each material model and used as a substitute for standard material subroutines, eliminating the need to rewrite codes and material models in different programming languages.

**Data Availability**: The codes are available at COMM-PINN.

**Author Statement**: S.R.: Conceptualization, Software, Supervision, Writing - Review & Editing. A. M.: Software, Writing - Review & Editing. A. H.: Software, Writing - Review & Editing.
2310.17149
Explainable Spatio-Temporal Graph Neural Networks
Spatio-temporal graph neural networks (STGNNs) have gained popularity as a powerful tool for effectively modeling spatio-temporal dependencies in diverse real-world urban applications, including intelligent transportation and public safety. However, the black-box nature of STGNNs limits their interpretability, hindering their application in scenarios related to urban resource allocation and policy formulation. To bridge this gap, we propose an Explainable Spatio-Temporal Graph Neural Networks (STExplainer) framework that enhances STGNNs with inherent explainability, enabling them to provide accurate predictions and faithful explanations simultaneously. Our framework integrates a unified spatio-temporal graph attention network with a positional information fusion layer as the STG encoder and decoder, respectively. Furthermore, we propose a structure distillation approach based on the Graph Information Bottleneck (GIB) principle with an explainable objective, which is instantiated by the STG encoder and decoder. Through extensive experiments, we demonstrate that our STExplainer outperforms state-of-the-art baselines in terms of predictive accuracy and explainability metrics (i.e., sparsity and fidelity) on traffic and crime prediction tasks. Furthermore, our model exhibits superior representation ability in alleviating data missing and sparsity issues. The implementation code is available at: https://github.com/HKUDS/STExplainer.
Jiabin Tang, Lianghao Xia, Chao Huang
2023-10-26T04:47:28Z
http://arxiv.org/abs/2310.17149v1
# Explainable Spatio-Temporal Graph Neural Networks

###### Abstract.

Spatio-temporal graph neural networks (STGNNs) have gained popularity as a powerful tool for effectively modeling spatio-temporal dependencies in diverse real-world urban applications, including intelligent transportation and public safety. However, the black-box nature of STGNNs limits their interpretability, hindering their application in scenarios related to urban resource allocation and policy formulation. To bridge this gap, we propose an Explainable Spatio-Temporal Graph Neural Networks (STExplainer) framework that enhances STGNNs with inherent explainability, enabling them to provide accurate predictions and faithful explanations simultaneously. Our framework integrates a unified spatio-temporal graph attention network with a positional information fusion layer as the STG encoder and decoder, respectively. Furthermore, we propose a structure distillation approach based on the Graph Information Bottleneck (GIB) principle with an explainable objective, which is instantiated by the STG encoder and decoder. Through extensive experiments, we demonstrate that our STExplainer outperforms state-of-the-art baselines in terms of predictive accuracy and explainability metrics (_i.e._, sparsity and fidelity) on traffic and crime prediction tasks. Furthermore, our model exhibits superior representation ability in alleviating data missing and sparsity issues. The implementation code is available at: [https://github.com/HKUDS/STExplainer](https://github.com/HKUDS/STExplainer).

Keywords: Spatio-Temporal Data Mining; Graph Neural Networks; Urban Computing; Explainable AI

+ Footnote †: Chao Huang is the corresponding author.
## 1. Introduction

Enhancing the interpretability of STGNNs is essential for improved deployment of spatio-temporal models in real-world scenarios. Human-interpretable explanations can assist decision-makers in effectively utilizing spatio-temporal models for various downstream tasks, including urban planning, intelligent transportation systems, and emergency resource scheduling. To address this crucial gap, we propose the development of explainability models specifically tailored for STGNNs.
However, existing graph explainability models primarily focus on classification tasks over synthetic benchmarks such as BA-Shapes, BA-Community, and BA-Cycles (Coles, 2018), and there is currently a lack of ground-truth datasets for spatio-temporal explainability. Therefore, we need to tackle the following key questions to advance the field of spatio-temporal explainability:

**Q1:** How can the explainability of STGNNs be defined?

**Q2:** How can STGNNs be endowed with spatial and temporal explainability to provide insights into the underlying cross-region and cross-time dependencies?

**Q3:** How can the explainability of STGNNs be evaluated in the absence of ground-truth labels?

**Contribution.** In this study, we address the aforementioned challenges by presenting Explainable Spatio-Temporal Graph Neural Networks (STExplainer). Our framework offers scalability, interpretability, and generalization capabilities. We achieve this by decomposing the STG into separate spatial and temporal graphs and employing a unified spatio-temporal graph attention network to encode the spatial and temporal dynamics. Furthermore, we integrate spatio-temporal positional information into the STG decoder layer. In our approach, we define explainability as the ability to identify influential spatial and temporal subgraphs that have a significant impact on the predictive results. To accomplish this, we propose utilizing the spatio-temporal Graph Information Bottleneck (GIB) with a structure-distilled explainable objective. We employ variational approximation to make the objective tractable and instantiate the variational bounds with our proposed STG encoder and decoder. To evaluate the explainability of STGNNs without ground-truth labels, we adapt two metrics, _Sparsity_ and _Fidelity_, to the explainable evaluation of STGNNs. In summary, our work makes the following contributions:

* To the best of our knowledge, we present the first systematic investigation into the explainability of STGNNs, specifically focusing on identifying the most influential spatial and temporal subgraphs in relation to the prediction results.
* We propose a novel explainable framework, STExplainer, which integrates the structure-distilled graph information bottleneck principle with a unified spatio-temporal attentive encoder and decoder to enhance the explainability and generalization of STGNNs.
* In our proposed STExplainer framework, we employ the spatio-temporal graph information bottleneck principle with a structure-distilled explainable objective to control the information flow and characterize it with a unified STGNN. We utilize graph attention networks and a position-aware information fusion layer to encode both interpretable and generalizable STG representations.
* We conduct extensive experiments across various settings to evaluate the performance of STExplainer in terms of predictive accuracy and explainability. Comparisons over various datasets demonstrate that our model outperforms state-of-the-art baselines.

## 2. Preliminaries

**Spatio-Temporal Graph Forecasting.** In Spatio-Temporal Graph (STG) forecasting, we analyze a scenario with \(N\) nodes representing regions and \(T\) time steps.
The spatio-temporal graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{A},\mathbf{X})\) is formed, where \(\mathcal{V}\) denotes the set of \(N\) nodes representing regions, \(\mathcal{E}\) represents the edges recorded by the adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\), and \(\mathbf{X}\in\mathbb{R}^{T\times N\times F}\) is the feature matrix associated with attributes such as traffic volumes or crime occurrences. Here, \(F\) represents the feature dimensions, and \(T\) represents the time steps. With these definitions in place, we can formally define the task of spatio-temporal graph forecasting as follows:

**Problem Statement**. In STG forecasting, the goal is to learn a predictive function denoted as \(f\). This function aims to predict specific attributes of the spatio-temporal graph in the next \(L^{\prime}\) time steps, given the previous \(L\) historical observations.

\[\mathbf{Y}_{t:t+L^{\prime}-1}=f(\mathcal{G}(\mathcal{V},\mathcal{E},\mathbf{A},\mathbf{X}_{t-L:t-1})) \tag{1}\]

\(\mathbf{X}_{t-L:t-1}\in\mathbb{R}^{L\times N\times F}\) denotes the historical observations with \(F\) feature dimensions from time step \(t-L\) to \(t-1\). \(\mathbf{Y}\in\mathbb{R}^{L^{\prime}\times N\times F^{\prime}}\) represents the predictions with \(F^{\prime}\) feature dimensions for the next \(L^{\prime}\) time steps.

**Explainable Graph Neural Networks.** The research community has recently been captivated by the field of explainable Artificial Intelligence (XAI) for graphs, which focuses on providing reliable and interpretable explanations to enhance the trustworthiness of black-box Graph Neural Network (GNN) models. The primary objective of XAI for graphs is to foster a sense of trust and enable effective utilization of these models by human users (Sutskever et al., 2017). Motivated by previous studies on the explainability of canonical graphs (Coles, 2018), we propose to enhance the explainability of STGNNs by searching for a subgraph \(\mathcal{G}_{S}\) based on the STG \(\mathcal{G}\) and the ground-truth label \(\mathbf{Y}\):

\[\mathcal{G}_{S}=\operatorname*{arg\,max}_{\mathcal{G}_{S}}I(\mathbf{Y},\mathcal{G}_{S})=H(\mathbf{Y})-H(\mathbf{Y}|\mathcal{G}_{S}) \tag{2}\]

where \(I(\cdot)\) denotes the mutual information function, \(H(\cdot)\) represents the information entropy, and \(\mathcal{G}_{S}=(\mathcal{V}_{S},\mathcal{E}_{S},\mathbf{A}_{S},\mathbf{X}_{S})\) represents the subgraph of \(\mathcal{G}\) with the sub-node set \(\mathcal{V}_{S}\), the sub-edge set \(\mathcal{E}_{S}\), the sub-adjacency matrix \(\mathbf{A}_{S}\), and the sub-feature matrix \(\mathbf{X}_{S}\). The model is optimized to find the subgraph \(\mathcal{G}_{S}\) that makes the most prominent contribution to the predictions made by the model \(f\), which helps humans comprehend the black-box GNN model \(f\) intuitively. Next, we introduce the definitions of the two categories of explainability approaches on graphs, _i.e._, _post-hoc_ and _intrinsic_, as follows.
(i) **Post-hoc.** Given the GNN model \(f\), a _post-hoc_ method aims to learn an explainability function \(\Gamma\) to identify the subgraph \(\mathcal{G}_{S}\) that contributes most to the performance of \(f\):

\[\mathcal{G}_{S}=\Gamma(\mathcal{G},\mathbf{Y},f)\quad\text{s.t.}\ \max_{\mathcal{G}_{S}}I(\mathbf{Y},\mathcal{G}_{S}) \tag{3}\]

(ii) **Intrinsic.** Distinct from _post-hoc_ methods, the goal of _intrinsic_ approaches is to learn a unified model \(f^{\prime}\) that simultaneously predicts the target graph signals and identifies the subgraph \(\mathcal{G}_{S}\) that impacts the model \(f^{\prime}\) the most, which is defined as below:

\[\hat{\mathbf{Y}},\mathcal{G}_{S}=f^{\prime}(\mathcal{G})\quad\text{s.t.}\ \min\mathcal{L}(\mathbf{Y},\hat{\mathbf{Y}})\ \land\ \max_{\mathcal{G}_{S}}I(\mathbf{Y},\mathcal{G}_{S}) \tag{4}\]

where \(\mathcal{L}\) denotes a specific loss function supervising the predictive task on graphs, and \(\hat{\mathbf{Y}}\) denotes the predicted results of the model. We summarize the notations frequently used in our paper in Table 1.

## 3. Methodology

In this section, we provide a detailed description of the technical aspects and theoretical analysis of our STExplainer framework. Our framework encompasses a unified STGNN encoder that employs spatio-temporal graph attention networks to reason about spatio-temporal dependencies. Additionally, we propose the structure-distilled Graph Information Bottleneck (GIB) for STGs to select explainable subgraph structures that benefit the downstream forecasting. The overall architecture of STExplainer is illustrated in Figure 1.

### Spatio-Temporal Graph Attention Networks

#### 3.1.1. **Spatial Relation Learning**

Inspired by the strength of GNNs in reasoning about complicated correlations (Srivastava et al., 2015; Wang et al., 2016; Wang et al., 2017), especially in spatio-temporal modeling (Srivastava et al., 2015; Wang et al., 2016), we propose a unified GNN encoder that adapts graph attention networks (Wang et al., 2016) to capture the spatio-temporal dependencies. Following (Wang et al., 2016), one could employ a unified GNN-based framework to capture spatio-temporal dependencies on a joint spatio-temporal graph structure \(\mathbf{A}\in\mathbb{R}^{TN\times TN}\). To avoid the enormous time complexity of such joint STG learning, we decouple the joint graph into a temporal graph and a spatial graph. First, the STG feature matrix \(\mathbf{X}\in\mathbb{R}^{T\times N\times F}\) is embedded into a \(d\)-dimensional latent space with a fully connected layer:

\[\mathbf{X}^{(0)}=\mathbf{X}\cdot\mathbf{W}^{(0)}+\mathbf{b}^{(0)} \tag{5}\]

where \(\mathbf{X}^{(0)}\in\mathbb{R}^{T\times N\times d}\) represents the initial embeddings of the STG, and \(\mathbf{W}^{(0)}\in\mathbb{R}^{F\times d}\), \(\mathbf{b}^{(0)}\in\mathbb{R}^{d}\) denote the weight and bias matrices. Furthermore, to individually encode the spatial and temporal dynamics with our GAT, \(\mathbf{X}^{(0)}\) is converted to spatial embeddings \(\mathbf{X}^{(s)}=\{\mathbf{x}^{(s)}_{j}\in\mathbb{R}^{d_{s}},1\leq j\leq N\}\) via a linear transformation:

\[\mathbf{x}^{(s)}_{j}=\sum_{i=1}^{T}\mathbf{X}^{(0)}_{i,j,:}\mathbf{W}^{(s)}_{i}+\mathbf{b}^{(s)} \tag{6}\]

where \(\mathbf{W}^{(s)}\in\mathbb{R}^{T\times d\times d_{s}}\) and \(\mathbf{b}^{(s)}\in\mathbb{R}^{d_{s}}\) indicate the weight and bias parameters.
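The per-time-step transformation in Eq. 6 reduces to a single tensor contraction; a minimal PyTorch sketch, with illustrative sizes, could look as follows.

```python
import torch

T, N, d, d_s = 12, 170, 64, 64   # illustrative sizes (T steps, N regions)
X0 = torch.randn(T, N, d)        # initialized STG embeddings from Eq. 5
W_s = torch.randn(T, d, d_s)     # weight tensor W^(s)
b_s = torch.randn(d_s)           # bias b^(s)

# Eq. 6: x_j^(s) = sum_i X0[i, j, :] @ W_s[i] + b_s, for each region j
X_s = torch.einsum("tnd,tde->ne", X0, W_s) + b_s  # shape (N, d_s)
```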
In this stage, we utilize the spatial subgraph of the STG \(\mathcal{G}\), which is defined by \(\mathcal{G}^{(s)}=(\mathcal{V}^{(s)},\mathcal{E}^{(s)},\mathbf{A}^{(s)},\mathbf{X}^{(s)})\), where \(\mathbf{A}^{(s)}\in\mathbb{R}^{N\times N}\) denotes the spatial adjacency matrix recording the spatial node-wise correlations. For the spatial graph reasoning, we employ a GAT with stacked multi-head graph attention layers, where the \(K\)-head graph attention layer is defined as below:

\[\mathbf{h}^{(s)}_{j}=\sum_{k=1}^{K}\sum_{j^{\prime}\in\mathcal{N}(j)\cup\{j\}}\alpha^{k}_{j,j^{\prime}}\cdot\mathbf{W}^{k}\mathbf{x}^{(s)}_{j^{\prime}},\qquad\alpha_{j,j^{\prime}}=\frac{\exp(\sigma(\vec{a}^{\top}[\mathbf{W}\mathbf{x}^{(s)}_{j}+\mathbf{W}\mathbf{x}^{(s)}_{j^{\prime}}]))}{\sum_{j^{\prime\prime}\in\mathcal{N}(j)\cup\{j\}}\exp(\sigma(\vec{a}^{\top}[\mathbf{W}\mathbf{x}^{(s)}_{j}+\mathbf{W}\mathbf{x}^{(s)}_{j^{\prime\prime}}]))} \tag{7}\]

where \(\mathcal{N}(j)\) represents the set of neighbors of the \(j\)-th region according to \(\mathbf{A}^{(s)}\), \(\vec{a}\in\mathbb{R}^{d_{s}}\) represents the weight vector, \(\mathbf{W}\in\mathbb{R}^{d_{s}\times d_{s}}\) indicates the weight parameters, and \(\sigma(\cdot)\) denotes the LeakyReLU activation function. With multiple GAT layers, we obtain the extracted spatial embeddings \(\mathbf{H}^{(s)}=\{\mathbf{h}^{(s)}_{j}\in\mathbb{R}^{d_{s}},1\leq j\leq N\}\). Then we transform \(\mathbf{H}^{(s)}\) into the spatio-temporal embedding space to obtain \(\mathbf{H}^{\prime(s)}\in\mathbb{R}^{T\times N\times d}\), utilizing a fully connected layer with weight matrix \(\mathbf{W}^{(1)}\in\mathbb{R}^{T\times d\times d_{s}}\) and bias parameters \(\mathbf{B}^{(1)}\in\mathbb{R}^{T\times d}\):

\[\mathbf{H}^{\prime(s)}_{i,j,:}=\mathbf{W}^{(1)}_{i}\cdot\mathbf{h}^{(s)}_{j}+\mathbf{B}^{(1)}_{i} \tag{8}\]

#### 3.1.2. **Temporal Relation Learning**

We follow a similar relation learning paradigm to model the temporal graph \(\mathcal{G}^{(t)}=(\mathcal{V}^{(t)},\mathcal{E}^{(t)},\mathbf{A}^{(t)},\mathbf{X}^{(t)})\), where \(\mathbf{A}^{(t)}\in\mathbb{R}^{T\times T}\) indicates the temporal adjacency matrix revealing the correlations among time steps, and \(\mathbf{X}^{(t)}=\{\mathbf{x}^{(t)}_{i}\in\mathbb{R}^{d_{t}},1\leq i\leq T\}\) represents the temporal feature matrix. \(\mathbf{X}^{(t)}\) is transformed from \(\mathbf{H}^{\prime(s)}\) by utilizing a fully connected layer similar to Eq 6. To model the temporal dynamics, stacked multi-head GAT layers defined analogously to Eq 7 are utilized to generate the temporal feature matrix \(\mathbf{H}^{(t)}=\{\mathbf{h}^{(t)}_{i}\in\mathbb{R}^{d},1\leq i\leq T\}\). Eventually, we transform the temporal features \(\mathbf{H}^{(t)}\) into the final spatio-temporal embedding matrix \(\mathbf{H}=\mathbf{H}^{\prime(t)}\in\mathbb{R}^{T\times N\times d}\), adopting a transformation similar to Eq 8. We summarize how to construct the spatial and temporal graphs as follows:

(i) **spatial graph (\(\mathbf{A}^{(s)}\)):** The spatial graph represents the correlations between spatial units. For the two common types of spatio-temporal prediction, _i.e._, graph-based and grid-based (Ghahramani et al., 2017), we can construct the graph using a thresholded Gaussian kernel (Srivastava et al., 2015) or by considering neighboring regions as neighbors (Srivastava et al., 2015; Wang et al., 2016), respectively.
(ii) **temporal graph (\(\mathbf{A}^{(t)}\)):** The temporal graph represents the correlations between the temporal representations at different time steps. Formally, if the number of historical time steps is \(T\), we have a temporal graph \(\mathbf{A}^{(t)}\in\mathbb{R}^{T\times T}\) with \(\mathbf{A}^{(t)}_{i,j}=1\) for arbitrary \(i,j\). This means that we assume a priori that every time step can influence the others. Applying GAT for message passing on the temporal graph is equivalent to existing works (Srivastava et al., 2015) that utilize self-attention to capture temporal correlations.

\begin{table} \begin{tabular}{c|c} \hline **Notation** & **Description** \\ \hline \(\mathbf{X}\in\mathbb{R}^{T\times N\times F}\) & Original STG feature matrix. \\ \(\mathbf{X}^{(0)}\in\mathbb{R}^{T\times N\times d}\) & Initialized STG embeddings. \\ \(\mathbf{X}^{(s)}=\{\mathbf{x}^{(s)}_{j}\in\mathbb{R}^{d_{s}},1\leq j\leq N\}\) & Spatial feature matrix. \\ \(\mathbf{H}^{(s)}=\{\mathbf{h}^{(s)}_{j}\in\mathbb{R}^{d_{s}},1\leq j\leq N\}\) & Extracted spatial embeddings. \\ \(\mathbf{X}^{(t)}=\{\mathbf{x}^{(t)}_{i}\in\mathbb{R}^{d_{t}},1\leq i\leq T\}\) & Temporal feature matrix. \\ \(\mathbf{H}^{(t)}=\{\mathbf{h}^{(t)}_{i}\in\mathbb{R}^{d},1\leq i\leq T\}\) & Extracted temporal embeddings. \\ \(\mathbf{H}\in\mathbb{R}^{T\times N\times d}\) & Final output of the proposed STG encoder. \\ \(\mathcal{G}^{(s)}\) with \(\mathbf{A}^{(s)}\in\mathbb{R}^{N\times N}\) & Spatial graph and its adjacency matrix. \\ \(\mathcal{G}^{(t)}\) with \(\mathbf{A}^{(t)}\in\mathbb{R}^{T\times T}\) & Temporal graph and its adjacency matrix. \\ \hline \end{tabular} \end{table} Table 1: Summary of the notations frequently used in this paper.

#### 3.1.3. **Position-Aware STG Prediction**

To enhance the modeling of spatio-temporal contexts in the model inference phase of our STExplainer, we propose to inject spatial and temporal positional embeddings into the foregoing STG relational embeddings \(\mathbf{H}\). Specifically, multiple free-form embeddings are leveraged by our STExplainer: the region representations \(\mathbf{E}^{(s)}\in\mathbb{R}^{N\times d}\), the _time of day_ embeddings \(\mathbf{E}^{(ToD)}\in\mathbb{R}^{T\times d}\), and the _day of week_ embeddings \(\mathbf{E}^{(DoW)}\in\mathbb{R}^{T\times d}\). For implementation, we randomly initialize a tensor \(\mathbf{E}^{(s)}\in\mathbb{R}^{N\times d}\), whose values are updated during back-propagation (i.e., it is learnable). As for the temporal positional embeddings, we randomly initialize a _time of day_ tensor \(\mathbf{E}_{\text{all}}^{(ToD)}\in\mathbb{R}^{288\times d}\) and a _day of week_ tensor \(\mathbf{E}_{\text{all}}^{(DoW)}\in\mathbb{R}^{7\times d}\), where 288 is the number of time steps in a day (at a 5-minute interval) and 7 is the number of days in a week. The _time of day_ and _day of week_ indices of the input STG query the corresponding tensors to obtain the temporal positional embeddings.
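A minimal sketch of this temporal positional lookup (the table sizes follow the text; the index values are illustrative):

```python
import torch

d = 64  # embedding dimension
tod_table = torch.nn.Embedding(288, d)  # time-of-day table (5-minute slots)
dow_table = torch.nn.Embedding(7, d)    # day-of-week table

# indices of the input window's time steps (illustrative values)
tod_idx = torch.tensor([100, 101, 102, 103])
dow_idx = torch.tensor([2, 2, 2, 2])

E_ToD = tod_table(tod_idx)  # (T, d) time-of-day positional embeddings
E_DoW = dow_table(dow_idx)  # (T, d) day-of-week positional embeddings
```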
Then, STExplainer makes predictions as follows:

\[\mathbf{Y}=\mathbf{MLP}_{1}(\mathbf{H}\|\mathbf{E}^{(s)}\|\mathbf{E}^{(ToD)}\|\mathbf{E}^{(DoW)}\|\mathbf{MLP}_{2}(\mathbf{X})) \tag{9}\]

where \(\|\) denotes concatenation, and \(\mathbf{MLP}_{1}(\cdot)\) and \(\mathbf{MLP}_{2}(\cdot)\) denote two multi-layer perceptrons for making the final predictions and leveraging the low-level features \(\mathbf{X}\), respectively. \(\mathbf{Y}\) denotes the predictions of future STG attributes using the position-aware STG embeddings.

### Spatio-Temporal Explainability with GIB

#### 3.2.1. GIB-based Explainable Structure Distillation

The Graph Information Bottleneck (GIB) technique is designed to compress graph-structured data into low-dimensional representations that exhibit a strong correlation with the downstream labels. These compressed representations capture a subset of the original information while effectively accounting for the labels in subsequent tasks. As a result, GIB has been recognized as an explainable model in the literature (Zhu et al., 2017; Wang et al., 2018). The underlying principle of GIB is to optimize the embeddings by minimizing the following objective:

\[\min_{\mathbb{P}(\mathbf{Z}_{X}|\mathcal{G})}-I(\mathbf{Y},\mathbf{Z}_{X})+\beta I(\mathcal{G},\mathbf{Z}_{X}) \tag{10}\]

where \(\mathbf{Z}_{X}\) denotes the hidden representations of the graph feature matrix \(\mathbf{X}\). While the conventional GIB generates low-dimensional representations that capture the reasoning behind the downstream labels, these dense hidden embeddings are often challenging for humans to comprehend. This limitation significantly restricts the applicability of the conventional GIB for model interpretation. To pursue explainable spatio-temporal graph (STG) models, as outlined in Eq 2, we draw inspiration from (Zhu et al., 2017) and propose the structure-distilled GIB, which applies the Information Bottleneck (IB) principle to distilled subgraph structures, enabling the acquisition of a small subset of interpretable STG structures. Specifically, the objective of our structure-distilled GIB is defined as follows:

\[\min_{\mathbb{P}(\mathcal{G}_{S}|\mathcal{G})}-I(\mathbf{Y},\mathcal{G}_{S})+\beta\cdot I(\mathcal{G},\mathcal{G}_{S}) \tag{11}\]

where the subgraph \(\mathcal{G}_{S}=(\mathcal{V}_{S},\mathcal{E}_{S},\mathbf{A}_{S},\mathbf{X}_{S})\) represents the distilled subgraph obtained from the conditional probability distribution given the original graph \(\mathcal{G}\). In real-world scenarios, the graph structures play a crucial role in spatio-temporal graphs and are easier for humans to interpret as a rationale for model inference. Therefore, we prioritize the use of subgraph structures for interpretation purposes and simplify the objective presented in Equation 11 by defining the subgraph as \(\mathcal{G}_{S}=(\mathcal{V}_{S},\mathcal{E}_{S},\mathbf{A}_{S},\mathbf{X})\).

#### 3.2.2. Variational Bounds for Structure-Distilled GIB

Since the mutual information terms \(I(\mathbf{Y},\mathcal{G}_{S})\) and \(I(\mathcal{G},\mathcal{G}_{S})\) are intractable, we resort to variational bounds to estimate each term in the objective. For the lower bound of the first term \(I(\mathbf{Y},\mathcal{G}_{S})\), we can utilize the fact that \(\operatorname{KL}[\mathbb{P}(\mathbf{Y}|\mathcal{G}_{S}),\mathbb{Q}_{1}(\mathbf{Y}|\mathcal{G}_{S})]\geq 0\), where \(\mathbb{Q}_{1}(\mathbf{Y}|\mathcal{G}_{S})\) represents an arbitrary distribution of \(\mathbf{Y}\) given \(\mathcal{G}_{S}\).
Thus, we obtain:

\[I(\mathbf{Y},\mathcal{G}_{S})=\mathbb{E}_{\mathbf{Y},\mathcal{G}_{S}}[\log\frac{\mathbb{P}(\mathbf{Y}|\mathcal{G}_{S})}{\mathbb{P}(\mathbf{Y})}]\geq\mathbb{E}_{\mathbf{Y},\mathcal{G}_{S}}[\log\mathbb{Q}_{1}(\mathbf{Y}|\mathcal{G}_{S})] \tag{12}\]

The term \(\mathbb{Q}_{1}(\mathbf{Y}|\mathcal{G}_{S})\) represents the variational approximation of \(\mathbb{P}(\mathbf{Y}|\mathcal{G}_{S})\), which can be modeled using neural networks within an end-to-end framework. Specifically, \(\mathbb{Q}_{1}(\mathbf{Y}|\mathcal{G}_{S})\) aims to predict the results based on the subgraph \(\mathcal{G}_{S}\). Regarding the upper bound of the second term \(I(\mathcal{G},\mathcal{G}_{S})\), we can establish that \(\operatorname{KL}[\mathbb{P}(\mathcal{G}_{S}),\mathbb{Q}_{2}(\mathcal{G}_{S})]\geq 0\) holds true. We can formalize it as follows:

\[I(\mathcal{G},\mathcal{G}_{S})=\mathbb{E}_{\mathcal{G},\mathcal{G}_{S}}[\log\frac{\mathbb{P}(\mathcal{G}_{S}|\mathcal{G})}{\mathbb{P}(\mathcal{G}_{S})}]\leq\mathbb{E}_{\mathcal{G}}[\operatorname{KL}(\mathbb{P}(\mathcal{G}_{S}|\mathcal{G})\|\mathbb{Q}_{2}(\mathcal{G}_{S}))] \tag{13}\]

Figure 1. The overall framework of the proposed STExplainer: the STG is decoupled into spatial and temporal graph structures to capture spatio-temporal features. These structures are then fed into the structure-distilled GIB module with the ST-Edge Encoder, resulting in spatial and temporal graph edge representations. Additionally, ST-Edge Sampling is employed to obtain explainable spatial and temporal graph structures. Finally, the ST-GAT is utilized to encode spatial and temporal dependencies on the explainable structures, ultimately producing the final results.

Here, \(\mathbb{Q}_{2}(\mathcal{G}_{S})\) is the variational approximation of the marginal distribution \(\mathbb{P}(\mathcal{G}_{S})\). The ultimate objective for Eq 11 is thus defined as:

\[\min_{\mathbb{P}(\mathcal{G}_{S}|\mathcal{G})}-\mathbb{E}_{\mathbf{Y},\mathcal{G}_{S}}[\log\mathbb{Q}_{1}(\mathbf{Y}|\mathcal{G}_{S})]+\beta\,\mathbb{E}_{\mathcal{G}}[\mathrm{KL}(\mathbb{P}(\mathcal{G}_{S}|\mathcal{G})\|\mathbb{Q}_{2}(\mathcal{G}_{S}))] \tag{14}\]

### Spatio-Temporal GIB Characterization

To minimize the upper bound in Eq 14 for our structure-distilled GIB, it is necessary to characterize the distributions \(\mathbb{P}(\mathcal{G}_{S}|\mathcal{G})\), \(\mathbb{Q}_{1}(\mathbf{Y}|\mathcal{G}_{S})\), and \(\mathbb{Q}_{2}(\mathcal{G}_{S})\).

i) \(\mathbb{P}(\mathcal{G}_{S}|\mathcal{G})\): To extract the influential subgraph \(\mathcal{G}_{S}\) from the original graph \(\mathcal{G}\), we incorporate randomness into the instantiated networks. In particular, we begin by embedding the spatio-temporal graphs \(\mathcal{G}^{(s)}\) and \(\mathcal{G}^{(t)}\) using the unified STGNN encoder. This process yields the spatio-temporal node representations \(\mathbf{H}^{(s)}=\{\widetilde{h}^{(s)}_{j}\in\mathbb{R}^{d_{s}}\}\) and \(\mathbf{H}^{(t)}=\{\widetilde{h}^{(t)}_{i}\in\mathbb{R}^{d_{t}}\}\). Next, we employ the concatenation operator \(\|\) and an MLP \(\mathcal{F}_{\Theta}\) with parameters \(\Theta\) to encode the spatio-temporal edge representations.
This encoding step is defined as:

\[\widetilde{h}^{(s)}_{vu}=\mathcal{F}_{\Theta^{(s)}}(\widetilde{h}^{(s)}_{v}\|\widetilde{h}^{(s)}_{u}),\ \mathrm{s.t.}\ u\in\mathcal{N}(v);\qquad\widetilde{h}^{(t)}_{vu}=\mathcal{F}_{\Theta^{(t)}}(\widetilde{h}^{(t)}_{v}\|\widetilde{h}^{(t)}_{u}),\ \mathrm{s.t.}\ u\in\mathcal{N}(v) \tag{15}\]

where \(\mathcal{N}(v)\) denotes the neighbor set of node \(v\). Subsequently, we employ the Gumbel-Softmax reparameterization trick (Gumbel and Softmax, 1957; Gumbel and Softmax, 1957) to compute the spatio-temporal probabilities \(p^{(s)}_{vu}\) and \(p^{(t)}_{vu}\) for each edge in a differentiable manner. This gives us:

\[p^{(s)}_{vu}=\sigma((\widetilde{h}^{(s)}_{vu}+g)/\tau),\quad p^{(t)}_{vu}=\sigma((\widetilde{h}^{(t)}_{vu}+g)/\tau) \tag{16}\]

where \(g\) is a set of i.i.d. samples drawn from a Gumbel(0,1) distribution, and \(\tau\) is the temperature parameter that controls the smoothness of the resulting distribution. Consequently, we obtain the spatio-temporal explainable subgraph structures \(\mathbf{A}^{(s)}_{S}\) and \(\mathbf{A}^{(t)}_{S}\):

\[\mathbf{A}^{(s)}_{S}=\alpha^{(s)}\odot\mathbf{A}^{(s)},\quad\alpha^{(s)}_{vu}\sim\mathrm{Bern}(p^{(s)}_{vu});\qquad\mathbf{A}^{(t)}_{S}=\alpha^{(t)}\odot\mathbf{A}^{(t)},\quad\alpha^{(t)}_{vu}\sim\mathrm{Bern}(p^{(t)}_{vu}) \tag{17}\]

where \(\odot\) is the element-wise product, and \(\alpha^{(s)}\) and \(\alpha^{(t)}\) are the spatio-temporal subgraph selectors used to extract the explainable subgraphs. Consequently, the spatio-temporal term \(\mathbb{P}(\mathcal{G}_{S}|\mathcal{G})\) is instantiated as:

\[\mathbb{P}(\mathcal{G}^{(s)}_{S}|\mathcal{G}^{(s)})=\prod_{v,u\in\mathcal{V}^{(s)}}\mathbb{P}(\alpha^{(s)}_{vu}|p^{(s)}_{vu}),\qquad\mathbb{P}(\mathcal{G}^{(t)}_{S}|\mathcal{G}^{(t)})=\prod_{v,u\in\mathcal{V}^{(t)}}\mathbb{P}(\alpha^{(t)}_{vu}|p^{(t)}_{vu}) \tag{18}\]

**ii) \(\mathbb{Q}_{1}(\mathbf{Y}|\mathcal{G}_{S})\)**: The goal of this variational approximation is to infer the spatio-temporal dynamics based solely on the extracted spatio-temporal explainable subgraphs. To achieve this, we utilize the proposed spatio-temporal graph attention network (ST-GAT) architecture, which consists of the same set of learnable parameters as introduced in Section 3.1. It is important to note that when calculating \(\mathbb{Q}_{1}(\mathbf{Y}|\mathcal{G}_{S})\), our ST-GAT performs message propagation exclusively along the sampled explainable edges and nodes.
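Before turning to the prior \(\mathbb{Q}_{2}(\mathcal{G}_{S})\), a compact sketch of the edge-probability computation and reparameterized sampling in Eqs. 15 to 17 may be helpful. This is a plain PyTorch illustration; the edge MLP, the temperature, and the straight-through Bernoulli step are assumptions, not the exact implementation.

```python
import torch

def sample_explainable_edges(h_nodes, edge_index, edge_mlp, tau=1.0):
    """Edge keep-probabilities p_vu (Eq. 16) and a differentiable edge
    mask alpha (Eq. 17). `edge_index` is a (2, E) tensor of (v, u) pairs;
    `edge_mlp` maps concatenated node embeddings to a scalar logit."""
    v, u = edge_index
    h_vu = edge_mlp(torch.cat([h_nodes[v], h_nodes[u]], dim=-1)).squeeze(-1)  # Eq. 15
    noise = torch.rand_like(h_vu).clamp_(1e-9, 1.0 - 1e-9)
    g = -torch.log(-torch.log(noise))            # i.i.d. Gumbel(0, 1) samples
    p = torch.sigmoid((h_vu + g) / tau)          # Eq. 16
    # straight-through Bern(p): one common relaxation, assumed here
    alpha = torch.bernoulli(p).detach() + p - p.detach()
    return alpha, p
```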
**iii) \(\mathbb{Q}_{2}(\mathcal{G}_{S})\)**: Regarding the prior distribution \(\mathbb{Q}_{2}(\mathcal{G}_{S})\), we have the following formalizations for the spatial and temporal graphs:

\[\mathbb{Q}_{2}(\mathcal{G}^{(s)}_{S})=\sum_{\mathcal{G}^{(s)}}\mathbb{P}(\mathcal{G}^{(s)},\mathcal{G}^{(s)}_{S})=\sum_{\mathcal{G}^{(s)}}\mathbb{P}(\mathcal{G}^{(s)}_{S}|\mathcal{G}^{(s)})\mathbb{P}(\mathcal{G}^{(s)}),\qquad\mathbb{Q}_{2}(\mathcal{G}^{(t)}_{S})=\sum_{\mathcal{G}^{(t)}}\mathbb{P}(\mathcal{G}^{(t)},\mathcal{G}^{(t)}_{S})=\sum_{\mathcal{G}^{(t)}}\mathbb{P}(\mathcal{G}^{(t)}_{S}|\mathcal{G}^{(t)})\mathbb{P}(\mathcal{G}^{(t)}) \tag{19}\]

Following (Gumbel and Softmax, 1957), for the given spatio-temporal graphs \(\mathcal{G}^{(s)}\) and \(\mathcal{G}^{(t)}\) with \(n^{(s)}\) and \(n^{(t)}\) edges, we sample the prior spatio-temporal selectors \(\alpha^{\prime(s)}\) and \(\alpha^{\prime(t)}\), which is defined as below:

\[\alpha^{\prime(s)}\sim\mathrm{Bern}(r^{(s)}),\quad\alpha^{\prime(t)}\sim\mathrm{Bern}(r^{(t)});\qquad\mathbb{Q}_{2}(\mathcal{G}^{(s)}_{S})=\sum_{n^{(s)}}\mathbb{P}(\alpha^{\prime(s)}|n^{(s)})\mathbb{P}(n^{(s)}),\quad\mathbb{Q}_{2}(\mathcal{G}^{(t)}_{S})=\sum_{n^{(t)}}\mathbb{P}(\alpha^{\prime(t)}|n^{(t)})\mathbb{P}(n^{(t)}) \tag{20}\]

The selector value \(\alpha^{\prime}_{vu}=1\) indicates that the edge \((v,u)\in\mathcal{E}\) is kept in the graph \(\mathcal{G}\). The hyperparameters \(r^{(s)}\) and \(r^{(t)}\) are the sampling ratios. Since \(\mathbb{P}(n^{(s)})\) and \(\mathbb{P}(n^{(t)})\) are constants and independent of \(\alpha^{\prime(s)}\) and \(\alpha^{\prime(t)}\), we can simplify the expression and ultimately obtain:

\[\mathbb{Q}_{2}(\mathcal{G}^{(s)}_{S})=\mathbb{P}(n^{(s)})\prod_{v,u=1}^{n}\mathbb{P}(\alpha^{\prime(s)}_{vu}),\qquad\mathbb{Q}_{2}(\mathcal{G}^{(t)}_{S})=\mathbb{P}(n^{(t)})\prod_{v,u=1}^{n}\mathbb{P}(\alpha^{\prime(t)}_{vu}) \tag{21}\]

### Model Optimization

In our STExplainer framework, we optimize towards the objective of the structure-distilled GIB as defined in Equation 14. To infer the downstream labels \(\mathbf{Y}\) using the explainable subgraph \(\mathcal{G}_{S}\), we utilize different loss functions depending on the specific spatio-temporal prediction task. For instance, when predicting future traffic volumes, we employ the Huber loss (Gumbel and Softmax, 1957):

\[\mathcal{L}_{0}(\mathbf{Y},\hat{\mathbf{Y}})=\mathcal{H}(\mathbf{Y},\hat{\mathbf{Y}})=\begin{cases}\frac{1}{2}(\mathbf{Y}-\hat{\mathbf{Y}})^{2},&\left|\mathbf{Y}-\hat{\mathbf{Y}}\right|\leq\delta\\ \delta\left(\left|\mathbf{Y}-\hat{\mathbf{Y}}\right|-\frac{1}{2}\delta\right),&\text{otherwise}\end{cases} \tag{22}\]

where \(\delta\) denotes the threshold hyperparameter. For crime prediction, we instead utilize the mean squared error (MSE) loss following (Gumbel and Softmax, 1957): \(\mathcal{L}_{0}(\mathbf{Y},\hat{\mathbf{Y}})=\left\|\mathbf{Y}-\hat{\mathbf{Y}}\right\|_{2}^{2}\). For the second term in the upper-bound GIB objective (Eq 14), we employ specific loss functions for the spatial and temporal explainable subgraphs, respectively.
\[\mathcal{L}_{\text{S-GIB}}=\mathbb{E}_{\mathcal{G}^{(s)}}[\mathrm{KL}(\mathbb{P}(\mathcal{G}^{(s)}_{S}|\mathcal{G}^{(s)})\|\mathbb{Q}_{2}(\mathcal{G}^{(s)}_{S}))]=\sum_{(v,u)\in\mathcal{E}^{(s)}}p^{(s)}_{vu}\log\frac{p^{(s)}_{vu}}{r^{(s)}}+(1-p^{(s)}_{vu})\log\frac{1-p^{(s)}_{vu}}{1-r^{(s)}}+C\]
\[\mathcal{L}_{\text{T-GIB}}=\mathbb{E}_{\mathcal{G}^{(t)}}[\mathrm{KL}(\mathbb{P}(\mathcal{G}^{(t)}_{S}|\mathcal{G}^{(t)})\|\mathbb{Q}_{2}(\mathcal{G}^{(t)}_{S}))]=\sum_{(v,u)\in\mathcal{E}^{(t)}}p^{(t)}_{vu}\log\frac{p^{(t)}_{vu}}{r^{(t)}}+(1-p^{(t)}_{vu})\log\frac{1-p^{(t)}_{vu}}{1-r^{(t)}}+C \tag{23}\]
where \(C\) is a constant. Combining the above loss functions, the optimization of our STExplainer framework minimizes the following jointly-trained objective, with weighting hyperparameters \(\lambda_{1},\lambda_{2}\):
\[\mathcal{L}=\mathcal{L}_{0}+\lambda_{1}\mathcal{L}_{\text{S-GIB}}+\lambda_{2}\mathcal{L}_{\text{T-GIB}} \tag{24}\]
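For concreteness, a minimal PyTorch sketch of this joint objective follows, assuming the edge probabilities from Eq. (16) are available as tensors `p_s` and `p_t`; all names and default values are illustrative rather than taken from a released implementation.

```python
import torch
import torch.nn.functional as F

def gib_kl(p, r, eps=1e-6):
    """KL(Bern(p) || Bern(r)) summed over edges, Eq. (23) up to the constant C."""
    p = p.clamp(eps, 1.0 - eps)
    return (p * torch.log(p / r)
            + (1.0 - p) * torch.log((1.0 - p) / (1.0 - r))).sum()

def stexplainer_loss(y, y_hat, p_s, p_t, r_s=0.7, r_t=0.7,
                     lambda1=1.0, lambda2=1.0, delta=1.0):
    # Prediction term L_0: Huber loss for traffic volumes, Eq. (22)
    # (swap in F.mse_loss for the crime-prediction setting)
    l0 = F.huber_loss(y_hat, y, delta=delta)
    # Structure-distilled spatial and temporal GIB terms, Eqs. (23)-(24)
    return l0 + lambda1 * gib_kl(p_s, r_s) + lambda2 * gib_kl(p_t, r_t)
```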
## 4. Experiments

To evaluate the performance of STExplainer in terms of predictive accuracy and explainability, we conduct extensive experiments on three real-world traffic datasets and two crime datasets, answering the following research questions: **RQ1**: How does STExplainer perform when predicting future traffic volume and crimes compared to various state-of-the-art baselines? **RQ2**: How does the STExplainer framework compare to different state-of-the-art explainable models in terms of quantitative explainability? **RQ3**: How do key components contribute to the performance of the STExplainer framework? **RQ4**: How does the STExplainer framework perform in terms of generalization and robustness? **RQ5**: What is the influence of various hyperparameter settings on the predictive accuracy of STExplainer? **RQ6**: What visual explanations can be provided by the STExplainer?

### Experimental Settings

#### 4.1.1. **Datasets and Evaluation Protocols.** The experiments are conducted on both graph-based traffic prediction tasks and grid-based crime prediction tasks, utilizing five real-world datasets. The statistics of our experimental datasets are summarized in Table 3. **Traffic Prediction.** The model evaluation is first conducted using three widely used traffic datasets: PeMS04, PeMS07, and PeMS08 (Feng et al., 2017; Wang et al., 2018). They were collected by the Caltrans Performance Measurement System (PeMS) with a time interval of 5 minutes, covering different time ranges. To ensure a fair comparison, we split the datasets into training, validation, and testing sets in a 6:2:2 ratio. The evaluation of the models is performed using three metrics: _Mean Absolute Error (MAE)_, _Root Mean Squared Error (RMSE)_, and _Mean Absolute Percentage Error (MAPE)_. **Crime Prediction.** We also investigate the effectiveness of our model in spatio-temporal prediction using crime datasets: NYC Crime and CHI Crime. These datasets, collected from New York City and Chicago, respectively, capture crime incidents on a daily basis and are constructed with a spatial partition unit of \(3km\times 3km\). Following the approach adopted in recent literature (Wang et al., 2018), we generate the training and testing sets in a ratio of 7:1. In the training set, crime records from the last month are used for validation purposes. We utilize the MAE and MAPE as our evaluation metrics. **Metrics for Explainability Analysis.** Given the absence of ground-truths specifically designed for spatio-temporal explainability, we adopt metrics commonly used in the context of explainability for GNNs, namely _Sparsity_ and _Fidelity_ (Zhou et al., 2018). To accommodate the spatio-temporal nature of our tasks, we modify the _Fidelity_ metric so that it is tailored to autoregressive tasks:
\[Fidelity+^{(s\backslash t)}=\frac{1}{Q}\sum_{i=1}^{Q}\left(|f(\mathcal{G}_{i}^{(s\backslash t)})-f(\mathcal{G}_{i}^{(s\backslash t),1-m_{i}})|\right) \tag{25}\]
The modified _Fidelity_ metric, denoted as \(Fidelity+^{(s\backslash t)}\), is utilized to measure the explainability of the Spatio-Temporal Graph (STG) framework. In this context, \(Q\) represents the number of spatial and temporal graphs, \(f\) represents the trained predictive spatio-temporal function, \(\mathcal{G}_{i}^{(s\backslash t)}\) represents the i\({}^{th}\) original spatial\(\backslash\)temporal graph, and \(m_{i}\) indicates the i\({}^{th}\) extracted explainable subgraph. Consequently, \(\mathcal{G}_{i}^{(s\backslash t),1-m_{i}}\) refers to the masked spatial\(\backslash\)temporal graph based on the complementary subgraph structure \(1-m_{i}\). Furthermore, the _Sparsity_ metric is redefined in the context of spatio-temporal graphs to capture the level of explainability:
\[Sparsity+^{(s\backslash t)}=\frac{1}{Q}\sum_{i=1}^{Q}\left(1-\frac{|m_{i}|}{|M_{i}|}\right) \tag{26}\]
where \(Sparsity+^{(s\backslash t)}\) indicates the spatial/temporal _Sparsity_ of the explainable subgraphs, and \(|m_{i}|\) and \(|M_{i}|\) represent the number of important nodes in the explainable subgraph and in the original graph, respectively.
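A minimal sketch of these two modified metrics follows, assuming the predictive function returns per-graph predictions as NumPy arrays and that the masked graphs have been built from the complements \(1-m_{i}\) beforehand; all names are illustrative.

```python
import numpy as np

def fidelity_plus(f, graphs, masked_graphs):
    # Eq. (25): mean absolute prediction change when each explainable
    # subgraph is removed (i.e., predicting on the complement 1 - m_i)
    return float(np.mean([np.mean(np.abs(f(g) - f(g_masked)))
                          for g, g_masked in zip(graphs, masked_graphs)]))

def sparsity_plus(subgraph_sizes, graph_sizes):
    # Eq. (26): one minus the fraction of important nodes kept, |m_i| / |M_i|
    return float(np.mean([1.0 - m / M
                          for m, M in zip(subgraph_sizes, graph_sizes)]))
```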
#### 4.1.2. **Compared Baseline Methods.** We compare STExplainer to methods from two classes to validate its performance in terms of both accuracy and explainability. **Predictive Accuracy:** For the evaluation of our STExplainer on traffic datasets, we compare it with 18 baselines that fall into 5 different categories. Similarly, for the evaluation on crime datasets, we adopt 12 baselines categorized into 5 different categories. _Traffic Prediction:_ (1) **Conventional Statistical Methods**: HA (Han et al., 2017), VAR (Han et al., 2017); (2) **Attention Methods**: ASTGCN (Krishnam et al., 2018), DSTAGNN (Krishnam et al., 2018); (3) **Neural Differential Equation Models**: STG-ODE (Feng et al., 2017), STGCNC (Wang et al., 2018), STGCNN (Wang et al., 2018); (4) **GNN-based Methods**: DCRNN (Krishnam et al., 2018), STGCN (Wang et al., 2018), GWN (Wang et al., 2018), STGCN (Krishnam et al., 2018), STGCNN (Krishnam et al., 2018), Z-GCNETs (Wang et al., 2018), TAMP-S2GCNets (Wang et al., 2018), GMSDR (Krishnam et al., 2018), FOGS (Krishnam et al., 2018); (5) **Variant of STExplainer**: STExplainer-CGIB (STExplainer with Conventional GIB). _Crime Prediction:_ (1) **Conventional Statistical Methods**: HA (Han et al., 2017), SVM (Krishnam et al., 2018); (2) **CNN-based Approach**: ST-ResNet (Wang et al., 2018); (3) **Hybrid Spatio-Temporal Models**: ST-MetaNet (Wang et al., 2018), STDN (Wang et al., 2018); (4) **Attention Methods**: DeepCrime (Krishnam et al., 2018), STtrans (Krishnam et al., 2018); (5) **GNN-based Models**: DCRNN (Krishnam et al., 2018), STGCN (Krishnam et al., 2018), GMAN (Han et al., 2017), ST-SHN (Wang et al., 2018), DMSTGCN (Krishnam et al., 2018). **Predictive Explainability:** To evaluate the predictive explainability of our approach, we employ four baselines, which can be grouped into two categories: (1) **Post-hoc Methods**: GNNExplainer (Han et al., 2017), PGExplainer (Han et al., 2017), GraphMask (Wang et al., 2018); (2) **Intrinsic Approach**: STExplainer-CGIB (STExplainer with Conventional GIB).

#### 4.1.3. **Hyperparameter Settings.** Our STExplainer is implemented using PyTorch and the PyTorch Geometric library, with the Adam optimizer, a learning rate of \(1e^{-3}\), and a decay ratio of 0.5. We utilize two GAT layers with 16 heads for spatial and temporal encoding, with dimensions of 64 and 128, respectively. The prior probabilities \(r^{(s)}\) and \(r^{(t)}\) are scheduled with a decay ratio of 0.1 and a decay interval of 10 epochs. We employ an annealing strategy for \(\lambda_{1}\) and \(\lambda_{2}\) in the loss function, gradually changing them from 0 to 1 as the number of epochs increases. For traffic forecasting, we predict the next 12 time steps based on the past 12 time steps, while for crime prediction, we use 30 days of historical records to predict the next day.

### Prediction Accuracy Comparison (RQ1)

Table 2 showcases the performance comparison between our STExplainer and state-of-the-art baselines on the three traffic datasets. Additionally, Table 4 presents the comparison results for crime prediction, highlighting the best-performing model on each dataset. Based on these results, we make the following observations: * **Overall Superiority of STExplainer.** Our STExplainer consistently outperforms state-of-the-art baselines in both tasks. This is attributed to its effective architecture, utilizing an STG attentive encoder and decoder with a position-aware fusion layer. Additionally, the incorporation of the explainable structure-distilled Graph Information Bottleneck (GIB) helps filter out irrelevant information and noise, improving accuracy and interpretability. * **Comparing to State-of-the-arts.** Compared to attention-based models like ASTGCN, DSTAGNN, DeepCrime, and STtrans, our STExplainer achieves significant improvements in predictive performance. The explainable GIB principle filters task-irrelevant structural correlations, allowing attentive information to propagate over influential subgraphs. The spatio-temporal GIB demonstrates generalization and robustness, extracting task-relevant information from sparse crime data. The performance gap with GNN-based approaches (FOGS, GMSDR, TAMP-S2GCNets, Z-GCNETs, DMSTGCN, ST-SHN, GMAN) highlights the effectiveness of using graph attention mechanisms to alleviate over-smoothing effects while modeling complex spatial and temporal correlations. Furthermore, comparing our STExplainer with the variant STExplainer-CGIB, which surpasses most baselines, further confirms the effectiveness of our framework. The GIB principle, instantiated by unified STG attention networks, plays a crucial role in improving performance. * **Visualization of predictions.** We further visualize the predictive results on PEMS04, comparing our STExplainer with two competitive baselines, namely STGODE and GMSDR, along with the ground-truth results. The visual comparison, depicted in Figure 3, highlights the superiority of our STExplainer. It excels in predicting inflection points that involve sharp jitter changes due to its capability to filter out task-irrelevant information, capture essential spatio-temporal dynamics, and provide more accurate results.

### Model Explainability Evaluation (RQ2)

In this subsection, we quantitatively analyze the spatio-temporal explainability of our STExplainer. We use the modified metrics, namely _Sparsity_ and _Fidelity_, to evaluate the spatial and temporal graphs. The comparison results on PEMS04 are shown in Figure 2.
To ensure a fair comparison, we employ _post-hoc_ frameworks to explain STGNN models using the same STG encoder and decoder. Higher scores in _Sparsity_ and _Fidelity_ indicate better predictive explainability, the aim being to extract smaller yet impactful spatio-temporal subgraphs. The STExplainer framework, incorporating the explainable information bottleneck, achieves the best explainable performance compared to state-of-the-art approaches. This validates the effectiveness of injecting explainability into our unified STGNN architecture with the IB principle. Among the _post-hoc_ methods, PGExplainer outperforms the others for STGNN models, providing effective global explanations.

\begin{table} \begin{tabular}{c|c c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{HA} & \multirow{2}{*}{VAR} & \multirow{2}{*}{DRNN} & \multirow{2}{*}{STGCN} & \multirow{2}{*}{GWN} & \multicolumn{2}{c}{AST} & \multicolumn{2}{c}{STS} & \multicolumn{2}{c}{Stem} & \multicolumn{2}{c}{Z-} & \multicolumn{2}{c}{TAMP-} & \multicolumn{2}{c}{STAP-} & \multicolumn{2}{c}{STAP-} & \multicolumn{2}{c}{STExplainer} \\ & & & & & & GCN & GCN & GNN & & GCN & S2GCNets & & & NCDE & GNN & & CGIB & \\ \hline \multirow{6}{*}{GCNETs} & MAE & 38.03 & 24.54 & 21.22 & 21.16 & 24.89 & 22.93 & 25.11 & 19.83 & 19.83 & 20.84 & 19.50 & 19.74 & 20.49 & 19.74 & 19.21 & 19.30 & 19.14 & **18.57** \\ & RMSE & 92.94 & 38.61 & 33.44 & 34.89 & 39.66 & 35.22 & 33.86 & 33.80 & 32.26 & 31.88 & 32.82 & 31.61 & 31.74 & 32.13 & 31.16 & 31.09 & 31.46 & 30.77 & **30.14** \\ & MAPE(\%) & 27.88 & 17.24 & 14.17 & 13.83 & 17.29 & 16.56 & 13.90 & 16.10 & 12.97 & 13.02 & 13.77 & 12.78 & 13.22 & 14.15 & 13.05 & 12.76 & 12.70 & 12.91 & **12.13** \\ \cline{2-15} & MAE & 45.12 & 50.22 & 52.52 & 25.23 & 25.33 & 26.40 & 24.26 & 22.23 & 22.37 & 22.07 & 22.99 & 21.77 & 21.84 & 22.27 & 21.28 & 20.53 & 21.42 & 20.55 & **20.00** \\ & RMSE & 65.64 & 75.63 & 36.61 & 39.34 & 37.40 & 41.50 & 37.87 & 90.36 & 34.66 & 35.55 & 38.00 & 37.54 & 35.17 & 35.42 & 34.94 & 34.83 & 33.84 & 34.51 & 35.12 & 33.45 \\ & MAPE(\%) & 45.31 & 22.22 & 11.97 & 10.73 & 12.01 & 9.20 & 9.12 & 9.21 & 9.21 & 10.14 & 9.25 & 9.24 & 9.86 & 8.95 & 8.80 & 9.01 & 8.61 & **8.51** \\ \cline{2-15} & MAE & 34.86 & 19.19 & 16.82 & 17.50 & 18.28 & 18.25 & 17.13 & 15.91 & 15.95 & 16.64 & 16.81 & 15.76 & 13.66 & 13.65 & 15.75 & 15.45 & 15.67 & 14.87 & **14.59** \\ \cline{2-15} & RMSE & 22.94 & 29.83 & 26.79 & 27.90 & 30.05 & 28.06 & 28.54 & 25.22 & 25.97 & 25.11 & 25.98 & 25.84 & 29.42 & 28.14 & 27.47 & 24.07 & **23.91** \\ \cline{2-15} & MAPE(\%) & 24.07 & 13.10 & 10.92 & 11.29 & 12.15 & 11.64 & 10.96 & 10.90 & 10.60 & 10.62 & 10.01 & 10.15 & 10.28 & 9.88 & 9.92 & 9.94 & 10.26 & **9.80** \\ \hline \hline \end{tabular} \end{table} Table 2. Performance comparison of different methods on the PEMS04, 07, and 08 datasets.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Dataset & Type & Volume & Interval & \# Nodes & Time Span & \# Features \\ \hline PeMSD4 & Graph & Traffic & 5 min & 307 & 01/2018 - 02/2018 & 1 \\ PeMSD7 & Graph & Traffic & 5 min & 883 & 05/2017 - 08/2017 & 1 \\ PeMSD8 & Graph & Traffic & 5 min & 170 & 07/2016 - 08/2016 & 1 \\ NYC Crime & Grid & Crime & 1 day & 256 & 01/2014 - 12/2015 & 4 \\ CHI Crime & Grid & Crime & 1 day & 168 & 01/2016 - 12/2017 & 4 \\ \hline \hline \end{tabular} \end{table} Table 3. Statistical information of the experimental datasets.

Figure 2. Overall explainability comparison of STExplainer.
However, the performance gap between our _intrinsic_ STExplainer framework and the _post-hoc_ methods highlights its inherent advantage in providing faithful explanations for spatio-temporal GNN architectures.

### Ablation Study (RQ3)

We investigate the effectiveness of the proposed modules by designing variants of our STExplainer: i) "-CGIB": We replace the explainable GIB principle with the conventional one to compare different ways of controlling bottleneck information. ii) "-w/o SIB" and "-w/o TIB": We remove the explainable GIB-based GAT encoder in spatial and temporal modeling, respectively, and use a canonical GAT instead. iii) "-drop 0.5", "-drop 0.3", and "-drop 0.0": We randomly drop edges of the spatio-temporal graphs with different probabilities, instead of utilizing the explainable spatio-temporal GIB. We analyze the results on PEMS04 and PEMS08, which are shown in Figure 4. Through these experiments, we make the following discoveries: * Regarding the "-CGIB" variant, the performance improvements achieved by our STExplainer demonstrate the superiority of the explainable structure-distilled GIB over the conventional one in controlling the flow of structural information during inference. This is attributed to the significance and sensitivity of graph structure in graph neural network models. * The application of explainable GIB demonstrates its effectiveness in capturing important spatial and temporal dependencies, as evidenced by the variants "-w/o SIB" and "-w/o TIB". The influence of spatial and temporal GIB on the framework's performance depends on the credibility and noise levels present in the original spatial and temporal graph structures. In this regard, we argue that the fully connected temporal graph often contains more task-irrelevant structural noise that needs to be filtered out. * When examining the variants "-drop 0.5", "-drop 0.3", and "-drop 0.0", a noticeable performance gap becomes apparent. This gap arises because random edge dropping cannot differentiate between task-relevant and task-irrelevant edges in the graph structure. It is worth noting that "-drop 0.5" and "-drop 0.3" outperform "-drop 0.0" in terms of generalization. This outcome validates the necessity of edge dropping and motivates us to develop more accurate and efficient edge-dropping strategies in the future.

### Generalization and Robustness Study (RQ4)

The inherent ability of GIB to extract task-relevant and prediction-influential spatio-temporal information allows us to further validate the generalization and robustness of our STExplainer. To achieve this, we address two specific data quality issues. **Performance _w.r.t._ Data Missing.** In real-world spatio-temporal scenarios, missing-data challenges often arise due to sensor failures and privacy policies. To assess the performance of our STExplainer in such cases, we randomly drop traffic volumes on each node with proportions of 10%, 30%, and 50% for traffic prediction. It is important to note that the data drops between nodes are independent. The results on PEMS04 are presented in Table 5, where "-" indicates that the model fails in this situation. The results demonstrate that our STExplainer is capable of filtering out the random noise introduced by the data drop, showcasing its generalization and robustness. We compare our STExplainer with three competitive baselines, namely STGODE, GMSDR, and STG-NCDE. We observe that the performance of STGODE and GMSDR sharply decreases as the proportion of dropped data increases.
However, our STExplainer demonstrates its robustness by adapting to data-missing scenarios better than STG-NCDE. **Performance _w.r.t._ Data Sparsity.** In practical scenarios, e.g., crime prediction, spatio-temporal signals across the observed space often exhibit sparsity, with many regions or nodes having zero values. This poses a challenge for achieving better generalization and robustness of the model. To address this, we categorize regions in the crime prediction tasks based on historical region density. We compare the predictive results of our STExplainer with the baselines on the density ranges "0-0.25" and "0.25-0.5", as depicted in Figure 5. The notable performance gap underscores the capability of our STExplainer framework to extract task-relevant information from sparse data, resulting in improved predictive performance. This is particularly noteworthy as our STExplainer outperforms methods specifically tailored for crime prediction, e.g., STtrans and ST-SHN.

Figure 4. Ablation experiments of our STExplainer. Figure 5. Performance comparison on sparse regions.

\begin{table} \begin{tabular}{c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{model} & \multicolumn{9}{c}{PEMS04} \\ \cline{2-10} & \multicolumn{3}{c|}{missing 10\%} & \multicolumn{3}{c|}{missing 30\%} & \multicolumn{3}{c}{missing 50\%} \\ \cline{2-10} & MAE & RMSE & MAPE & MAE & RMSE & MAPE & MAE & RMSE & MAPE \\ \hline STGODE & 23.97 & 35.41 & 19.13 & 45.02 & 59.48 & 29.54 & - & - & - \\ GMSDR & 21.69 & 34.06 & 13.81 & 25.02 & 38.45 & 15.01 & 10.51 & 131.64 & 47.31 \\ STG-NCDE & 19.36 & 31.28 & 12.79 & 14.01 & 31.30 & 13.04 & 19.98 & 32.09 & 13.48 \\ STExplainer & 19.12 & 30.84 & 12.61 & 19.34 & 31.20 & 12.86 & 19.92 & 32.05 & 13.27 \\ \hline \hline \end{tabular} \end{table} Table 5. Performance comparison against missing data.

### Hyperparameter Investigation (RQ5)

We conduct a hyperparameter investigation by varying specific hyperparameters while keeping the others at their default values. We focus on four significant hyperparameters: the number of heads (\(K\)), the prior probability (\(r\)), the spatial dimension (\(d^{(s)}\)), and the temporal dimension (\(d^{(t)}\)). The results of our experiments on PEMS04 are displayed in Figure 6. Here are our detailed experiments and observations: i) **Head numbers** (\(K\)): We vary the number of heads in the spatio-temporal GAT encoder within the range \(\{2,2^{2},2^{3},2^{4}\}\). We find that the model with \(2^{4}\) heads achieves the best performance. Increasing the number of heads enables the model to capture spatio-temporal correlations from multiple dimensions. ii) **Spatial and temporal dimensions** (\(d^{(s)}\), \(d^{(t)}\)): We search for the optimal spatial and temporal dimensions in the spatio-temporal GAT encoder within the range \(\{2^{4},2^{5},2^{6},2^{7}\}\). We find that \(d^{(s)}=2^{6}\) and \(d^{(t)}=2^{7}\) serve as the best settings. iii) **Prior probability** (\(r\)): The prior probability \(r\) represents the spatio-temporal prior probability in Equation 23. We explore the search range "fix 0.5", "0.9-0.3", "0.9-0.5", "0.9-0.7", where "fix 0.5" indicates fixing \(r\) at 0.5, and the last three options involve letting \(r\) decay from 0.9 as the number of epochs increases.

### Model Interpretation Case Study (RQ6)

We conduct an investigation into the explanations provided by our STExplainer, identifying important subgraphs and analyzing their spatio-temporal patterns.
To achieve this, we obtain explainable subgraphs by discarding edges with low edge weights. Subsequently, we employ extensive visualization techniques to gain a deeper understanding of the relationships. **Spatio-Temporal Pattern Explanations:** On the left side of Figure 7 (a), our STExplainer identifies node 28 as being more related to node 57 than to nodes 242 and 243. This relationship is also apparent in the adjacent time series diagram, where node 28 and node 57 exhibit similar time trend patterns with comparable peaks and valleys. The thickness of the arrow connecting the nodes represents the edge weights encoded by the GIB principle. Furthermore, on the right side of Figure 7 (a), we observe that node 47 is considered to be weakly correlated with node 36 and other nodes. This is evident from the highlighted peaks, valleys, and rising time points in the adjacent time series plot. Both figures demonstrate that our STExplainer provides explanations that accurately reflect spatio-temporal trend patterns across time and locations. **Spatial Semantics Explanation:** Due to the unavailability of coordinate information in the PEMS04, PEMS07, and PEMS08 datasets, we focus on exploring the precise semantic information provided by explanations on the CHI Crime dataset. Figure 7 (b) visually presents our findings, where interconnected regions exhibit similar regional functionality, particularly in terms of shared Point of Interest (POI) information. For example, regions 118 and 129, as well as regions 141 and 142, display comparable POI characteristics, implying functional resemblance. Conversely, region 119 stands out as relatively isolated due to its primarily oceanic nature and lack of substantial POI information. These findings underscore the significance of explainability for prediction effectiveness.

## 5. Conclusion

In this study, we emphasize the significance of explainability in spatio-temporal graph neural networks. To address this, we propose a novel framework called STExplainer that not only predicts future spatio-temporal signals accurately but also provides transparent explanations. Our framework incorporates GIB-based structure distillation with an explainable objective and employs variational approximation for tractability. Additionally, we introduce a unified STG encoder and decoder that generate explainable, generalizable, and robust STG representations. Through extensive experiments, we demonstrate the superiority of our STExplainer in terms of predictive accuracy, explainability, generalization, and robustness, surpassing existing state-of-the-art methods. In future work, we plan to investigate effective approaches for integrating explainability into global spatial information propagation mechanisms, such as hypergraph neural networks, using an _intrinsic_ explainable approach.

Figure 6. Hyperparameter study of the proposed STExplainer. Figure 7. Case study of our STExplainer.
2307.13092
Not Hydro: Using Neural Networks to estimate galaxy properties on a Dark-Matter-Only simulation
Using data from TNG300-2, we train a neural network (NN) to recreate the stellar mass ($M^*$) and star formation rate (SFR) of central galaxies in a dark-matter-only simulation. We consider 12 input properties from the halo and sub-halo hosting the galaxy and the near environment. $M^*$ predictions are robust, but the machine does not fully reproduce its scatter. The same happens for SFR, but the predictions are not as good as for $M^*$. We chained neural networks, improving the predictions on SFR to some extent. For SFR, we time-averaged this value between $z=0$ and $z=0.1$, which improved results for $z=0$. Predictions of both variables have trouble reproducing values at lower and higher ends. We also study the impact of each input variable on the performance of the predictions using a leave-one-covariate-out approach, which led to insights about the physical and statistical relation between input variables. In terms of metrics, our machine outperforms similar studies, but the main discoveries in this work are not linked with the quality of the predictions themselves, but to how the predictions relate to the input variables. We find that previously studied relations between physical variables are meaningful to the machine. We also find that some merger tree properties strongly impact the performance of the machine. We conclude that ML models are useful tools to understand the significance of different physical properties and their impact on target characteristics, as well as strong candidates for potential simulation methods.
Cristian Hernández Cuevas, Roberto E. González, Nelson D. Padilla
2023-07-24T19:27:22Z
http://arxiv.org/abs/2307.13092v1
# Not Hydro: Using Neural Networks to estimate galaxy properties on a Dark-Matter-Only simulation

###### Abstract

Using data from TNG300-2, we train a neural network (NN) to recreate the stellar mass (\(M^{*}\)) and star formation rate (SFR) of central galaxies in a dark-matter-only simulation. We consider 12 input properties from the halo and sub-halo hosting the galaxy and the near environment. \(M^{*}\) predictions are robust, but the machine does not fully reproduce its scatter. The same happens for SFR, but the predictions are not as good as for \(M^{*}\). We chained neural networks, improving the predictions on SFR to some extent. For SFR, we time-averaged this value between \(z=0\) and \(z=0.1\), which improved results for \(z=0\). Predictions of both variables have trouble reproducing values at lower and higher ends. We also study the impact of each input variable on the performance of the predictions using a leave-one-covariate-out approach, which led to insights about the physical and statistical relation between input variables. In terms of metrics, our machine outperforms similar studies, but the main discoveries in this work are not linked with the quality of the predictions themselves, but to how the predictions relate to the input variables. We find that previously studied relations between physical variables are meaningful to the machine. We also find that some merger tree properties strongly impact the performance of the machine. We conclude that ML models are useful tools to understand the significance of different physical properties and their impact on target characteristics, as well as strong candidates for potential simulation methods.

keywords: Methods: data analysis - Cosmology: large-scale structure of Universe - Galaxy: halo

## 1 Introduction

Cosmological simulations are a powerful tool to understand the nature and evolution of galaxies, large-scale structures and the baryonic processes occurring within them. The \(\Lambda\)CDM universe can be described with only six observationally-tuned parameters, and the evolution of the cold dark matter in the universe is modelled by following gravitational interactions between dark matter particles. The complexity of the models only increases when one takes into consideration the physics of baryonic matter. Simulations follow these models with the goal of replicating the evolution of the large-scale universe and the emergence of different bodies and structures. The first simulations created were dark-matter-only (hereafter DM-Only), involving only gravitational interactions between particles (Aarseth, 1971). Simulations that model the evolution this way are referred to as N-body simulations. With time, simulations evolved into more sophisticated algorithms, such as semi-analytical models (hereafter SAM), which, coupled to DM-Only simulations, were able to follow baryonic properties of galaxies (Kauffmann et al., 1993), and hydrodynamical (hereafter hydro) simulations (Katz et al., 1992). The main difference between SAM and hydro simulations is that SAMs take the approach of using approximate, analytic techniques to treat the various physical processes associated with galaxy formation, which makes them computationally cheaper (Cole et al., 2002). In comparison, hydro simulations solve the physics of galaxy formation by directly computing the fundamental equations of gravitation, hydrodynamics, cooling and star formation, and in some cases even radiative transfer, between a large number of particles.
Both SAM and hydro simulations can be contrasted with observations, thus testing the physical processes involved from different perspectives. The purpose of simulations is to understand the physical processes involved in the formation and evolution of galaxies and the universe. Since some simulations are more complex than others, there is a large variety of scales and objects to study: from star formation and evolution to galaxies and the large-scale structure of the universe. Simulations have different applications; for example, they can be used to pre-analyze large future observational surveys of galaxies (Sanchez et al., 2021; Abolfathi et al., 2021; Korytov et al., 2019). The majority of cosmological surveys are focused on observing galaxies; therefore, for a simulation it is vital to consistently reproduce different features of galaxies and baryonic matter. However, the modeling of said galaxies is non-trivial due to the complex physical processes in their formation and evolution. In the context of hydrodynamical simulations, evolving tens of billions of particles interacting under the coupled effects of gravity, magneto-hydrodynamics and radiative processes over cosmic time is incredibly computationally costly, even for a small fraction of the Universe. This cost increases with the volume of the desired simulation. For example, TNG300-1 (one of the simulations of the suite IllustrisTNG (Nelson et al., 2021; Pillepich et al., 2017c; Springel et al., 2017; Nelson et al., 2017; Naiman et al., 2018; Marinacci et al., 2018)), which has a simulation box of \(205\mathrm{Mpc}/h\) per side, required almost 35 million CPU hours to complete (Yip et al., 2019; Pillepich et al., 2017). This poses a challenge, since new surveys are incrementally larger and simulations must keep pace with such large volumes of data (Djorgovski et al., 2013). In contrast to magneto-hydrodynamical simulations, DM-Only N-body simulations are computationally much cheaper, as gravity is the only interacting force. For example, the DM-Only simulation Millennium, which traces the evolution of dark matter in a cube of roughly \(500\mathrm{Mpc}/h\) on a side, took only 350 thousand CPU hours (Springel et al., 2005). In contrast with TNG300-1, Millennium took 100 times less computing time for a volume 14 times larger. On top of this, we must consider that finding haloes and characterizing them is already a time-consuming task (Knebe et al., 2011). Therefore, it would be extremely interesting to find a time-efficient mapping from dark matter characteristics in N-body simulations to the baryonic properties in full hydrodynamical simulations. With this in mind, the goal would be to find this mapping and learn, from a different perspective, about the variables that govern the galaxy formation process. While we know statistical relations between dark matter properties and baryonic features, we propose a different approach. Since simulations cover very large volumes, there is a flood of information to process in order to find the mapping we seek. Since DM-Only simulations can provide merger trees as part of their output, we can somewhat reduce the complexity of the whole simulation by tracking particles in said trees and inferring the dark matter properties of the haloes and subhaloes that emerge. In a typical simulation, we can find hundreds of thousands of subhaloes (Chaves-Montero et al., 2016; Dantas, 2021; Feng & Modi, 2016; Dolag et al., 2009; Gómez et al., 2021), with a set of features like mass, half-mass radius, spin, velocity dispersion, etc.
Therefore, the "domain" of this mapping contains a very large amount of data. There are quite a few theoretical models which describe different relationships between the dark matter environment of baryonic matter and its properties, using approaches with a phenomenological approach (Wang et al., 2013), simulation-based (Tacchella et al., 2018) or via machine learning (Jo & Kim, 2019; Agarwal et al., 2018) methods. But with the amount and variety of data we are trying to model we would need a quite complex model to describe the relationships between all features of interest. In the spirit of understanding large volumes of data, machine learning (hereafter ML) is a strong alternative to traditional methods. Even more so, supervised ML specializes in constructing mappings between a set of measurements (input) and a target variable (output). There is one condition though. In order for the algorithm to "learn", it needs a set of provided examples. Furthermore, a larger set of inputs and outputs may improve the performance of the mapping fit by the algorithm (Sun et al., 2017; Zhu et al., 2015). Once obtained, the mapping function can be used to predict the output of previously-unseen inputs. The main difference between traditional modeling and supervised ML is that the mapping is predefined in traditional methods, while the supervised algorithm constructs the mapping according to the training data. A second advantage for ML algorithms is the computing time as most of the computing resources are needed in the training phase of the algorithm. After that, the cost of inference of an output by a trained machine is low. There are quite a few fields where ML algorithms outperform classical methods used for the same means. In astrophysics there has been a rapid increase of studies on the applications of ML methods to process different types of data. Examples range from detection, classification and analysis of structures in astronomical images (Gonzalez et al., 2018; Jacobs et al., 2017) and spectrographic data (Baron & Poznanski, 2016). The application of ML in this area is not only oriented to observational data. Studies have been successful in using ML methods to analyze, predict and even replicate data from astrophysical simulations (Kamdar et al., 2016; Agarwal et al., 2018). The nature itself of astronomical data (both in form and volume) make astrophysics and ML kindred fields of study. For simulations to be useful for the prediction and analysis of cosmological surveys, one must take in account the nature of the surveys. Some surveys detect galaxies using photometric observations (e.g. LSST (Ivezic et al., 2019)). To replicate these observations one must simulate properties such as the stellar mass of galaxies properly. Other surveys detect galaxies by observing the line emission spectra (e.g. DESI (DESI Collaboration et al., 2016), EUCLID (Laureijs et al., 2011)). In this case, line emission responds to the ionizing flux. Therefore, properties like star formation rate must be properly simulated since it is massive, short lived stars the ones that produce most of this flux(Orsi et al., 2014). In this work we train a neural network (hereafter NN) to find a mapping from dark matter data to baryonic target variables. We train the algorithm using the publicly available catalog of IllustrisTNG, more specifically the TNG300-2 simulation. 
Our goal is to fit a mapping from a selection of variables of the DM-Only simulation data to two target baryonic properties found in the complete magneto-hydrodynamical simulation: stellar mass and star formation rate. We will use dark matter properties from the halo and subhalo containing a galaxy to infer its stellar mass and SFR. For SFR in particular, we time-averaged this value between the consecutive snapshots at \(z=0\) and \(z=0.1\), with the idea of reducing the stochasticity of this variable at \(z=0\) and getting a better idea of SFR as it changes in time rather than as a strictly instantaneous value. This averaging will improve our final results. This study will focus on central galaxies of haloes, ignoring galaxies contained in satellite subhaloes. In line with the nature of the methods employed, this work focuses on the influence of the data from the dark matter environment on the target galaxy properties, and not necessarily on the comprehension of the physical processes behind said influence. We study the importance of input variables on the performance of the model using a leave-one-covariate-out approach, which led to insights about the relation between input variables, in particular those that are heavily correlated. To evaluate the performance of the algorithms, we will use the mean squared error (MSE), Pearson Correlation Coefficient (PCC) and Coefficient of determination (\(R^{2}\)) metrics. On top of that, we will study the distribution of the predicted and real values of the target variables to be regressed and compare to similar studies in the literature. Details about the simulation and the variables involved in the training are discussed in Section 2. In Section 3 we discuss the theoretical background on the relationship of the stellar mass and SFR of galaxies with their dark matter environment, and review work related to ours. In Section 4 we discuss the nature of the ML algorithms used and their modeling process. In Section 5 we present the results of our work. Finally, we discuss and analyze these results and present our conclusions in Section 6.

## 2 Cosmological Simulation and Data

In this work we use the IllustrisTNG suite of large volume, cosmological, gravo-magneto-hydrodynamical simulations that model the physical processes most relevant to the formation and evolution of galaxies in cosmological volumes (Pillepich et al., 2017). These simulations were run with the moving-mesh code AREPO (Springel, 2010) to solve the coupled magneto-hydrodynamics and self-gravity equations (Nelson et al., 2021). This moving-mesh code employs a tree-particle-mesh algorithm to solve Poisson's equation for gravity and a second-order accurate finite-volume Godunov scheme on a moving, unstructured Voronoi mesh for the equations of ideal magneto-hydrodynamics (Pillepich et al., 2017). The TNG project is made up of three flagship runs, each with different volume and particle resolution: TNG50, TNG100, and TNG300. For this work, we will focus on TNG300, the largest simulation, which has a cubic volume of roughly \((300\,c\mathrm{Mpc})^{3}\). It is presented in three versions; from highest resolution to lowest, they are TNG300-1, TNG300-2 and TNG300-3. Our work uses data from TNG300-2. This simulation took 1.3 million CPU hours on 6000 cores (Pillepich et al., 2017). At \(z=0\), the simulation volume holds over two million subhaloes, identified with the SubFind (Springel et al., 2001) and friends-of-friends (FoF; Davis et al., 1985) halo finders.
This simulation includes all relevant galaxy-scale physics to follow the evolution of dark matter, stars, gas, and supermassive black holes. Each simulation of the TNG suite, and specifically TNG300-2, has a DM-Only counterpart. They are run with the same initial conditions as their magneto-hydrodynamic counterpart, but only with dark matter particles (Nelson et al., 2021). Since they share initial conditions, the haloes and subhaloes that emerge are quite similar. On top of that, there is a cross-match of subhaloes between the baryonic and dark matter runs. Said cross-match is a data product from Rodriguez-Gomez et al. (2015) for subhaloes found with SubLink (Rodriguez-Gomez et al., 2015). There also exists a match for LHaloTree (Nelson et al., 2015); this latter match is not used in this work. ### Contrast between TNG300-2 and observational data The public data release paper of IllustrisTNG (Nelson et al., 2021) reports consistent results in several aspects when compared with observations. For example, Lovell et al. (2018) find that the dark matter fraction (DMF) at \(z=0\) falls among estimates for disk-like galaxies from the SWELLS and DiskMass samples from the SDSS survey. They also find that, for Milky-Way-like galaxies, the total circular velocity curves beyond a few kpc from the galaxy centre behave in accordance with observational constraints. Elliptical galaxies show DMF in agreement with the measurements made from the SLUGGS survey by Wojtak and Mamon (2013), but higher than the measurements from Alabi et al. (2017). Another relation found by Pillepich et al. (2017) is that IllustrisTNG reproduces the general features of the semi-empirical constraints on the stellar mass-halo mass (SMHM) relation (as seen in Weinberger et al. (2016) and Pillepich et al. (2017)). This work also finds that, while the total simulated amount of stellar mass in clusters is in agreement with available observational values, the mass in central galaxies appears up to 0.5 dex larger than the observational constraints in Kravtsov et al. (2018). With respect to the SFR, it is treated following Springel and Hernquist (2003), where gas cells are stochastically converted into star particles using a density threshold criterion. Gas cells with \(n_{\rm H}>0.1\,{\rm cm}^{-3}\) are considered to be _star forming_. The nature of this SFR is instantaneous, and the SFR of a galaxy is measured by summing the instantaneous SFR of all its star-forming gas cells (Donnari et al., 2019). Due to resolution issues, any gas cell in TNG300 with \(\log({\rm SFR})<-3\) is considered unresolved and assigned an SFR value of 0. When contrasted with observational data, we can see at \(z=0\) that the threshold to select star-forming vs. quenched galaxies in the UVJ diagram in Whitaker et al. (2011) can be reasonably well applied to TNG galaxies to separate, in a consistent fashion, red, quenched galaxies from blue, star-forming ones. With this said, the TNG galaxies populate the UVJ diagram in a broadly successful way, though not identically to observations. On top of that, TNG successfully recreates the quenching at high stellar masses, since massive galaxies tend to be older and non-star-forming (Donnari et al., 2019). In the aforementioned work, we can also see that TNG galaxies populate the \(SFR-M_{star}\) plane in a qualitatively consistent fashion with observations.
With this said, due to the different nature of SFR observational indicators, the authors limit their comparison between TNG and observations by focusing only on the slope and mass trends of the star-forming main sequence. While the agreement with observations falls short at high redshifts, at \(z=0\) the main sequence of TNG galaxies lies inside the range of observational constraints bracketed by the measurements of Oliver et al. (2010) and Zahid et al. (2012). Finally, as previously mentioned, the SFR is calculated instantaneously, which is not a directly observable measurement. Therefore, to study SFR in a way that makes sense when compared to observations, Donnari et al. (2019) propose averaging SFR over some timescale. They find that longer averaging time-scales lead to smaller levels of scatter. At low redshifts (\(z<2\)), and by accounting for measurement uncertainties in stellar mass and SFR, the main sequence scatter is overall consistent with observational findings (Davies et al., 2018). In general, TNG simulations consistently match observational relations and constraints. Even though we see that some features do not perfectly match the empirical data, there is a plethora of factors that make TNG a reliable model which closely resembles observable attributes of haloes and galaxies, thus making it a trustworthy source of simulated astrophysical data.

### Physical Models and Numerical Methods

The IllustrisTNG simulations assume a cosmology consistent with the Planck Collaboration (Ade et al., 2016) results: \(\Omega_{\Lambda,0}=0.6911\), \(\Omega_{m,0}=0.3089\), \(\Omega_{b,0}=0.0486\), \(\sigma_{8}=0.8159\), \(n_{s}=0.9667\) and \(h=0.6774\). They assume Newtonian self-gravity, solved in an expanding Universe, i.e. in a cosmological background (Nelson et al., 2021). The simulation starts at \(z=127\) and runs until \(z=0\). At \(z=127\), the initial conditions of TNG300-2 consist of \(1250^{3}\) DM particles with \(m_{DM}=470\times 10^{6}M_{\odot}\) and \(1250^{3}\) gas cells with \(m_{gas}=88\times 10^{6}M_{\odot}\). Baryonic TNG runs include additional physical components, including feedback, seeding and growth of supermassive black holes, and pressurization of the interstellar medium. Other relevant components for this work are stochastic star formation in dense interstellar medium gas above a threshold density criterion, and the evolution of stellar populations, with associated chemical enrichment and mass loss (Nelson et al., 2021). The details on the behaviour and validation of the physical models are presented in Pillepich et al. (2017) and Weinberger et al. (2016).

### Identifying cosmological structures

The data product of each simulation is divided into 100 snapshots, each at a different redshift. At every snapshot, two types of group catalogs are provided: haloes, identified and catalogued by the friends-of-friends (FoF) algorithm, and subhaloes, identified with the SUBFIND algorithm (Springel et al., 2001). FoF places any two particles with a separation less than some linking length \(b\) into the same group. In this way, particle groups (or haloes) are formed, corresponding to regions approximately enclosed within isodensity surfaces with density inversely correlated with the volume of the sphere of radius \(b\). For an appropriate choice of \(b\), groups are selected that are close to the virial overdensity predicted by the spherical collapse model (Tomita, 1969). This simulation uses a linking length of \(b=0.2\) (Nelson et al., 2021).
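As an illustration of the linking procedure, the following is a minimal, non-periodic sketch of FoF group finding using a union-find structure. Here the linking length is assumed to be given in absolute units (a dimensionless \(b=0.2\) corresponds to 0.2 times the mean interparticle separation); all names are ours, and production halo finders are far more optimized and account for the periodic box.

```python
import numpy as np
from scipy.spatial import cKDTree

def fof_groups(positions, linking_length):
    """Assign a friends-of-friends group label to each particle.

    positions: (N, 3) particle coordinates; linking_length: same units.
    """
    # All particle pairs closer than the linking length
    pairs = cKDTree(positions).query_pairs(r=linking_length)
    parent = np.arange(len(positions))

    def find(i):
        # Follow parents to the root, compressing the path as we go
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Union the two groups touched by each linked pair
    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    return np.array([find(i) for i in range(len(positions))])
```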
The FOF algorithm is not capable of detecting substructures inside larger virialized objects with a linking length of this value (Springel et al., 2001). To identify subhaloes, the SUBFIND algorithm is run over the FOF group data. The use of FOF groups as input data provides a means to organize the groups in a simple two-stage hierarchy consisting of 'background group' and 'substructure'. This algorithm first identifies overdensities within a given FOF group. It begins by agglomerating neighbour particles to the densest particle in the vicinity, setting the subhalo boundaries with a criterion based on the density gradient. To switch from a criterion based only on the spatial distribution of particles to a more physical definition, a requirement of self-boundedness is set. This is done by removing any particles with positive energy, which are considered unbound. The unbinding is performed in physical coordinates, where velocities (and therefore energies) are computed by using the most bound particle (the one with the lowest potential energy) as the center (Springel et al., 2001).

### Halo catalogues

In addition to the group catalogs generated by the FoF algorithm and the subhalo catalog generated by the SUBFIND algorithm, the release of IllustrisTNG also makes available 100 snapshots which contain data for every particle and cell in the whole volume. Each snapshot captures the state of galaxies, haloes, particles and cells at a different redshift. In this work we will use data from two different full snapshots, at redshifts \(z=0\) and \(z=0.1\). Another data product of this simulation are the merger trees. Merger trees are a data structure which follows the growth and mergers of dark-matter haloes over cosmic history. These give important insights into the growth of cosmic structure, allowing one to trace the history of the dark matter interactions involved during the formation of a halo or subhalo (Srisawat et al., 2013). IllustrisTNG has available merger trees created using SubLink (Rodriguez-Gomez et al., 2015) and LHaloTree (Springel et al., 2005). Merger trees will allow us to trace back in time major mergers (mergers between subhaloes with a mass ratio of at least 1/3) and the past mass of subhaloes at different redshifts.

### Input and Output Data

For this work, we focus on present-day (\(z=0\)) central subhaloes. In order to train the machine we will use properties from the subhalo, the halo hosting this subhalo, neighbour haloes, and the merger history of subhaloes. This data is obtained from the DM-Only version of TNG300-2. At the same time, we take three baryonic properties of subhaloes from the baryonic TNG300-2 run, which are linked to the DM-Only data by the previously mentioned cross-match catalog. The input data we use in our algorithms, taken from the TNG300-2 DM-Only simulation and described on the Illustris web page1, are: Footnote 1: IllustrisTNG Data Specifications.

#### 2.5.1 Subhalo properties

We use subhalo properties as these have been shown to be highly correlated to galaxy properties such as the stellar mass (Rodríguez-Puebla et al., 2016). 1. \(S_{\rm sub}\) (\(\frac{kpc}{h}\frac{km}{s}\)): Magnitude of the spin of the subhalo, computed as the mass-weighted sum of the relative coordinate times the relative velocity of all member particles/cells. 2. \(\sigma_{\rm sub}\) (\(\frac{km}{s}\)): One-dimensional velocity dispersion of all the member particles/cells in the subhalo (the 3D dispersion divided by \(\sqrt{3}\)).
3. \(v_{max}\) (\(\frac{km}{s}\)): Maximum value of the spherically-averaged rotation curve, i.e. the maximum circular velocity of the subhalo.

#### 2.5.2 Host halo properties

We also include host halo properties, since these can be of importance to central galaxies. If we were to include satellites, the halo properties would also come into play as environmental properties. 1. \(m_{halo}\) (\(log(M_{\odot}/h)\)): Logarithm of the sum of the individual masses of every DM particle in the halo. 2. \(r_{crit,200}\) (\(ckpc/h\)): Comoving radius of a sphere centered at the most bound particle in the halo with a mean density of 200 times the critical density of the Universe, at \(z=0\). 3. \(r_{crit,500}\) (\(ckpc/h\)): Comoving radius of a sphere centered at the most bound particle in the halo with a mean density of 500 times the critical density of the Universe, at \(z=0\). 4. \(m_{crit,200}\) (\(10^{10}M_{\odot}/h\)): Total mass of this halo enclosed in a sphere whose mean density is 200 times the critical density of the Universe, at \(z=0\).

#### 2.5.3 Environment properties

We include environmental properties as these can be related to historical events in the evolution of a galaxy. See Section 3.1. 1. \(\rho_{n}\) (\((ckpc/h)^{-3}\)): Numerical density of neighbour haloes, computed as \(5/V_{5}\), where \(V_{5}\) is the volume of the sphere with a radius equal to the distance to the fifth closest halo (see the sketch at the end of this section). 2. \(\rho_{mass}\) (\((10^{10}M_{\odot})(ckpc/h)^{-3}\)): Mass density of the halo neighbourhood, computed as \(m_{5}/V_{5}\), where \(V_{5}\) is the volume of the sphere with a radius equal to the distance to the fifth closest halo and \(m_{5}\) is the sum of the masses of the five closest haloes.

#### 2.5.4 Historical properties

Related to the environmental properties, we can also include historical properties directly in the analysis to then compare their relative influence. 1. \(z_{1/2}\): Redshift at which this subhalo had half of its current mass. 2. \(\dot{m}_{subhalo}\) (\(10^{10}M_{\odot}/Gyr\)): Free dark matter particles accreted by the subhalo from \(z=0.1\) to \(z=0\). 3. \(z_{last}\): Redshift at which this subhalo had its last major merger, that is, when it merged with another subhalo such that their mass ratio was at least 1/3.

#### 2.5.5 Output features

On the other hand, we choose two output galaxy properties that come from the TNG300-2 magneto-hydrodynamical simulation. These properties are fundamental and key in the selection process of surveys. In addition, we also include a variant of one of the two, which will come in handy for our analysis in later sections: * \(m_{*}\) (\(log(M_{\odot}/h)\)): stellar mass obtained as the sum of the masses of all star and wind particles within twice the stellar half-mass radius. * SFR (\(M_{\odot}/yr\)): star formation rate obtained as the sum of the individual star formation rates of all gas cells within twice the stellar half-mass radius. Instantaneous measure. * meanSFR (\(M_{\odot}/yr\)): Time-averaged SFR between \(z=0\) and \(z=0.1\). As mentioned in §3, this measure is a better representation of the observationally measured SFR and presents less scatter than the instantaneous SFR.
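As an illustration of the two environment properties in §2.5.3, a minimal sketch follows, assuming halo positions and masses are given as NumPy arrays; it ignores the periodic boundaries of the simulation box (SciPy's cKDTree accepts a boxsize argument to account for them), and all names are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def neighbour_densities(positions, masses, k=5):
    """Fifth-nearest-neighbour number and mass densities, rho_n and rho_mass.

    positions: (N, 3) comoving halo coordinates (ckpc/h); masses: (N,) halo masses.
    """
    tree = cKDTree(positions)  # pass boxsize=L here for a periodic box
    # k + 1 neighbours, since the first returned neighbour is the halo itself
    dists, idx = tree.query(positions, k=k + 1)
    r5 = dists[:, -1]                        # distance to the 5th closest halo
    v5 = (4.0 / 3.0) * np.pi * r5 ** 3       # volume of the enclosing sphere
    rho_n = k / v5                           # numerical density, rho_n = 5 / V5
    m5 = masses[idx[:, 1:]].sum(axis=1)      # summed mass of the 5 closest haloes
    rho_mass = m5 / v5                       # mass density, rho_mass = m5 / V5
    return rho_n, rho_mass
```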
## 3 Theoretical background

### Galaxy properties and the relation to their host haloes and environment

As the universe evolves, baryonic and dark matter interactions shape the structures we are able to observe today. Since dark matter only interacts through gravity, and it dominates the gravitational potential in the universe, it is safe to assume that its properties and those of the baryonic matter have an impact on each other. Several studies have found correlations between the properties of haloes and the galaxies that inhabit them. For example, studies have found strong correlations between measured halo mass and the observed stellar mass of the galaxies they host (Tinker et al., 2017; Huang et al., 2021; Wang et al., 2013; Girelli et al., 2020). In the literature there are records of a strong relation between SFR and halo mass, although this relation is mediated by (i.e. more related to) the stellar mass of the galaxy (Wang et al., 2013; Kusakabe et al., 2018; Salmon et al., 2015; Lee et al., 2018; Gabor et al., 2010). This relation is known to evolve with redshift. Numerical studies and observational estimates show that, on top of that, the past events of haloes also have an impact on the SFR, where merging galaxies show an elevated SFR in comparison to similar non-merging galaxies (Pearson et al., 2019; Horstman et al., 2020; Cortijo-Ferrero et al., 2017). Since they both respond to the gravitational potential, the features of the dark matter of a halo should be able to characterize, one way or another, its hosted galaxy. However, finding relations between a high number of features requires large amounts of observational data and complex models of coupled equations. If said relations exist, one avenue is to describe them with a physically motivated model, and this has been done extensively (Cole et al., 2002; Croton et al., 2016; Cora et al., 2018). Instead of constructing a model this way, in this work we use machine learning methods to implicitly infer the underlying relations between halo and galaxy properties. This approach will not provide us with an analytical model to understand said relations, but it will allow us to study the impact of the different halo features on the galaxy properties. There are machine learning methods that ideally require large amounts of pre-analyzed data. This poses a challenge if we intend to learn from observations, because on top of needing to process these, training on data from different datasets (e.g. telescopes) can result in different predictions (Crammer et al., 2008). For this reason we apply these methods to data not from observations, but from the cosmological simulation IllustrisTNG.

### Previous ML related work

Even though there are several works studying the performance of ML to infer galaxy and baryonic properties from DM-only simulations (Kamdar et al., 2015; Villaescusa-Navarro et al., 2021; de Santi et al., 2022), we will concentrate our comparisons on two studies, previously cited in this work, and highlight differences and similarities with them. First, we consider the work of Agarwal et al. (2018). They also developed an ML framework to infer baryonic properties such as metallicity (Z), neutral (HI) and molecular (\(H_{2}\)) hydrogen, and our variables of study: SFR and stellar mass. They use the hydrodynamical simulation MUFASA (Davé et al., 2016), which is smaller and has lower resolution than TNG300-2. They explore a few ML algorithms, among them Multi-layer Perceptrons (MLP hereafter). While the scatter of the relationship between output variables and halo mass was underpredicted, they recovered the mean trends of output quantities with halo mass highly accurately. In their work, the best results were obtained not with MLPs but with Random Forests.
Also, they study the impact of additionally inputting key baryonic properties (like stellar mass or SFR) when predicting \(H_{\rm I}\) and \(H_{2}\), as would be available e.g. from an equilibrium model. They found that in doing so, their results improved. This result inspired us to use ML-inferred baryonic properties to improve the performance of our models, as will be detailed in Section 4. We will also compare the metrics of the regressions obtained in this work with ours. The results from their regressions, which we compare later with our results, are shown in Table 1. The second related study is that of Jo & Kim (2019). In this work they employ ML methods to estimate baryonic properties of a galaxy inside haloes from a DM-only simulation. They work with TNG100, a smaller simulation with higher resolution than TNG300-2. They train a machine to predict features like stellar mass and star formation rate in a galaxy based on the DM content of the halo that hosts it. The ML algorithm used by them is Extremely Randomized Trees (Geurts et al., 2006), a variation of the Random Forest algorithm. For a baseline training, they use only 3 properties of the halo: DM mass, velocity dispersion and maximum circular velocity of the halo. Then, they use different approaches to improve their results. One approach is augmenting the baseline dataset. They add the halo spin, historical properties like the number of mergers and the last major merger mass ratio, and environmental properties like the local density of haloes and the number of local haloes. Another approach involves a two-stage learning procedure, where they use a first machine to predict a baryonic property. This property is then used as an input to train a second machine. One last approach is to use an error function with logarithmic scaling. While different combinations of approaches sometimes interfere with each other, they find that using adequate combinations (which are different for each baryonic property) the results improve. Once they find the best machine for each property, they generate a galaxy catalog with the studied baryonic properties for another DM-only simulation: MultiDark-Planck (Riebe et al., 2013; Rodriguez-Puebla et al., 2016; Klypin et al., 2016). Finally, they compare the machine's performance against semi-analytic model (SAM) data, the MDPL-Sag catalogue (Cora et al., 2018). They compare the probability distribution function (PDF) of each baryonic property between TNG100, the SAM (Sag) and their machine. Overall, they find that while the machine better replicates the PDF of TNG100 (which it was trained to do), there are some clear mismatches at the higher or lower ends of the distributions of some properties, reported to be due to small number statistics. In summary, they found that adding environmental and historical properties and employing a two-stage learning method improves their results, and that a catalog generated by their method is largely compatible with a SAM catalog. \begin{table} \begin{tabular}{c c c} \hline & stellar mass & SFR \\ \hline \(R^{2}\) & 0.909 & 0.555 \\ PCC & 0.953 & 0.745 \\ \hline \end{tabular} \end{table} Table 1: Metrics of the best regressions for stellar mass and SFR in Agarwal et al. (2018) at \(z=0\). ## 4 Machine learning methods and modeling In this Section we present our machine learning setup. ### Supervised Learning Supervised learning algorithms are trained to find complex relations from previous data. They must be provided with a set of input-output pairs.
For example, in this work the input data corresponds to the DM-only features while the output data are the two baryonic properties: stellar mass and SFR. The machine tries to learn the best mapping from inputs to outputs, so that when a new input (one the machine did not learn from) is given to the machine, it outputs a prediction based on the data used to learn said mapping. To learn, the machine begins answering randomly and iteratively improves its answers by adjusting its weights under an optimization scheme that reduces a given loss function, typically via gradient descent algorithms. The data used by the algorithm to learn the optimal mapping is called the training set. To evaluate how good the mapping learned by the machine is, we must take a set of data which the machine has not "looked at" before (i.e. data not in the training set). By taking data for which we know the output, we can compare the machine's prediction with the real value and use mathematical methods (metrics) to evaluate the performance of the trained machine. The set of data used to evaluate the performance of the machine is called the validation set. Apart from the internal parameters previously mentioned, supervised algorithms have a set of parameters which must be set by the user beforehand (called hyperparameters). These hyperparameters are tuned to achieve the best metrics when evaluating on the validation set. Once we find an optimal set of hyperparameters, the final performance of the machine is evaluated on a third set of data that is different from both the training and validation sets. This set of data is called the testing set. To begin training an algorithm, one usually divides all the available data into these three mutually exclusive sets: training, testing and validation. ### ML Setup For this work we build custom MLPs using the _keras_ and _tensorflow_ packages for machine learning in a _python_ script. We train two MLPs, one for each output. In early stages of the development of this research, both outputs were predicted using only one model, but this proved to be inefficient metric-wise and it reduced the volume of data due to the restrictions mentioned in section 4.4. While we will explore the behaviour of the machine with various methods for predicting SFR, we will tune the machine for meanSFR since it is a more significant value from a physical perspective, as mentioned in section 2.1. For brevity, we will refer to this machine as the SFR machine. After thoroughly exploring the hyperparameter space and architectures, we chose the number of hidden layers, neurons per layer, learning rate and batch size for training. Each variable to be regressed has a different machine, i.e. with different hyperparameters. The hidden layer neurons use a ReLU activation function (Glorot et al., 2011), and the output layer uses a softplus activation function (Dugas et al., 2000). We use an Adam optimizer (Kingma and Ba, 2017) in both cases. We train using mean squared error as the loss function, since it penalizes large errors more harshly. The MLP for the stellar mass prediction has 3 hidden layers of 30 neurons each. The learning rate for the Adam optimizer has a value of \(5\times 10^{-4}\). The machine was trained for 20 epochs using a batch size of 128. On the other hand, the SFR machine has 4 hidden layers with 40 neurons each. The learning rate for the Adam optimizer has a value of \(5\times 10^{-4}\). The machine was trained for 22 epochs using a batch size of 64.
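As a concrete illustration, the following is a minimal sketch of the stellar-mass MLP described above, written with the _keras_ package as in the text; the number of input features and all variable names are our assumptions, not code from the authors.

```python
# Sketch of the stellar-mass MLP: 3 hidden layers of 30 ReLU neurons, softplus
# output, Adam with learning rate 5e-4, MSE loss, 20 epochs at batch size 128.
from tensorflow import keras

n_features = 12  # assumption: the DM-only input features listed in Section 6.3

model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(30, activation="relu"),
    keras.layers.Dense(30, activation="relu"),
    keras.layers.Dense(30, activation="relu"),
    keras.layers.Dense(1, activation="softplus"),  # non-negative regression output
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=5e-4), loss="mse")
# Training on hypothetical arrays:
# model.fit(X_train, y_train, epochs=20, batch_size=128,
#           validation_data=(X_val, y_val))
```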
### Hyperparameters, width and depth exploration To determine the optimal learning rate, batch size, number of hidden layers and number of neurons per layer, we intensively explored combinations of hyperparameters. For each combination, we run five trainings with different random seeds at a fixed number of epochs. Once a training finishes, we evaluate metrics on the validation set and record those values. The best hyperparameter values are then chosen by studying boxplots of the metrics for each combination. Then we train using the chosen combination for different numbers of epochs, evaluating how the distribution of predicted versus real values behaves. After that, we repeat the hyperparameter exploration in a smaller range of values around the previously found best values, using the lowest number of epochs where a good fit was observed. We repeat this back-and-forth exploration iteratively until results converge. In figure 1 we see the evolution of the distribution of real versus predicted values at different epochs for the stellar mass prediction on the testing set. There is a clear improvement, with smaller dispersion and better predictions, at 20 epochs, and we can see in the figure that for more epochs the correlation barely changes while the distribution of predicted values underestimates stellar mass for higher TNG values. In particular, when exploring the optimal number of epochs, we tested values as high as 200 epochs, but the model began converging towards stable metrics near 20 epochs in both cases. ### Data selection and preparation Given the resolution of the simulation and the nature of the predicted values, we apply some cuts to the simulation data to construct our datasets. First, we only consider galaxies contained in haloes with \(m_{halo}>10^{11}m_{\odot}\). As mentioned in Section 2, the dark matter particle mass in TNG300-2 is \(m_{DM}=4.7\times 10^{8}m_{\odot}\). This means we are considering haloes with at least 200 DM particles. Setting a threshold on masses is a resolution-based criterion for preprocessing also used in Jo and Kim (2019) and Agarwal et al. (2018). Then, we make two additional cuts: one for predicting stellar mass and a second for SFR. For stellar mass, we will make predictions on galaxies with \(M^{*}>10^{9}m_{\odot}\), which corresponds to about 100 stellar particles per galaxy in TNG300 and is set as the criterion for the minimum stellar mass values for haloes in Pillepich et al. (2017c). As for SFR, the only cut we apply is that it must be greater than 0, given that TNG has its own threshold for considering a group of gas cells to be star forming (galaxies with SFR \(<10^{-3}m_{\odot}/yr\) are treated as quenched, i.e. SFR = 0) (Donnari et al., 2019). For meanSFR, we use the same cuts, allowing galaxies with \(SFR_{z=0.1}=0\). As mentioned in Section 2.1, the fraction of quenched galaxies in TNG is consistent with observational estimates, so these cuts should not pose a problem if one intends to use this method to populate synthetic catalogs using data from dark matter only simulations. On top of the aforementioned cuts, we also performed a min-max normalization on the input data using the _MinMaxScaler_ function from the _sklearn_ package in _python_. This is a standard procedure when preprocessing input and output data, since it improves performance and reduces propagation errors in ML (Sola and Sevilla, 1997).
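To make the preprocessing concrete, here is a minimal sketch of the cuts and normalization described in this section, with hypothetical column names; in a real pipeline the scaler would be fit on the training set only.

```python
# Sketch of the selection cuts and min-max normalization for the stellar-mass
# dataset. `catalog` is assumed to be a dict of numpy arrays; key names are
# hypothetical, not actual TNG field names.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def prepare_stellar_mass_dataset(catalog):
    mask = (catalog["m_halo"] > 1e11) & (catalog["m_star"] > 1e9)  # resolution cuts
    features = ["spin", "sigma_v", "v_max", "m_halo", "r_crit200", "r_crit500",
                "m_crit200", "rho_n", "rho_mass", "z_half", "z_last", "mdot_sub"]
    X = np.column_stack([catalog[k][mask] for k in features])
    y = np.log10(catalog["m_star"][mask])   # stellar mass is used in log units
    X = MinMaxScaler().fit_transform(X)     # in practice: fit on the training set only
    return X, y
```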
### Regression performance criteria and metrics In this subsection we present the metrics adopted throughout. #### 4.5.1 Mean Squared Error The mean squared error (MSE) is the first metric we use to evaluate the performance of the machine, and it is also the loss function of the machines, i.e. the value the MLP minimizes to learn the best fit. It is calculated as \[\text{MSE}=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}\, \tag{1}\] where \(n\) is the size of the sample, \(y_{i}\) is the real value, and \(\hat{y}_{i}\) is the predicted value. This metric evaluates how far the predictions are from the real values. Therefore, better predictions will produce a MSE closer to 0. #### 4.5.2 Coefficient of Determination (\(R^{2}\)) The coefficient of determination, or R\({}^{2}\) score, represents the proportion of the variance in the real values that can be explained by the predictions. Rather than the correlation between variables, it quantifies to what extent the variance of the real values is explained by the predicted values. To calculate R\({}^{2}\) we must take into account the residual sum of squares (RSS) and the total sum of squares (TSS). These values are computed as: \[\text{RSS}=\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2} \tag{2}\] \[\text{TSS}=\sum_{i=1}^{n}(y_{i}-\bar{y}_{i})^{2}\, \tag{3}\] where \(n\) is the size of the sample, \(y_{i}\) is the real value, \(\hat{y}_{i}\) is the predicted value, and \(\bar{y}_{i}\) is the mean of the real values. Finally, R\({}^{2}\) is calculated as: \[\text{R}^{2}=1-\frac{\text{RSS}}{\text{TSS}}. \tag{4}\] As opposed to MSE, R\({}^{2}\) will be higher (and closer to 1) for good predictions. #### 4.5.3 Pearson Correlation Coefficient The Pearson Correlation Coefficient (PCC) measures the linear correlation between the predicted and real values. It is the ratio between the covariance of the two variables and the product of their standard deviations, i.e. a normalised measurement of the covariance, with values that range between -1 and 1. PCC is calculated as: \[\text{PCC}=\frac{\text{cov}(y,\hat{y})}{\sigma_{y}\sigma_{\hat{y}}} \tag{5}\] where \(\text{cov}(y,\hat{y})\) is the covariance between real and predicted values, \(\sigma_{y}\) is the standard deviation of the real values, and \(\sigma_{\hat{y}}\) is the standard deviation of the predicted values. For PCC, a value of 1 indicates a perfect linear correlation and a value of -1 indicates a perfect inverse correlation. A PCC of 0 indicates no correlation whatsoever. Since we seek to match exact values, we aim for a PCC as close to 1 as possible. ### Chained-network method: SFR+ As previously discussed in section 3.1, while SFR shows a relation with the dark matter halo properties, stellar mass has a very significant relation with SFR in galaxies. While we cannot measure stellar mass from a DM-only simulation, we can estimate it and use it as an input, as previously done by Jo and Kim (2019). In this case, and based on the quality of the stellar mass predictions presented in section 5 and discussed in section 6, we use the pre-trained MLP described in section 4.2, trained using a dataset of 42705 galaxies, to predict the stellar mass of a galaxy and then use this prediction as another input to train a machine able to predict SFR (and meanSFR). The idea of chaining different estimators to improve results was first addressed by Wolpert (1992) and by Ting and Witten (1999), where it is referred to as stacked generalization. This ensemble method is not restricted to neural networks, and the concept is also tightly related to residual neural networks (He et al., 2015).
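A minimal sketch of this chained (stacked) setup, under our naming assumptions, is:

```python
# Chained-network method: the pre-trained stellar-mass MLP supplies an extra
# input feature for the SFR machine.
import numpy as np

def chained_inputs(mass_model, X_dm):
    """mass_model: the pre-trained stellar-mass MLP of Section 4.2 (assumed keras
    model). X_dm: normalized DM-only feature matrix, shape (n_galaxies, n_features)."""
    m_star_pred = mass_model.predict(X_dm)        # first stage: predict stellar mass
    return np.column_stack([X_dm, m_star_pred])   # augmented second-stage inputs

# Second stage (sketch, hypothetical names):
# sfr_model.fit(chained_inputs(mass_model, X_dm), y_meansfr, epochs=22, batch_size=64)
```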
When presenting results and in the discussion, we will refer to machines using chained networks as SFR+ and meanSFR+, i.e., adding a plus sign to the shorthand. Figure 1: Kernel density estimate (KDE) plot of four stellar mass predictions on the testing set using the same hyperparameters at different epochs, illustrating the evolution of the performance of the predictions. In the first panel from left to right it can be seen that the fit is already reasonably good after the first epoch of training. In the second panel the fit has evolved and is closer to the perfect fit shown as the dotted line. The third and fourth plots show that the predictions become less accurate due to overfitting to the training set. These panels illustrate how we explore for the optimal number of epochs. ## 5 Results ### Stellar mass After applying the constraints mentioned in section 4.4, our dataset contains a total of 71177 galaxies, divided into 42705 galaxies for the training set and 14236 each for the validation and testing sets. The metrics for the predictions made by the machine with this training are presented in Table 2 and the KDE plot of the TNG and predicted values is presented in Figure 2. The KDE plot shows that the results tightly agglomerate around the ideal prediction. This indicates that the predicted stellar masses closely resemble the real values. We can also see in Figure 3 that the stellar mass function (SMF) of the predicted values closely resembles the original TNG SMF, indicating that the distribution of stellar masses is recovered by the machine, although it presents a peak at higher values. The distribution also presents a distinct peak at lower values, as shown in figures 4 and 6. Figure 4 shows the distribution of predicted and real values for stellar and halo masses. The difference in PCC implies that there is a stronger linear correlation between these variables in the prediction than in the TNG data. This hints that the scatter and distribution of the prediction are slightly off from the simulated data. To quantify the difference in the scatter we compute the distance correlation between halo mass and stellar mass for the TNG and predicted values. This measure is closer to 1 for variables that are less scattered from their mean relation. For the TNG values, we get a distance correlation of 0.908, and for the prediction we get a distance correlation of 0.935, which implies a higher scatter in the TNG values in the stellar mass to halo mass relation. On top of that, we can see that the machine is unable to predict the lower values of the real sample. We also notice that, for high halo masses, there appears to be a constant upper threshold in the predictions from the machine, as shown in figure 5. This phenomenon can also be seen at the higher mass end in Figure 3 and Figure 6. As these figures show, this behavior only occurs at high halo masses, where there is only a small number of galaxies (only 0.6% of the total sample). With this in mind, we infer that this is due to small number statistics. Also, in the same figure, a sharp drop at lower stellar masses can be seen, which is consistent with the previously mentioned inability to reproduce lower values. Finally, the metrics of our results show that our machine outperforms the one trained in Agarwal et al. (2018), presented in Table 1. They report better results from a random forest than from a neural network.
This suggests that proper hyperparameter tuning, plus the new features we introduced, make neural networks a stronger candidate for performing predictions from dark matter simulations. ### SFR #### 5.2.1 Metrics for predictions of SFR The metrics for the predictions made by the machine with this training are presented in Table 2. As shown in this table, this is the worst of the 4 regression methods for SFR. Nonetheless, the PCC metric surpasses the values obtained by Agarwal et al. (2018) shown in Table 1, but our R\({}^{2}\) score is considerably lower for the same prediction. As discussed in Section 4.5, this means that, while the predicted and real values are more correlated, the scatter in the predicted values does not resemble the scatter in the TNG values, which is better reproduced in Agarwal et al. (2018). #### 5.2.2 Metrics for predictions of SFR+ The metrics for the predictions made by the machine with this training are presented in Table 2. We can see an improvement in all metrics when compared with SFR. While the results are better, we still find that only the PCC metric is better than the one in Agarwal et al. (2018) shown in Table 1, while the R\({}^{2}\) score is still lower than theirs. From this comparison of metrics, we can infer the same as in Section 5.2.1 about how the data correlate. #### 5.2.3 Metrics for predictions of MeanSFR We can see an important improvement in the metrics in this case. This method improves the metrics over SFR more than the SFR+ method does. The R\({}^{2}\) score is the metric that sees the largest improvement. This makes sense when considering the discussion in Section 3.1 and the scatter plot in Figure 7, since a lower scatter in values makes R\({}^{2}\) a more forgiving metric. #### 5.2.4 Metrics for predictions of MeanSFR+ This approach, combining both previous methods, gives the best results obtained for SFR regressions. This time, both R\({}^{2}\) and PCC are better than the ones from Agarwal et al. (2018). For this regression, the KDE plot is presented in Figure 8, which compares the meanSFR values of TNG against predicted values using the meanSFR+ method. Figure 4: Density plots of the TNG and predicted stellar mass vs. halo mass for the sample. Colors indicate the number of samples in each bin. The comparison between both plots hints that the scatter and distribution are slightly off from the simulated data. There appears to be an overdensity at lower values in the predicted values (the colorbar of the right figure reaches density values higher than 60 points per pixel, while the left figure only reaches about 40 points per pixel). We can also see this density difference at lower values in the KDE plot in figure 5. Figure 5: Distributions of predicted and TNG stellar mass values. We can see that the predicted distribution presents a higher density at lower values, which is consistent with the overdensity observed in figure 4. Figure 6: Scatter plot showing halo mass against predicted stellar mass values. We can see there are some values that are mapped to a constant upper bound, which appears as a horizontal line at the higher values of the scatter plot. The values presenting this behaviour correspond to only 0.6% of the predictions. Figure 7: 2D histogram comparing the SFR and meanSFR distributions with respect to stellar mass. The latter is a good predictor for SFR as discussed in Section 3.1. We can appreciate a higher scatter in SFR than in meanSFR around the main body of values.
On top of that, the distribution of meanSFR extends to lower values, as the horizontal grid lines allow us to observe. This is because some galaxies are not star forming at \(z=0.1\), and therefore the SFR is reduced when calculating the mean between redshifts. It can be seen that while the distribution lies around the ideal prediction, the machine overpredicts values for low SFR and underpredicts values for high SFR. This can be seen in the density of predictions lying over the ideal prediction line for lower meanSFR values, and the density lying under the ideal prediction at higher values. This is consistent with the scatter observed in Figure 10 and with the SFR function in Figure 9, where the machine is seen to be unable to reproduce the lower and higher values of the mean SFR from TNG using the meanSFR+ method. ### Predicting departures from mean values and their sign As stated in section 3.1, a mean relation between stellar mass and halo mass, and also between SFR and stellar mass, can be estimated from observations. With this in mind, we will explore how well our machine predicts in terms of how far each galaxy is from the mean relation. In other words, we are interested in studying whether the deviation from the mean relation (towards either higher or lower values) is recovered. We will study how well we can recover this deviation, or _delta_, for stellar mass and meanSFR. We calculate this delta from the mean relations as: \[\delta_{M^{*}}=M_{*,TNG}-\langle M_{*,TNG}\rangle(M_{halo,TNG}) \tag{6}\] \[\delta_{mSFR}=\mathrm{mSFR}_{TNG}-\langle\mathrm{mSFR}_{TNG}\rangle(M_{*,TNG})\, \tag{7}\] where \(M_{*,TNG}\) is the stellar mass from TNG300-2, \(\langle M_{*,TNG}\rangle(M_{halo,TNG})\) is the mean relation of TNG's stellar mass with respect to TNG's DM-only halo mass evaluated at the same halo mass, \(\mathrm{mSFR}_{TNG}\) is the meanSFR from TNG300-2, and \(\langle\mathrm{mSFR}_{TNG}\rangle(M_{*,TNG})\) is the mean relation of TNG's meanSFR with respect to TNG's stellar mass evaluated at the same stellar mass. In both equations, we compute the mean relation as a piecewise linear function that goes through the mean values of binned stellar masses (or meanSFRs), in bin intervals of halo mass (or stellar mass). When calculating deltas for the predicted values, we substitute the respective predicted value for the TNG value. #### 5.3.1 Stellar mass In the case of stellar mass, the mean relation is well predicted for \(\log(m_{halo}/m_{\odot})<13\). After that, the scatter plus the upper-threshold predictions mentioned in section 5.1 make the mean relation deviate upwards. With that being said, we can see in the KDE plot in figure 12 that there is a correlation between the deltas from TNG and from the machine. While the deltas are not tightly gathered around the ideal prediction, we can see that the orientation of the distribution follows the identity line, which means the sign of the deviation from the mean is well predicted. Figure 8: KDE plot comparing the meanSFR computed from TNG300-2 and the meanSFR predicted by the MLP trained with the TNG300-2 DM-only data using the meanSFR+ method. Darker shades of blue indicate a higher density of points. \(n_{bin}\) represents the number of objects in one bin. Both sets of SFRs are divided into 200 bins by default. The black, dotted line represents the ideal prediction. The density being above the ideal prediction at lower values and under the same line at higher values shows that the machine is not predicting values in the same range as the TNG data. Figure 10: Scatter plots comparing the distributions of TNG and predicted meanSFR+ values vs. stellar mass. On the left we have the distribution of TNG values and on the right the predicted values. We can see the machine has problems reproducing the natural scatter of this property, and is unable to predict high and low values of mean star formation.
Figure 9: Star formation rate function of TNG and machine-predicted values (different colours, shown in the figure key). We use meanSFR as the TNG value and meanSFR+ for the predicted value. We can see the machine overpredicts low SFR values and cannot predict values past a threshold, as discussed in section 5.2.4. #### 5.3.2 MeanSFR For this variable, the lack of values at the upper and lower ends makes the mean relation have a narrower curve, as seen in figure 13. With this said, it reproduces the quenching of galaxies at high masses discussed in section 2.1, and the sudden increase for \(M^{*}>11\times 10^{10}m_{\odot}\). Since the prediction presents less scatter, it is reasonable to assume that the deltas in this case will be smaller. In figure 14 we can see this is the case, but the signs of the deviations seem only weakly correlated. ### Influence of input variables in results #### 5.4.1 Parameter ranking From comparisons with previous work and from studying the chained-network method, we observe that adding new variables positively impacts the performance of the machine. Parameter ranking has been approached in different ways in studies similar to this one. In Jo & Kim (2019) and Agarwal et al. (2018), where random forests are used for the regression, they count how many times a variable appears in the trees, with the most important features appearing more times. Calderon & Berlind (2019) use the same approach and also use an xgboost regressor, which natively computes feature importance, to study the impact of each input variable. Shao et al. (2022), on the other hand, use saliency values which identify the most important variables that contribute to the relationship between all inputs and outputs. In this work we take a different approach to checking how each input variable affects the model: we run multiple trainings, removing one input feature at a time (see the sketch below). We run 10 trainings for each removed variable and compare the change in performance against the standard deviation of the metrics, allowing us to check whether the difference in performance is driven by the removed variable or by the intrinsic stochasticity of NNs. Because of their similar nature, we removed \(m_{halo}\) and \(m_{crit,200}\) together, because one variable could give information contained in the other one. The same applies for \(r_{crit,200}\) and \(r_{crit,500}\).
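The sketch below illustrates this leave-one-covariate-out loop; `build_mlp` is a hypothetical helper returning a compiled MLP as in Section 4.2, and the feature names and grouping are our assumptions.

```python
# Leave-one-covariate-out ranking: for each feature group, retrain 10 times
# without it and record the mean validation metrics (MSE, R2, PCC).
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_squared_error, r2_score

def loco_ranking(build_mlp, X_train, y_train, X_val, y_val, feature_groups,
                 n_seeds=10, epochs=20, batch_size=128):
    """X_* are assumed to be pandas DataFrames; correlated features are removed
    together, e.g. [["m_halo", "m_crit200"], ["r_crit200", "r_crit500"], ["v_max"]]."""
    results = {}
    for group in feature_groups:
        cols = [c for c in X_train.columns if c not in group]
        scores = []
        for seed in range(n_seeds):                   # 10 trainings per removed group
            model = build_mlp(n_inputs=len(cols), seed=seed)
            model.fit(X_train[cols], y_train, epochs=epochs,
                      batch_size=batch_size, verbose=0)
            y_hat = np.ravel(model.predict(X_val[cols]))
            scores.append([mean_squared_error(y_val, y_hat),
                           r2_score(y_val, y_hat),
                           pearsonr(y_val, y_hat)[0]])
        results[tuple(group)] = np.mean(scores, axis=0)  # mean MSE, R2, PCC
    return results
```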
Table 3 shows that, for \(M^{*}\), the maximum circular velocity was the most impactful feature to remove, followed by \(r_{crit}\). The maximum velocity being a good predictor of the stellar mass is a result that has been reported already (Zehavi et al., 2019), which supports our results. In third and fourth place we have \(z_{1/2}\) and \(z_{last}\). This suggests that the history of the evolution of the dark matter medium has a strong influence on this variable, not only in terms of the number of mergers or their mass ratios, as studied in Jo & Kim (2019), but also in terms of the age of the universe at which these events happen. In the case of halo mass, it is not very intuitive for it to come in fifth place. While the halo mass's strong correlation with stellar mass has been thoroughly investigated, one may argue that \(v_{max}\) and \(r_{crit}\) are strongly correlated with \(m_{halo}\) and that, therefore, the machine can infer to some extent this relation with halo mass (Zehavi et al., 2019). Figure 15 presents a correlation matrix between all input and output variables, with the goal of studying the influence between variables and gaining insights into how the information they provide, or the absence of it, impacts the model. This matrix shows that the three aforementioned input variables are strongly correlated. The ranking continues with the environmental properties. From the table, we can see these variables improve the results of the predictions, but they are clearly not as influential as the dynamical or historical dark matter properties. Finally, we highlight the low influence shown by the halo spin and velocity dispersion. These two variables play an important role in Jo and Kim (2019) and Agarwal et al. (2018), where they are relatively well predicted. We believe that this is relevant enough to be studied on its own, and while we conjecture that the relevant information these variables contribute is encompassed by other variables, deepening these relations is beyond the scope of this work. On the other hand, for SFR, Table 4 shows the predicted stellar mass as the most impactful feature when removed. This reinforces the value of adding such a feature to the input data and the consistency of chaining networks. Once again, \(z_{1/2}\) appears high in the ranking, showing the importance of historical data for inferring baryonic properties. While \(m_{halo}\) coming in third place is relevant to mention, the fourth place deserves special attention. \(\dot{m}_{subhalo}\) is another variable that was not explicitly seen in the literature, and it suggests a relation between the accretion of a subhalo and the star formation happening inside it. Figure 11: Scatter plots showing the distribution of stellar mass in relation to halo mass. In the left figure, we can see the TNG data. In the right figure, we see the prediction's distribution. We can picture an erratic behaviour at high halo masses for the prediction due to low number statistics. Figure 12: KDE plot comparing the difference from the mean value of the halo mass/stellar mass relation. We can see that the machine is able to partially reproduce the sign of the deviation from the mean relation present in the TNG galaxies. Figure 14: KDE plot comparing the difference from the mean value of the stellar mass/SFR relation. The distribution slightly tilts towards the direction of the identity line, which indicates a weak relation. The range of the machine deltas is narrower. This makes sense since the machine's SFR distribution has a considerably lower scatter. Figure 15: Correlation matrix between input and output variables. We can see a particularly high correlation between the input variables \(v_{max}\), \(m_{halo}\), \(r_{crit}\) and \(\sigma_{\mathrm{sub}}\). As Tables 3 and 4 show, \(m_{halo}\) and \(\sigma_{\mathrm{sub}}\) do not impact the model much when removed by themselves, which we infer is due to their high correlation with the more impactful variables \(v_{max}\) and \(r_{crit}\). We can also observe that both output variables, \(M^{*}\) and meanSFR, have a higher correlation with those four input variables. Figure 13: Scatter plots showing the distribution of star formation rate in relation to stellar mass. In the left figure, we can see the TNG data. In the right figure, we see the prediction's distribution. The white dotted line represents a piecewise function, where we bin meanSFR in 20 stellar mass bins and compute the mean of the values for each bin.
In fifth place, and very close to \(\dot{m}_{subhalo}\), comes \(z_{last}\), once again showing the importance of historical properties. Environmental properties have a positive, but smaller, impact. We interpret this as them being consistently influential in the studied galaxy properties, even if said influence is overshadowed by more impactful variables. The spin and velocity dispersion again have a low place in the ranking. But this time, unlike with stellar mass, the critical radius and maximum circular velocity are also low in the ranking. While this may reinforce the conjecture made earlier in this section, we infer that it may also suggest that the dynamical properties of a halo are not as important as the historical and environmental properties for the SFR, though studying this in depth is beyond the scope of this work. We deepen the discussion about input variables in section 6.2. #### 5.4.2 Distribution of best and worst prediction inputs We also analyze the histogram distribution of some input variables for the best 10% and worst 10% of predictions. This sheds light on how these parameters behave and influence the quality of the predictions. These features might not be the most influential overall, as seen in Section 5.4.1, but they present the largest shift between the best and worst predictions. In figure 16 we can see the frequency distribution for three variables: \(m_{halo}\), \(v_{max}\) and \(S_{subhalo}\). In the case of stellar mass, for the three variables, we can see that the worst predictions tend to have higher values. This trend becomes more evident for the spin and maximum velocity. While not as noticeable as for stellar mass, the three chosen variables also have higher frequencies at higher values for meanSFR. With this in mind, we believe that for future works this kind of behaviour could be studied in the preprocessing stage. This will be discussed in more detail in section 6.3. ## 6 Discussion ### Main Results In this work, we trained a machine that was able to predict stellar mass in a very consistent way, not only in metrics but also in replicating distributions and deviations from mean relations. While the results for meanSFR were not as robust as the ones obtained for stellar mass, our results for both properties still present improved metrics with respect to, for instance, Agarwal et al. (2018). We explored approaches to improve the predictions for SFR, where time-averaging the SFR and chaining neural networks were successful in doing so. In particular, time-averaging was quite impactful in the results. This approach to predicting SFR and its benefits have not been addressed in similar studies. Time-averaging not only reduced the scatter in the SFR values (scatter which makes it harder for the machine to find an appropriate mapping), but also makes more sense when taking into account that the SFR measured in real galaxies corresponds to a time-averaged quantity. This should be taken into account if a ML method is to be used to reproduce the SFR measurements of particular surveys, to match the estimated timescale of averaging present in the observations. This being said, SFR proved to be a tricky property for our model to predict. In section 6.3 we discuss ideas for improving the predictions.
In terms of the predicted values, both \(M^{*}\) and SFR had issues at the lower and higher ends of the range of values. While some issues can be attributed to small number statistics, the inability to reproduce extreme values is an important flaw in this method, and should be treated with special attention in future work. The intrinsic scatter of the predicted values could not be fully reproduced, this being particularly true for meanSFR. With respect to the deviation from observed mean relations, the machine was partially successful for \(M^{*}/m_{halo}\) and, arguably, only minimally successful for \(\mathrm{meanSFR}/M^{*}\). We studied the feature importance and impact of physical properties using a leave-one-covariate-out approach, which has not been previously used in the literature. In doing so, and by studying the correlation between input variables, we observe that some variables which would be impactful by themselves (e.g. \(m_{halo}\) for \(M^{*}\)) do not, when removed, impact the results of the predictions as much as expected. We infer that this is due to their high correlation with the two most relevant features according to the analysis (e.g. \(v_{max}\) and \(r_{crit}\) in the case of \(M^{*}\)). Finally, in Section 5.4.1 we discovered that \(z_{last}\) and \(z_{1/2}\) were quite relevant to the performance of the machine for both \(M^{*}\) and meanSFR. \(\dot{m}_{subhalo}\) was also particularly impactful for regressing SFR. These and other interesting features will be discussed in section 6.2. ### Remarkable input variables As stated in Section 6.1, historical properties like \(z_{1/2}\) and \(z_{last}\) have an important impact on the predictions. This makes sense considering that the properties of a galaxy are a product of its evolution, and shows that these kinds of features should be considered important when generating models to predict baryonic features of galaxies. On top of that, \(\dot{m}_{subhalo}\), which could also be seen as a historical property (since it considers data from previous redshifts), had a surprising impact on meanSFR. These variables had not been addressed in similar studies (although Jo and Kim (2019) used other historical properties). Another input feature of interest, particularly for \(M^{*}\), is \(v_{max}\). Naively, one tends to think of the mass of the halo as the main descriptor of stellar mass; however, it has been shown in the literature that there is a stronger relation with the maximum circular velocity (see for instance Kulier et al., 2019). Therefore it is quite remarkable that the machine is able to identify its importance. The information lost by taking \(m_{halo}\) out of the inputs might be compensated by the critical radius and maximum circular velocity. As figure 15 shows, the correlation between these three variables is very high, which supports this inference. Regarding the variables studied in Section 5.4.2, we can observe in figure 16 that the top left histogram shows that better \(M^{*}\) predictions tend to have lower halo mass values. This is also true for \(v_{max}\) and \(S_{subhalo}\) in the upper middle and upper right histograms. For these variables, the inclination towards low values for good predictions is more evident, although these values (unlike \(m_{halo}\)) are not on a logarithmic scale. The same patterns are also present in the three histograms for meanSFR in the bottom row. There also seems to be a tendency towards better results at lower values, but the impact is not as clear as it is for \(M^{*}\).
Overall, this shows that bad predictions tend to have higher input values. We suggest that these variables should be treated carefully and taken into special consideration when exploring methods similar to the one presented here. ### On how this method can be improved As shown in the results section, our method struggled to replicate results at the upper and lower limits of the range of predicted properties, as seen in Figure 10, where the machine is unable to reproduce the range of values, and Figure 4, where the machine could not predict values at the lower bound and mapped some values to the same upper threshold. The limits issue could be approached by exploring other kinds of normalization, such as Gaussian or quantile normalizations. We should also address the stacking of predicted \(M^{*}\) values for large halo masses. This is a direct sign that a larger dataset could improve results, as we can see a decay in the performance at high masses where there is a smaller number of galaxies, although a weighting scheme may be needed for the machine to learn about the sparse sample of objects with the highest masses. We propose tackling this issue by balancing the dataset, either by generating synthetic data to have more high-mass candidates for the machine to learn from, or by reducing the number of galaxies with average values to obtain a homogeneous distribution of galaxies of different nature. This second approach might hurt the performance of the machine because of the reduced amount of data. Another possibility is to remove outlier values with clustering methods (like DBSCAN) to reduce the number of subhaloes with extreme values that might affect the learning of the machine. Alternatively, in order to improve the performance one could try log-scaling the input features or using a principal component analysis before inputting values into the MLP. Several works in the literature have already shown other methods to improve the tails of the distributions (see Jo & Kim, 2019; de Santi et al., 2022; Stiskalek et al., 2022). For instance, the loss function was modified in Jo & Kim (2019), and dataset balancing, as proposed above, was already done in de Santi et al. (2022). These improvements are beyond the scope of this study, but they will be interesting to address in future research. As we saw in Sections 5.2 and 5.4.1, chaining networks by using a predicted feature as input to predict a second feature remarkably improved the results. With this in mind, the model could be improved by predicting new baryonic properties which, if they can be robustly predicted, may be used as inputs to predict more erratic properties such as the SFR. An example of a feature that could be used this way is metallicity, which was consistently predicted in Agarwal et al. (2018). In section 4.3, we discussed how we explored the hyperparameter space to fine-tune our machine. The employed method, where we manually went over different combinations of hyperparameters, is called grid search, and it is one of the most basic approaches to this end. In future works, one might employ more sophisticated hyperparameter optimization methods, e.g. gradient-based or Bayesian optimization. For this work we used only 12 input features (\(S_{\text{sub}}\), \(\sigma_{\text{sub}}\), \(v_{max}\), \(m_{halo}\), \(r_{crit,200}\), \(r_{crit,500}\), \(m_{crit,200}\), \(\rho_{n}\), \(\rho_{mass}\), \(z_{1/2}\), \(z_{last}\), and \(\dot{m}_{subhalo}\); plus predicted \(m_{*}\) as an additional input for meanSFR only).
Considering the amount of data products that come from a DM-only simulation, as discussed in section 2.4, the number of input features could be increased. In particular, we propose that considering more historical features could greatly improve the ability of the machine to predict complex attributes like SFR. Considering how important the halo growth history proved to be, adding data from earlier redshifts might also improve the method. The MLP is one of the most basic NN architectures. This is not an issue in this work, since we use little input data. But if the amount of data were increased, one could make use of more sophisticated machine learning methods to process said data. If, for example, we decide to use direct historical data (e.g. the masses measured in previous snapshots), recurrent neural networks could be an excellent candidate since they have the advantage of being efficient at learning from sequential data (Schmidt, 2019). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Feature & MSE & MSE \% & R\({}^{2}\) Score & R\({}^{2}\) \% & PCC & PCC \% \\ \hline \(m_{*pred}\) & 0.1518 & 12.436\% & 0.4576 & 2.595\% & 0.8114 & 20.558\% \\ \(z_{1/2}\) & 0.1460 & 8.116\% & 0.4989 & 1.647\% & 0.8193 & 13.385\% \\ \(m_{halo}\) & 0.1447 & 7.182\% & 0.5060 & 1.407\% & 0.8213 & 12.150\% \\ \(\dot{m}_{subhalo}\) & 0.1445 & 7.016\% & 0.5078 & 1.394\% & 0.8214 & 11.839\% \\ \(z_{last}\) & 0.1444 & 6.951\% & 0.5082 & 1.392\% & 0.8214 & 11.775\% \\ \(\rho_{n}\) & 0.1432 & 6.052\% & 0.5103 & 1.166\% & 0.8233 & 11.399\% \\ \(\rho_{mass}\) & 0.1430 & 5.903\% & 0.5116 & 1.128\% & 0.8236 & 11.177\% \\ \(v_{max}\) & 0.1426 & 5.595\% & 0.5174 & 1.081\% & 0.8240 & 10.166\% \\ \(\sigma_{\text{sub}}\) & 0.1400 & 3.737\% & 0.5377 & 0.622\% & 0.8278 & 6.653\% \\ \(r_{crit}\) & 0.1393 & 3.156\% & 0.5308 & 0.447\% & 0.8293 & 7.849\% \\ \(S_{\text{sub}}\) & 0.1392 & 3.082\% & 0.5365 & 0.274\% & 0.8307 & 6.864\% \\ \hline \hline \end{tabular} \end{table} Table 4: Parameter ranking for the meanSFR prediction. Percentages indicate how much the metric worsened by removing the corresponding feature in that row, with respect to the best results obtained. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Feature & MSE & MSE \% & R\({}^{2}\) Score & R\({}^{2}\) \% & PCC & PCC \% \\ \hline \(v_{max}\) & 0.0194 & 14.378\% & 0.9234 & 0.467\% & 0.9655 & 1.456\% \\ \(r_{crit}\) & 0.0190 & 11.614\% & 0.9253 & 0.329\% & 0.9668 & 1.253\% \\ \(z_{1/2}\) & 0.0184 & 8.226\% & 0.9288 & 0.274\% & 0.9673 & 0.872\% \\ \(z_{last}\) & 0.0183 & 7.676\% & 0.9296 & 0.277\% & 0.9673 & 0.791\% \\ \(m_{halo}\) & 0.0179 & 5.572\% & 0.9303 & 0.237\% & 0.9677 & 0.710\% \\ \(\rho_{mass}\) & 0.0179 & 5.434\% & 0.9305 & 0.210\% & 0.9680 & 0.693\% \\ \(\rho_{n}\) & 0.0177 & 4.252\% & 0.9318 & 0.224\% & 0.9678 & 0.550\% \\ \(S_{\text{sub}}\) & 0.0177 & 4.089\% & 0.9314 & 0.202\% & 0.9680 & 0.601\% \\ \(\sigma_{sub}\) & 0.0177 & 3.981\% & 0.9313 & 0.209\% & 0.9680 & 0.604\% \\ \(\dot{m}_{subhalo}\) & 0.0175 & 3.058\% & 0.9322 & 0.197\% & 0.9681 & 0.516\% \\ \hline \hline \end{tabular} \end{table} Table 3: Parameter ranking for the stellar mass prediction. Percentages indicate how much the metric worsened by removing the corresponding feature in that row, with respect to the best results obtained. ## 7 Conclusions In this work we employed machine learning methods, and in particular neural networks, to predict baryonic properties of galaxies from dark matter data. Our networks learned from hydrodynamical simulations.
We made predictions for stellar mass with fairly good performance, but when predicting the SFR of galaxies the results were far from ideal. In both cases, the predictions also presented undesirable behaviours at the lower and upper limits of the range of values. However, the main results of this work are not linked to the quality of the predictions themselves, but to how the predictions relate to the chosen input variables. Our results showed that stellar mass is a consistently predictable variable. Apart from the troublesome results at low and high values, we were able to reproduce to a fair extent the stellar mass function, the distribution of stellar masses with respect to halo mass, and how much galaxies deviate from the mean of this relation. We outperformed the metrics achieved in other studies, both in stellar mass and SFR predictions. But, unlike stellar mass, SFR was not as well predicted. With the goal of reducing the scatter of this latter property, we used a time-averaged SFR, which is also a more meaningful property given that observational estimates of SFR are time averaged to different degrees. We highlight this time-averaging approach since it has not been adopted in previous works. This averaging improved our results in predicting SFR to some extent. We also achieved better results by using as input the output of a second neural network that reproduced stellar mass robustly, showing an interesting approach of linking networks to dig deeper into the information that can be deduced from our data. While there was a resemblance in the relation between SFR and stellar mass, the values predicted by the machine fall in a smaller range than the ones from TNG. Also, the deviation from the mean relation was, arguably, only weakly recovered. All this points towards the need for more data, or different approaches, in order to produce a consistent SFR prediction. With respect to the relevance of input variables, we highlight first \(z_{last}\) and \(z_{1/2}\), two variables which have an important impact on the results of the predictions of both baryonic properties. Specifically for SFR, \(\dot{m}_{subhalo}\) also proved to be a feature of importance. This shows the relevance of considering events in the evolution of a galaxy and its environment when describing or predicting its baryonic properties. We also highlight the leave-one-covariate-out approach used to study the relevance of input variables, which is different from the methods used in similar studies. Our implementation shows insights about how some relevant variables lose impact when added to the model alongside other highly correlated variables. Figure 16: Histograms of the distribution of \(m_{halo}\), \(v_{max}\) and \(S_{subhalo}\) for the 10% best and worst predictions for \(M^{*}\) and meanSFR. Histograms in the top row belong to \(M^{*}\); histograms in the bottom row belong to meanSFR. In all histograms, the \(y\) axis is log-scaled. The left column shows the \(m_{halo}\) distribution, the center column shows the \(v_{max}\) distribution, and the right column shows the \(S_{subhalo}\) distribution. Our results support the idea of not only considering instantaneous properties, which is also being considered in other related works. When analyzing the importance of input features, we also found that \(v_{max}\) has a stronger impact than \(m_{halo}\) on the prediction of \(M^{*}\), consistent with previous analyses (e.g. Kulier et al., 2019) but obtained independently by the machine, which was able to infer that \(v_{max}\) carries more information pertinent to the stellar content of galaxies.
This is interesting to highlight since, for example, when we study the correlation between input and output variables for \(M^{*}\) (as seen in figure 15), we can see that the most correlated feature is \(m_{halo}\), yet this variable ranks fifth in importance. We can also see that a variable with high importance, like \(z_{1/2}\) in the case of meanSFR, is not highly correlated with the output variable. These and other insights from the comparison between correlation and impact in the model are relevant to address in future studies. The analysis itself of how each input variable affects the results can be of special interest and shed light on interesting relations. This could help, for example, to improve SAMs by hinting at which properties should be given special attention. While it is a valid goal to use machine learning methods to populate dark matter simulations at a lower computational cost than required to run SAMs, these methods can also support and complement galaxy formation modelling. Finally, taking all points into consideration, we conclude that machine learning models are not only strong candidates for potential simulation methods, but they also give us new tools and perspectives to understand the significance of different properties and their impact on target characteristics, enabling us to improve the already existing methods for recreating and studying the evolution of the universe and the galaxies that live within it. ## Acknowledgements We thank the Referee who significantly helped improve the clarity of this manuscript. CH acknowledges support from Fondecyt Regular 1191813, ANID, Chile. NP was supported in part by RAICES and PICT-2021-0700 grants from the Ministerio de Ciencia Tecnologia e Innovacion of Argentina. ## Data Availability No new data were generated in this work.
2310.08055
Log-Gaussian Gamma Processes for Training Bayesian Neural Networks in Raman and CARS Spectroscopies
We propose an approach utilizing gamma-distributed random variables, coupled with log-Gaussian modeling, to generate synthetic datasets suitable for training neural networks. This addresses the challenge of limited real observations in various applications. We apply this methodology to both Raman and coherent anti-Stokes Raman scattering (CARS) spectra, using experimental spectra to estimate gamma process parameters. Parameter estimation is performed using Markov chain Monte Carlo methods, yielding a full Bayesian posterior distribution for the model which can be sampled for synthetic data generation. Additionally, we model the additive and multiplicative background functions for Raman and CARS with Gaussian processes. We train two Bayesian neural networks to estimate parameters of the gamma process which can then be used to estimate the underlying Raman spectrum and simultaneously provide uncertainty through the estimation of parameters of a probability distribution. We apply the trained Bayesian neural networks to experimental Raman spectra of phthalocyanine blue, aniline black, naphthol red, and red 264 pigments and also to experimental CARS spectra of adenosine phosphate, fructose, glucose, and sucrose. The results agree with deterministic point estimates for the underlying Raman and CARS spectral signatures.
Teemu Härkönen, Erik M. Vartiainen, Lasse Lensu, Matthew T. Moores, Lassi Roininen
2023-10-12T06:08:34Z
http://arxiv.org/abs/2310.08055v2
# Log-Gaussian Gamma Processes for Training Bayesian Neural Networks in Raman and CARS Spectroscopies ###### Abstract We propose an approach utilizing gamma-distributed random variables, coupled with log-Gaussian modeling, to generate synthetic datasets suitable for training neural networks. This addresses the challenge of limited real observations in various applications. We apply this methodology to both Raman and coherent anti-Stokes Raman scattering (CARS) spectra, using experimental spectra to estimate gamma process parameters. Parameter estimation is performed using Markov chain Monte Carlo methods, yielding a full Bayesian posterior distribution for the model which can be sampled for synthetic data generation. Additionally, we model the additive and multiplicative background functions for Raman and CARS with Gaussian processes. We train two Bayesian neural networks to estimate parameters of the gamma process which can then be used to estimate the underlying Raman spectrum and simultaneously provide uncertainty through the estimation of parameters of a probability distribution. We apply the trained Bayesian neural networks to experimental Raman spectra of phthalocyanine blue, aniline black, naphthol red, and red 264 pigments and also to experimental CARS spectra of adenosine phosphate, fructose, glucose, and sucrose. The results agree with deterministic point estimates for the underlying Raman and CARS spectral signatures. ## 1 Introduction Raman and coherent anti-Stokes Raman scattering (CARS) spectroscopies are vital tools used in chemistry, physics, and biomedical research [1, 2, 3]. The insights they offer into molecular vibrations, structural dynamics, and chemical compositions are invaluable. However, working with their data presents challenges. Measurement artifacts, including noise and, especially, background signals, in Raman and CARS spectra often obscure crucial molecular information. Traditional methods for data correction are typically manual and may fall short in capturing the full complexity of the data. For instance, standard approaches used for removing the background signals include asymmetric least squares polynomial fitting, wavelet-based methods, optimization with Tikhonov regularization, and Kramers-Kronig relations [4, 5, 6, 7, 8, 9, 10, 11, 12]. While appealing, these methods suffer from practical drawbacks such as the need for manual tuning of the model or regularization parameters. The need for automated, robust, and statistically sound solutions to enhance our spectroscopic analyses is evident. Deep neural networks offer a compelling solution for automatic spectral correction across various applications, from weather predictions [13, 14, 15] to medical imaging [16, 17, 18] and many others [19, 20, 21, 22, 23]. In the realm of Raman spectroscopy, deep neural networks have been used in chemical species identification and background removal [24, 25, 26, 27, 28]. Similarly, they have been applied to extract the underlying Raman spectra from CARS measurements [29, 30, 31, 32, 33, 34, 35]. Despite their efficacy, non-Bayesian neural networks lack a critical feature: the ability to quantify uncertainty in Raman spectrum estimation. Bayesian inference, on the other hand, provides an avenue to solve this problem. Bayesian inference treats the parameters of a given model as random variables. These models consist of a likelihood function that is combined with prior distributions for the parameters to produce posterior estimates.
The likelihood function is analogous to a utility function in an optimization context. It quantifies how well the model fits the observed data. The aforementioned prior distributions for the model parameters represent the information known beforehand, including any constraints dictated by the physical nature of the parameters, such as non-negativity. In spectroscopic analysis, the model parameters can be, for example, amplitudes, locations, and widths of Gaussian, Lorentzian, or Voigt line shape functions. The combination of the likelihood and the priors results in a posterior distribution over the model parameters. The posterior is a probabilistic representation of the uncertainty in the parameter estimates. Bayesian approaches have been considered for estimating spectrum parameters, where the authors used sequential Monte Carlo algorithms to numerically sample from the posterior distribution [36, 37]. While the uncertainty quantification provided by Bayesian modeling and Markov chain Monte Carlo (MCMC) methods is compelling, the approach is known to be computationally expensive, see for example [38]. This becomes a major issue particularly with hyperspectral data sets, where an image can contain millions of individual spectra which are to be analyzed. Bayesian neural networks are a synthesis of the aforementioned two ideas. Bayesian neural networks model the weights and biases of standard neural networks as random variables, which can be assigned prior distributions. When combined with a likelihood according to Bayes' theorem, the resulting utility function corresponds to the posterior for the neural network parameters. Advantages of this Bayesian neural network approach in comparison to non-Bayesian neural networks include robustness against overfitting, providing uncertainty estimates instead of only point estimates, sequential learning, and better generalization [39]. In particular, uncertainty quantification has seen widespread research covering many application areas and topics, see for example [40]. One of the challenges of Bayesian neural networks is that they typically contain an enormous number of parameters. For instance, our network comprises over 11 million parameters, far beyond what is commonly considered high-dimensional for MCMC [41, 42]. Some neural networks, such as large language models (LLMs), can have billions of parameters [43]. Thus, it can be challenging to establish convergence of such a large number of parameters in a statistically rigorous manner. To combat this, partially-Bayesian neural networks have been used as a practical tool to provide uncertainty estimation with neural networks. In addition to empirical validation through practice, studies have provided compelling analytical and numerical evidence that partially-Bayesian neural networks are indeed capable of providing posterior estimates with performance on par with, or even superior to, fully-Bayesian neural networks [44]. The above points lead us to construct our neural network for this study as a partially-Bayesian neural network. Neural networks typically require large volumes of training data. This has been noted to be a problem also in spectroscopic applications, as it is difficult to acquire large numbers of independent data sets [28]. Therefore, many studies mentioned above use synthetic data to train the neural networks.
The synthetic data is usually generated using random linear combinations of Lorentzian line shapes, where the amplitudes, locations, and widths are sampled from predefined probability distributions, see for example [25, 29, 45]. The background data is generated similarly. The backgrounds are modeled explicitly using a parametric functional form, such as a polynomial or a sigmoidal function, and the parameters of the model are again sampled from a predefined probability distribution [24, 25, 46]. An extension to this is to use experimental Raman spectra on top of the randomly generated spectra [34]. Stochastic processes can be used to draw samples of random functions. A typical example of a stochastic process is the widely-used Gaussian process (GP). Properties of the drawn samples such as differentiability are governed through kernel functions, which are used to model dependencies between data points. For readers unfamiliar with GPs, we recommend the book by Rasmussen and Williams [47]. Instead of using explicit, parametric functions to model the spectroscopic features, we propose using stochastic processes as a more flexible tool for the purpose. In this study, we use GPs as a generative model for the additive and multiplicative backgrounds of Raman and CARS spectra, see Fig. 1. For the purpose of generating synthetic Raman spectral signatures, we propose a specific type of doubly-stochastic Lévy process which we call a log-Gaussian gamma process. Our construction of the log-Gaussian gamma process is inspired by the log-Gaussian Cox process, which the authors have previously used as a model for spectra [48]. While it makes sense to model spectra as a Cox process, where the relaxation from higher energy levels happens at a constant rate and results in counts of photons, the data is often available as scaled floating-point numbers, which prevents direct application of the log-Gaussian Cox process model. Gamma-distributed variables have direct connections to Poisson-distributed variables, which constitute the Cox process, making the extension to a log-Gaussian gamma process intuitive as a model for Raman spectroscopy. The log-Gaussian gamma process can be used to generate arbitrary amounts of synthetic spectra once the parameters of the stochastic process have been estimated. We perform the estimation using MCMC methods, which allow us to construct a Bayesian posterior distribution for the model parameters, thereby including the uncertainty of the parameter estimates in our data generation. This also applies to our GP-based background model. We present a high-level diagram of our stochastic process method for data generation in Fig. 1.
Fig. 1: Structure of our generative spectrum model using GPs and log-Gaussian gamma processes. On top, an experimental CARS spectrum of adenosine phosphate in blue and an example multiplicative background in red. We model the backgrounds as a GP. At the bottom, an example underlying Raman spectral signature in blue. We assume the Raman peaks to be distributed according to our proposed log-Gaussian gamma process model. The stochastic processes are parameterized according to \(\mu_{e}\), \(\mathbf{\theta}_{e}\), \(\alpha\), and \(\beta(\nu)\). We further model \(\beta(\nu)\) using GPs which are parameterized according to \(\mu_{\beta}\) and \(\mathbf{\theta}_{\beta}\). We construct statistical samples with MCMC for the model parameters which allow us to generate synthetic spectra for training our Bayesian neural network.
Fig.
2 shows an example of the aim of this paper, a Raman spectral signature extracted from a CARS spectrum using a Bayesian neural network. We provide a pseudo-code description of our approach in Algorithm 1. The key contributions of this paper are the following. We propose using log-Gaussian gamma processes for modeling Raman spectral signatures and GPs to model additive or multiplicative background signals. The aforementioned doubly-stochastic processes are sampled randomly, enabling us to generate an arbitrary number of synthetic spectra that are statistically similar to experimental spectra. Finally, we present a partially-Bayesian neural network for analyzing Raman and CARS spectral measurements, which we train using the sampled synthetic spectra. Once trained, we use these neural networks to estimate the spectral signatures for experimental Raman spectroscopy measurements of phthalocyanine blue, naphthol red, aniline black, and red 264 pigments and for experimental CARS spectra of adenosine phosphate, fructose, glucose, and sucrose in addition to synthetic test spectra. ``` Step 1: Fit a log-Gaussian gamma process to Raman spectrum data. Step 2: Fit a GP to background data. Step 3: Draw a large number of realizations from the fitted log-Gaussian gamma process. Step 4: Draw a large number of realizations from the fitted GP. Step 5: Use a forward model to combine the realizations to form a data set of synthetic spectra. Step 6. Train a Bayesian neural network using the data set of synthetic spectra. ``` **Algorithm 1** Log-Gaussian gamma process data generation for training Bayesian neural networks The rest of the paper is structured as follows. We detail the steps used to generate the synthetic training data in three stages in the following three sections. We first present the log-Gaussian gamma process as a model for Raman spectral signatures and explain how to draw realizations of this doubly-stochastic process. This is followed by a description of our GP-based additive and multiplicative background models. We finalize the explanation of our synthetic data generation method with definitions of the forward models used to simulate synthetic training data for Raman and CARS measurements with additive and multiplicative backgrounds, respectively. Next, we present our partially-Bayesian neural network architecture, which we train against the synthetic data sets that we have generated. We document computational details and prior distributions in the next section, followed by a presentation of our results for both artificial and real experimental data. Finally, we conclude with a discussion of the significance and other potential applications for our method. ## 2 Log-Gaussian gamma process spectrum model We model a Raman spectral signature as a collection of conditionally-independent, gamma-distributed random variables \[r_{k}:=r(\nu_{k})\sim\text{Gamma}\left(\alpha,\beta(\nu_{k})\right), \tag{1}\] where \(r_{k}\) denotes a Raman measurement at wavenumber location \(\nu_{k}\) with \(\alpha\) and \(\beta(\nu_{k})\) being the shape and scale parameters of the gamma distribution, respectively. The above construction is motivated by log-Gaussian Cox processes [49] but without the restriction of modeling of only integer-valued data and with an additional parameter in the stochastic process allowing for more flexible modeling of uncertainty. Poisson-distributed random variables, which constitute the Cox process, have a single parameter to control both the mean and variance of the distribution. 
Very often in real data, this assumption is found to be too restrictive, leading to a model that is either under- or over-dispersed [50]. In contrast, the gamma distribution has two parameters which together allow for a range of different variances for a given mean. We extend Eq. (1) by modeling the log-scale as a GP, resulting in a hierarchical model \[\log\beta(\mathbf{\nu})\sim\text{GP}(\mu_{\beta},\Sigma_{\beta}(\mathbf{\nu},\mathbf{\nu},\mathbf{\theta}_{\beta})), \tag{2}\] where \(\mathbf{\nu}:=(\nu_{1},\dots,\nu_{K})^{\intercal}\) is a vector of the wavenumber locations with \(\mu_{\beta}\in\mathbb{R}\) and \(\Sigma_{\beta}(\mathbf{\nu},\mathbf{\nu},\mathbf{\theta}_{\beta})\in\mathbb{R}^{K\times K}\) being a constant mean and a covariance matrix parameterized according to hyperparameters \(\mathbf{\theta}_{\beta}\). This doubly-stochastic model introduces dependence between values \(r_{i}\) and \(r_{j}\) at different wavenumbers \(\nu_{i}\) and \(\nu_{j}\). For the covariance function of the log-scale GP, we use the squared exponential kernel \[\left[\Sigma_{\beta}(\mathbf{\nu},\mathbf{\nu},\mathbf{\theta}_{\beta})\right]_{ij}=\sigma_{\beta,f}^{2}\exp\left(-\frac{1}{2}\frac{\left(\nu_{i}-\nu_{j}\right)^{2}}{l_{\beta}^{2}}\right)+\sigma_{\beta}^{2}\delta(\nu_{i}-\nu_{j}), \tag{3}\] where \(\left[\Sigma_{\beta}(\mathbf{\nu},\mathbf{\nu},\mathbf{\theta}_{\beta})\right]_{ij}\) denotes the \(ij\)th element of the covariance matrix \(\Sigma_{\beta}(\mathbf{\nu},\mathbf{\nu},\mathbf{\theta}_{\beta})\), \(\sigma_{\beta,f}^{2}\) is the signal variance, \(l_{\beta}\) is the length scale, \(\sigma_{\beta}^{2}\) denotes the noise variance, and \(\delta(\nu_{i}-\nu_{j})\) is the Dirac delta function, with \(\mathbf{\theta}_{\beta}=\left(\sigma_{\beta,f}^{2},l_{\beta},\sigma_{\beta}^{2}\right)^{\intercal}\). The GP construction yields an analytical form for the log-scale \(\log\beta(\mathbf{\nu})\), which we will detail below as we construct the posterior distribution according to Bayes' theorem. This log-GP parameterization is identical to the log-intensity model for Poisson variables that features in log-Gaussian Cox processes. For more details on the log-Gaussian Cox process, see [49] and for example [51]. The posterior distribution involves the likelihood function \(\mathcal{L}(\mathbf{r}\mid\alpha,\beta(\mathbf{\nu}))\), a log-GP prior for the scale \(\pi_{0}(\beta(\mathbf{\nu})\mid\mu_{\beta},\mathbf{\theta}_{\beta})\), and a joint prior distribution \(\pi_{0}(\alpha,\mu_{\beta},\mathbf{\theta}_{\beta})\) for the rest of the model parameters. Given a measured Raman spectrum \(\mathbf{r}:=(r(\nu_{1}),\ldots,r(\nu_{K}))^{\intercal}\), we can formulate the likelihood as a product of conditionally-independent, gamma-distributed random variables \[\mathcal{L}(\mathbf{r}\mid\alpha,\beta(\mathbf{\nu}))\propto\prod_{k=1}^{K}\frac{r_{k}^{\alpha-1}\exp(-r_{k}/\beta_{k})}{\Gamma(\alpha)\beta_{k}^{\alpha}}, \tag{4}\] where \(\beta_{k}:=\beta(\nu_{k})\), and \(\Gamma(\alpha)\) is the gamma function.
Fig. 2: On top, an experimental CARS spectrum of adenosine phosphate in blue. With a trained Bayesian neural network, we can extract the underlying Raman spectral signature from the data along with an uncertainty estimate for the spectrum. At the bottom, the corresponding Bayesian neural network median Raman spectrum estimate and 90% confidence interval of the estimate for the adenosine phosphate data.
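As a concrete illustration of Eqs. (3) and (4), the following sketch evaluates the squared exponential covariance matrix and the gamma log-likelihood; the function names and the use of `scipy.stats.gamma` are our own implementation choices, not part of the original method, and the numerical values are placeholders.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def sq_exp_cov(nu, sigma_f2, length_scale, sigma_n2):
    """Squared exponential covariance matrix of Eq. (3) on a 1-D grid."""
    diff = nu[:, None] - nu[None, :]
    return (sigma_f2 * np.exp(-0.5 * diff**2 / length_scale**2)
            + sigma_n2 * np.eye(len(nu)))

def gamma_loglik(r, alpha, beta):
    """Log of the likelihood in Eq. (4): one independent Gamma(alpha, beta_k)
    factor per wavenumber."""
    return np.sum(gamma_dist.logpdf(r, a=alpha, scale=beta))

# Tiny usage example with made-up hyperparameter values.
nu = np.linspace(400.0, 1800.0, 128)
cov = sq_exp_cov(nu, sigma_f2=1.0, length_scale=50.0, sigma_n2=1e-4)
```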
The hierarchical prior for \(\beta(\mathbf{\nu})\) can be evaluated as \[\begin{split}\pi_{0}(\beta(\mathbf{\nu})\mid\mu_{\beta},\mathbf{\theta}_{\beta})=&\frac{1}{\sqrt{(2\pi)^{K}}}\left|\Sigma_{\beta}(\mathbf{\nu},\mathbf{\nu};\mathbf{\theta}_{\beta})\right|^{-1/2}\\ &\times\exp\left(-\frac{1}{2}\left(\log\beta(\mathbf{\nu})-\mu_{\beta}\right)^{\intercal}\Sigma_{\beta}(\mathbf{\nu},\mathbf{\nu};\mathbf{\theta}_{\beta})^{-1}\left(\log\beta(\mathbf{\nu})-\mu_{\beta}\right)\right),\end{split} \tag{5}\] where \(\left|\Sigma_{\beta}(\mathbf{\nu},\mathbf{\nu};\mathbf{\theta}_{\beta})\right|\) denotes the determinant of the covariance matrix. With the above and a joint prior \(\pi_{0}(\alpha,\mu_{\beta},\mathbf{\theta}_{\beta})\), we can construct the posterior distribution for the model parameters conditioned on the measured spectrum data \(\mathbf{r}\) as \[\pi(\alpha,\beta(\mathbf{\nu}),\mu_{\beta},\mathbf{\theta}_{\beta}\mid\mathbf{r})\propto\mathcal{L}(\mathbf{r}\mid\alpha,\beta(\mathbf{\nu}))\,\pi_{0}(\beta(\mathbf{\nu})\mid\mu_{\beta},\mathbf{\theta}_{\beta})\,\pi_{0}(\alpha,\mu_{\beta},\mathbf{\theta}_{\beta}). \tag{6}\] In the posterior in Eq. (6), the dimension of \(\beta(\mathbf{\nu})\) is \(K\): the scale is a vector of the same dimension as the data, \(\beta(\mathbf{\nu})\in\mathbb{R}_{+}^{K\times 1}\). MCMC methods are known to struggle when estimating high-dimensional parameters; at a minimum, high-dimensional parameters incur a computational cost for inference with MCMC. To remedy these issues and to simplify the inference, we perform dimension reduction for the scale \(\beta(\mathbf{\nu})\). To achieve this, we observe that our data \(\mathbf{r}\) should be a reasonable estimate for the expectation of the gamma process in Eq. (1), \(\mathbf{r}\approx\mathbb{E}[\text{Gamma}(\alpha,\beta(\mathbf{\nu}))]=\alpha\beta(\mathbf{\nu})\). This implies that the shape of the data \(\mathbf{r}\) is close to the shape of the scale function \(\beta(\mathbf{\nu})\). Thus, we approximate the scale \(\beta(\mathbf{\nu})\) as a convolution between a Gaussian kernel and the data, \[\beta(\mathbf{\nu})\approx c_{\beta}G(\mathbf{\nu};\sigma_{G})*\mathbf{r}, \tag{7}\] where \(*\) denotes convolution, \(c_{\beta}\) is a scaling constant, and \(G(\mathbf{\nu};\sigma_{G})\) is a Gaussian smoothing kernel with width \(\sigma_{G}\). By this, we reduce the inference of the scale \(\beta(\mathbf{\nu})\in\mathbb{R}_{+}^{K\times 1}\) to the inference of two parameters, \(c_{\beta}\) and \(\sigma_{G}\). With this smoothing approximation, we formulate an approximate posterior for Eq. (6) as \[\pi(\alpha,c_{\beta},\sigma_{G},\mu_{\beta},\mathbf{\theta}_{\beta}\mid\mathbf{r})\propto\widetilde{\mathcal{L}}(\mathbf{r}\mid\alpha,c_{\beta},\sigma_{G})\,\pi_{0}(\beta(\mathbf{\nu})\mid\mu_{\beta},\mathbf{\theta}_{\beta})\,\pi_{0}(\alpha,c_{\beta},\sigma_{G},\mu_{\beta},\mathbf{\theta}_{\beta}), \tag{8}\] where \(\widetilde{\mathcal{L}}(\mathbf{r}\mid\alpha,c_{\beta},\sigma_{G})=\mathcal{L}(\mathbf{r}\mid\alpha,c_{\beta}G(\mathbf{\nu};\sigma_{G})*\mathbf{r})\) and \(\pi_{0}(\alpha,c_{\beta},\sigma_{G},\mu_{\beta},\mathbf{\theta}_{\beta})\) is the prior distribution augmented with \((c_{\beta},\sigma_{G})^{\intercal}\). We detail the prior distribution \(\pi_{0}(\alpha,c_{\beta},\sigma_{G},\mu_{\beta},\mathbf{\theta}_{\beta})\) in the section on computational details and prior distributions. We perform inference of the posterior in Eq. (8) by sampling all the model parameters simultaneously using the DRAM algorithm [52].
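A minimal sketch of the smoothing approximation in Eqs. (7) and (8) is given below, assuming a positive measured spectrum `r` on a uniform grid; `gaussian_filter1d` plays the role of the convolution \(G(\mathbf{\nu};\sigma_{G})*\mathbf{r}\) with \(\sigma_{G}\) expressed in grid points, and the function name is ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import gamma as gamma_dist

def approx_loglik(r, alpha, c_beta, sigma_g):
    """Smoothed-scale approximation of Eqs. (7)-(8): the scale function is
    c_beta times the data convolved with a Gaussian kernel of width sigma_g,
    so only (alpha, c_beta, sigma_g) remain to be inferred for the scale.
    Assumes r > 0 so that the resulting scale is positive."""
    beta = c_beta * gaussian_filter1d(r, sigma=sigma_g)
    return np.sum(gamma_dist.logpdf(r, a=alpha, scale=beta))
```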
Given samples from the posterior distribution \(\pi(\alpha,c_{\beta},\sigma_{G},\mu_{\beta},\mathbf{\theta}_{\beta}\mid\mathbf{r})\) obtained with MCMC, we can sample realizations for the synthetic spectra to generate an arbitrary amount of synthetic data in the following way. First, we sample the GP parameters \((\widetilde{\mu}_{\beta},\widetilde{\mathbf{\theta}}_{\beta})^{\intercal}\) from the MCMC chain. Next, we use \((\widetilde{\mu}_{\beta},\widetilde{\mathbf{\theta}}_{\beta})^{\intercal}\) to sample a GP realization \(\widetilde{\beta}(\mathbf{\nu}^{*}\mid\widetilde{\mu}_{\beta},\widetilde{\mathbf{\theta}}_{\beta})\) at prediction locations \(\mathbf{\nu}^{*}:=(\nu_{1}^{*},\ldots,\nu_{\widetilde{K}}^{*})^{\intercal}\), modeling the scale \(\beta(\mathbf{\nu}^{*})\) with \[\widetilde{\beta}(\mathbf{\nu}^{*}\mid\widetilde{\mu}_{\beta},\widetilde{\mathbf{\theta}}_{\beta})=\exp\left(\widetilde{\mu}_{\beta}+L(\mathbf{\nu}^{*}\mid\widetilde{\mathbf{\theta}}_{\beta})\mathbf{u}\right), \tag{9}\] where \(L(\mathbf{\nu}^{*}\mid\widetilde{\mathbf{\theta}}_{\beta})\) is the lower triangular Cholesky decomposition matrix of \(\Sigma_{\beta}(\mathbf{\nu}^{*},\mathbf{\nu}^{*};\widetilde{\mathbf{\theta}}_{\beta})\) and \(\mathbf{u}:=(u_{1},\ldots,u_{\widetilde{K}})^{\intercal}\) is Gaussian white noise such that \(u_{k}\sim\mathcal{N}(0,1)\). Finally, by sampling \(\widetilde{\alpha}\), we can draw a spectrum realization \(\widetilde{r}(\mathbf{\nu})\) from the gamma process, \(\text{Gamma}(\widetilde{\alpha},\widetilde{\beta}(\mathbf{\nu}))\). We normalize the realizations \(\widetilde{r}(\mathbf{\nu})\) such that \(\max\left\{\widetilde{r}(\mathbf{\nu})\right\}=1\) and introduce an additional parameter to control the amplitudes of the realizations. With an amplitude parameter \(A\), we sample a normalized shape \(\widetilde{r}_{N}(\mathbf{\nu}\mid\mathbf{\psi})\) of the spectrum and multiply this by a sampled amplitude \(\widetilde{A}\). This procedure results in the following statistical model: \[\begin{split} r(\mathbf{\nu}\mid A,\mathbf{\psi})&\sim A\,\frac{r_{N}(\mathbf{\nu}\mid\mathbf{\psi})}{\max r_{N}(\mathbf{\nu}\mid\mathbf{\psi})},\\ r_{N}(\mathbf{\nu}\mid\mathbf{\psi})&\sim\text{Gamma}(\alpha,\beta(\mathbf{\nu})),\\ A&\sim\pi_{0}(A),\\ \mathbf{\psi}&\sim\pi_{0}(\mathbf{\psi}),\end{split} \tag{10}\] where \(\mathbf{\psi}:=(\alpha,c_{\beta},\sigma_{G},\mu_{\beta},\mathbf{\theta}_{\beta})^{\intercal}\) is a shorthand for the gamma process parameters and \(\pi_{0}(A)\) is a prior distribution for the amplitude \(A\). Example realizations from the above statistical model are shown in Fig. 3. In the following section, we detail how we model additive and multiplicative backgrounds for Raman and CARS spectra using GPs.
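Before moving on, here is a compact sketch of the spectrum generator just described (Eqs. (9) and (10)); the grid and all parameter values are illustrative placeholders, whereas in the actual pipeline they would be drawn from the MCMC posterior and the amplitude prior.

```python
import numpy as np

def sample_spectrum(nu, alpha, mu_beta, sigma_f2, length_scale, sigma_n2,
                    amp, rng):
    """Draw one normalized spectrum realization following Eqs. (9)-(10)."""
    diff = nu[:, None] - nu[None, :]
    cov = (sigma_f2 * np.exp(-0.5 * diff**2 / length_scale**2)
           + sigma_n2 * np.eye(len(nu)))
    L = np.linalg.cholesky(cov)
    beta = np.exp(mu_beta + L @ rng.standard_normal(len(nu)))   # Eq. (9)
    r = rng.gamma(shape=alpha, scale=beta)                      # gamma draw
    return amp * r / r.max()                                    # Eq. (10)

rng = np.random.default_rng(1)
nu = np.linspace(400.0, 1800.0, 256)
spec = sample_spectrum(nu, alpha=2.0, mu_beta=-1.0, sigma_f2=1.0,
                       length_scale=40.0, sigma_n2=1e-6, amp=1.0, rng=rng)
```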
## 3 Additive and multiplicative background models
We propose GPs as a flexible way to randomly draw additive and multiplicative background functions for Raman and CARS spectrum modeling. This is in contrast to more standard polynomial models such as the ones used in [32]. As noted above, we model additive or multiplicative spectral backgrounds as a GP \[e(\mathbf{\nu})\sim\text{GP}(\mu_{e},\Sigma_{e}(\mathbf{\nu},\mathbf{\nu},\mathbf{\theta}_{e})), \tag{11}\] with \(\mu_{e}\in\mathbb{R}\) and \(\Sigma_{e}(\mathbf{\nu},\mathbf{\nu},\mathbf{\theta}_{e})\in\mathbb{R}^{K\times K}\) being a constant mean and the covariance matrix of the GP parameterized according to hyperparameters \(\mathbf{\theta}_{e}\). For the background GP covariance function, we again use the squared exponential kernel \[[\Sigma_{e}(\mathbf{\nu},\mathbf{\nu},\mathbf{\theta}_{e})]_{ij}=\sigma_{e,f}^{2}\exp\left(-\frac{1}{2}\frac{\left(\nu_{i}-\nu_{j}\right)^{2}}{l_{e}^{2}}\right)+\sigma_{e}^{2}\delta(\nu_{i}-\nu_{j}), \tag{12}\] where \([\Sigma_{e}(\mathbf{\nu},\mathbf{\nu},\mathbf{\theta}_{e})]_{ij}\) denotes the \(ij\)th element of the covariance matrix, \(\sigma_{e,f}^{2}\) is the signal variance, \(l_{e}\) is the length scale, and \(\sigma_{e}^{2}\) denotes the noise variance, with \(\mathbf{\theta}_{e}:=(\sigma_{e,f},l_{e},\sigma_{e})^{\intercal}\). Given a measurement of the background process, \(\mathbf{e}:=(e(\nu_{1}),\ldots,e(\nu_{K}))^{\intercal}\), we can formulate a posterior distribution for the background GP parameters \((\mu_{e},\mathbf{\theta}_{e})^{\intercal}\) as \[\pi(\mu_{e},\mathbf{\theta}_{e}\mid\mathbf{e})\propto\mathcal{L}(\mathbf{e}\mid\mu_{e},\mathbf{\theta}_{e})\pi_{0}(\mu_{e},\mathbf{\theta}_{e}), \tag{13}\] where \(\mathcal{L}(\mathbf{e}\mid\mu_{e},\mathbf{\theta}_{e})\) is the GP likelihood and \(\pi_{0}(\mu_{e},\mathbf{\theta}_{e})\) denotes the prior distribution for the GP parameters. The log-likelihood is given as \[\begin{split}\log\mathcal{L}(\mathbf{e}\mid\mu_{e},\mathbf{\theta}_{e})=&-\frac{1}{2}(\mathbf{e}-\mu_{e})^{\intercal}\Sigma_{e}(\mathbf{\nu},\mathbf{\nu},\mathbf{\theta}_{e})^{-1}(\mathbf{e}-\mu_{e})\\ &-\frac{1}{2}\log|\Sigma_{e}(\mathbf{\nu},\mathbf{\nu},\mathbf{\theta}_{e})|-\frac{K}{2}\log 2\pi,\end{split} \tag{14}\] where \(|\Sigma_{e}(\mathbf{\nu},\mathbf{\nu},\mathbf{\theta}_{e})|\) is the determinant of the covariance matrix. Again, we perform the posterior estimation for Eq. (13) by sampling all the model parameters simultaneously using DRAM. Given a posterior \(\pi(\mu_{e},\mathbf{\theta}_{e}\mid\mathbf{e})\), we construct realizations for the spectrum by drawing realizations from the GP predictive distribution. We sample starting and ending points for the background function from priors \(\pi_{0}(e_{\text{start}})\) and \(\pi_{0}(e_{\text{stop}})\), with \(\pi_{0}(e_{\text{start}},e_{\text{stop}})=\pi_{0}(e_{\text{start}})\pi_{0}(e_{\text{stop}})\).
Next, we compute the predictive mean \[e^{*}(\mathbf{\nu}\mid\widetilde{\mu}_{e},\widetilde{\mathbf{\theta}}_{e},\widetilde{\mathbf{e}}_{\text{ss}})=\Sigma_{e}(\mathbf{\nu},\mathbf{\nu}_{\text{ss}};\widetilde{\mathbf{\theta}}_{e})\Sigma_{e}(\mathbf{\nu}_{\text{ss}},\mathbf{\nu}_{\text{ss}};\widetilde{\mathbf{\theta}}_{e})^{-1}(\widetilde{\mathbf{e}}_{\text{ss}}-\widetilde{\mu}_{e})+\widetilde{\mu}_{e}, \tag{15}\] and the predictive covariance \[\begin{split}\Sigma_{e}^{*}(\mathbf{\nu},\mathbf{\nu};\widetilde{\mathbf{\theta}}_{e})=\Sigma_{e}(\mathbf{\nu},\mathbf{\nu};\widetilde{\mathbf{\theta}}_{e})-\Sigma_{e}(\mathbf{\nu},\mathbf{\nu}_{\text{ss}};\widetilde{\mathbf{\theta}}_{e})\Sigma_{e}(\mathbf{\nu}_{\text{ss}},\mathbf{\nu}_{\text{ss}};\widetilde{\mathbf{\theta}}_{e})^{-1}\\ \times\Sigma_{e}(\mathbf{\nu},\mathbf{\nu}_{\text{ss}};\widetilde{\mathbf{\theta}}_{e})^{\intercal},\end{split} \tag{16}\] where \((\widetilde{\mu}_{e},\widetilde{\mathbf{\theta}}_{e})\) are samples from the posterior distribution \(\pi(\mu_{e},\mathbf{\theta}_{e}\mid\mathbf{e})\) obtained via MCMC, and \(\mathbf{\nu}_{\text{ss}}=(\nu_{\text{start}},\nu_{\text{stop}})^{\intercal}\) are the wavenumber locations corresponding to the sampled starting and ending points \(\widetilde{\mathbf{e}}_{\text{ss}}=(\widetilde{e}_{\text{start}},\widetilde{e}_{\text{stop}})^{\intercal}\). Elements of the covariance matrix \(\Sigma_{e}(\mathbf{\nu},\mathbf{\nu};\widetilde{\mathbf{\theta}}_{e})\) are given as defined in Eq. (12), and elements of the covariance matrices \(\Sigma_{e}(\mathbf{\nu},\mathbf{\nu}_{\text{ss}};\widetilde{\mathbf{\theta}}_{e})\) and \(\Sigma_{e}(\mathbf{\nu}_{\text{ss}},\mathbf{\nu}_{\text{ss}};\widetilde{\mathbf{\theta}}_{e})\) are given by the otherwise same covariance function but without the diagonal elements produced by the Dirac delta function. With the above mathematical machinations, we can sample realizations for the background function by \[\widetilde{e}(\mathbf{\nu}\mid\widetilde{\mu}_{e},\widetilde{\mathbf{\theta}}_{e},\widetilde{\mathbf{e}}_{\text{ss}})\sim e^{*}(\mathbf{\nu}\mid\widetilde{\mu}_{e},\widetilde{\mathbf{\theta}}_{e},\widetilde{\mathbf{e}}_{\text{ss}})+L(\mathbf{\nu}\mid\widetilde{\mu}_{e},\widetilde{\mathbf{\theta}}_{e})\mathbf{w}, \tag{17}\] where \(L(\mathbf{\nu}\mid\widetilde{\mu}_{e},\widetilde{\mathbf{\theta}}_{e})\) is the lower triangular Cholesky decomposition matrix of \(\Sigma_{e}^{*}(\mathbf{\nu},\mathbf{\nu};\widetilde{\mathbf{\theta}}_{e})\) and \(\mathbf{w}\in\mathbb{R}^{K\times 1}\) is a Gaussian white noise vector. This is compiled into the following statistical model for the background function sampling: \[\begin{split}\widetilde{e}(\mathbf{\nu}\mid\mu_{e},\mathbf{\theta}_{e},\mathbf{e}_{\text{ss}})&\sim\mathcal{N}(\mathbf{e}^{*},\Sigma^{*}),\\ (\mu_{e},\mathbf{\theta}_{e},\mathbf{e}_{\text{ss}})&\sim\pi(\mu_{e},\mathbf{\theta}_{e}\mid\mathbf{e})\,\pi_{0}(\mathbf{e}_{\text{ss}}),\end{split} \tag{18}\] where \(\pi_{0}(\mathbf{e}_{\text{ss}})=\pi_{0}(e_{\text{start}},e_{\text{stop}})\). Example realizations for a multiplicative background relevant for CARS are shown in Fig. 4.
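The background-sampling procedure of Eqs. (15)-(18) can be sketched as follows; the kernel hyperparameters and endpoint values are placeholders that would, in practice, come from the posterior and the endpoint priors above.

```python
import numpy as np

def sample_background(nu, mu_e, sigma_f2, length_scale, e_start, e_stop,
                      rng, jitter=1e-8):
    """Draw a background realization via Eqs. (15)-(17): condition the GP
    on sampled values at the first and last wavenumbers, then add
    correlated noise from the predictive covariance."""
    nu_ss = np.array([nu[0], nu[-1]])
    e_ss = np.array([e_start, e_stop])

    def k(a, b):  # noise-free squared exponential kernel, cf. Eq. (12)
        return sigma_f2 * np.exp(
            -0.5 * (a[:, None] - b[None, :])**2 / length_scale**2)

    K_ss_inv = np.linalg.inv(k(nu_ss, nu_ss) + jitter * np.eye(2))
    K_xs = k(nu, nu_ss)
    mean = K_xs @ K_ss_inv @ (e_ss - mu_e) + mu_e              # Eq. (15)
    cov = k(nu, nu) - K_xs @ K_ss_inv @ K_xs.T                 # Eq. (16)
    L = np.linalg.cholesky(cov + jitter * np.eye(len(nu)))
    return mean + L @ rng.standard_normal(len(nu))             # Eq. (17)
```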
## 4 Raman and CARS spectrum models
In the preceding two sections, we formulated mathematical procedures to sample synthetic spectrum and background realizations which are statistically similar to measurement data. Below, we combine these two approaches for generating arbitrary amounts of statistically realistic spectral data which are ultimately used for training our Bayesian neural networks. We present two forward models which are used to generate data for Raman measurements with an additive background and CARS measurements with a multiplicative background. Raman spectra \(y(\mathbf{\nu})\) with an additive background \(B(\mathbf{\nu})\) are constructed using \[y(\mathbf{\nu})\sim r(\mathbf{\nu}\mid A,\mathbf{\psi})+B(\mathbf{\nu}), \tag{19}\] where \(r(\mathbf{\nu}\mid A,\mathbf{\psi})\) is distributed according to the model defined in Eq. (10). The background \(B(\mathbf{\nu})\) is sampled with Eq. (18). CARS spectra \(z(\mathbf{\nu})\) are generated similarly to the additive Raman realizations. The CARS model consists of a multiplicative background function \(\varepsilon_{\text{m}}(\mathbf{\nu}\mid\mu_{e},\mathbf{\theta}_{e})\) distorting a CARS spectrum \(S(\mathbf{\nu}\mid B_{\text{NR}},\mathbf{\psi})\), given as \[z(\mathbf{\nu})\sim\varepsilon_{\text{m}}(\mathbf{\nu}\mid\mu_{e},\mathbf{\theta}_{e})S(\mathbf{\nu}\mid B_{\text{NR}},\mathbf{\psi}), \tag{20}\] where the CARS spectrum \(S(\mathbf{\nu}\mid B_{\text{NR}},\mathbf{\psi})\) can be given as \[S(\mathbf{\nu}\mid B_{\text{NR}},\mathbf{\psi})\sim|B_{\text{NR}}+(ir(\mathbf{\nu}\mid A,\mathbf{\psi})-\mathcal{H}\left\{r(\mathbf{\nu}\mid A,\mathbf{\psi})\right\})|^{2}, \tag{21}\] and \(B_{\text{NR}}\sim\pi_{0}(B_{\text{NR}})\) is a non-resonant background inherent to the CARS phenomenon, distributed according to a prior distribution \(\pi_{0}(B_{\text{NR}})\), and \(\mathcal{H}\) denotes the Hilbert transform. The model for the CARS spectrum has been previously used for example in [9, 37].
Fig. 3: Example realizations drawn from the log-Gaussian gamma process model defined in Eq. (10). On the left, realizations for the scale process \(\beta(\mathbf{\nu})\), drawn from a log-Gaussian process. On the right, corresponding realizations from the gamma process. All realizations are normalized and multiplied by a sampled amplitude.
Fig. 4: Example realizations drawn from the background model defined in Eq. (18) for a multiplicative background. The starting and end points are sampled from a prior distribution and the GP predictive mean and covariance are used to sample the background shape.
We show example realizations for the Raman model in Fig. 5 and the CARS model in Fig. 6. We use the two models defined in Eqs. (19) and (20) to generate two synthetic data sets which are used to train two separate Bayesian neural networks. In the following section, we discuss the Bayesian neural network architecture.
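First, for concreteness, the two forward models of Eqs. (19)-(21) can be sketched as below; we use SciPy's `hilbert`, which returns the analytic signal \(r+i\mathcal{H}\{r\}\), as one possible implementation of the Hilbert transform, and the function names are ours.

```python
import numpy as np
from scipy.signal import hilbert

def raman_forward(r, additive_background):
    """Additive Raman model of Eq. (19)."""
    return r + additive_background

def cars_forward(r, multiplicative_background, b_nr):
    """CARS model of Eqs. (20)-(21): a non-resonant constant b_nr plus the
    resonant term i*r - H{r}, squared in modulus and multiplied by the
    background."""
    h_r = np.imag(hilbert(r))            # imaginary part of analytic signal
    s = np.abs(b_nr + 1j * r - h_r) ** 2
    return multiplicative_background * s
```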
## 5 Bayesian neural network architecture
Our neural network architecture used in the experiments is based on the SpecNet architecture [29]. The SpecNet architecture is composed of convolutional layers encoding the input, the measurement spectrum. The encoded information is then decoded using fully-connected hidden layers, resulting in estimates for the underlying true Raman spectrum. We present our changes to the SpecNet architecture below. To achieve a partially-Bayesian neural network [44], we use a Bayesian layer for the first convolutional layer. Additionally, we augment the architecture with a probabilistic output layer. This transforms the neural network estimate into estimates of a stochastic process instead of directly estimating the Raman spectrum. We use a gamma distribution as our output layer, following our formulation of Raman spectra as log-Gaussian gamma processes. We also found that \(L_{1}\) or \(L_{2}\) regularization was not necessary for the deterministic parts of the network and therefore only employ Dropout [53] regularization with the last dense layer of the network. This is in agreement with the documented robustness of Bayesian neural networks with respect to overfitting [39]. The above results in the following partial posterior probability distribution, or cost function, used for training the neural network: \[\pi(\Psi_{\text{D}},\Psi_{\text{S}}\mid R)\propto\mathcal{L}(R\mid\Psi_{\text{D}},\Psi_{\text{S}})\pi_{0}(\Psi_{\text{S}}), \tag{22}\] where \(R\in\mathbb{R}^{I\times J}\) is a data matrix of \(I\) synthetic spectra of length \(J\) generated using either the Raman or CARS forward models in Eqs. (19) and (20), \(\mathcal{L}(R\mid\Psi_{\text{D}},\Psi_{\text{S}})\) denotes the likelihood of the neural network estimate, and \(\pi_{0}(\Psi_{\text{S}})\) is the prior distribution for the stochastic parameters of the network. As our outputs are modeled as gamma-distributed random variables, the likelihood \(\mathcal{L}(R\mid\Psi_{\text{D}},\Psi_{\text{S}})\) is given as \[\mathcal{L}(R\mid\Psi_{\text{D}},\Psi_{\text{S}})=\prod_{i=1}^{I}\prod_{j=1}^{J}\frac{R_{i,j}^{\alpha_{\text{NN},j}-1}\exp(-R_{i,j}/\beta_{\text{NN},j})}{\Gamma(\alpha_{\text{NN},j})\beta_{\text{NN},j}^{\alpha_{\text{NN},j}}}, \tag{23}\] where \(R_{i,j}\) denotes the \(j\)th data point of the \(i\)th spectrum, and \(\alpha_{\text{NN},j}\) and \(\beta_{\text{NN},j}\) are the neural network outputs for the gamma distribution parameters. For the prior distribution \(\pi_{0}(\Psi_{\text{S}})\), we use an independent normal distribution \(\mathcal{N}(0,1)\) for all the weights and biases of the first layer, \(\pi_{0}(\Psi_{\text{S}})\propto\prod_{p=1}^{P}\mathcal{N}(\Psi_{\text{S},p};0,1)\), where \(P\) is the total number of parameters in the first layer and \(\mathcal{N}(\Psi_{\text{S},p};0,1)\) denotes the evaluation of the probability density at the parameter value \(\Psi_{\text{S},p}\). We illustrate the neural network architecture in Fig. 7. In the log-Gaussian gamma process section, we estimate parameters of a doubly-stochastic process via MCMC. The Bayesian neural network architecture proposed here can be seen as an estimate of a _triply_-stochastic process where the neural network outputs are two stochastic process realizations \(\alpha_{\text{NN}}(\mathbf{\nu})\) and \(\beta_{\text{NN}}(\mathbf{\nu})\), an extension to the analytical log-Gaussian gamma process in Section 2, where the log-Gaussian parameterization of the scale process \(\beta(\mathbf{\nu})\) is used for mathematical convenience due to the closed form of the probability density in Eq. (5).
Fig. 5: Example realizations for the Raman spectrum model defined in Eq. (19). The realizations correspond to the log-Gaussian gamma process realizations in Fig. 3.
Fig. 6: Example realizations for the CARS spectrum model defined in Eq. (20). The realizations correspond to the log-Gaussian gamma process realizations in Fig. 3.
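A minimal sketch of such a partially-Bayesian architecture using TensorFlow Probability is given below; the layer counts, kernel sizes, and spectrum length are illustrative placeholders and not the paper's exact SpecNet-based configuration.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfpl = tfp.layers

K = 640  # spectrum length; a placeholder value

# Variational (Flipout) first convolution carrying the weight prior,
# deterministic layers after it, and a gamma-distributed output layer
# as in Eqs. (22)-(23).
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(K, 1)),
    tfpl.Convolution1DFlipout(32, kernel_size=16, padding="same",
                              activation="relu"),
    tf.keras.layers.Conv1D(32, kernel_size=16, padding="same",
                           activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dropout(0.1),
    # Two positive outputs per wavenumber: alpha_NN(nu) and beta_NN(nu).
    tf.keras.layers.Dense(2 * K, activation="softplus"),
    tfpl.DistributionLambda(
        lambda t: tfd.Gamma(concentration=t[..., :K], rate=1.0 / t[..., K:])),
])

# Training minimizes the negative log-likelihood of Eq. (23); the Flipout
# layer adds its own KL (prior) regularization term to the loss.
negloglik = lambda y_true, y_dist: -y_dist.log_prob(y_true)
model.compile(optimizer="adam", loss=negloglik)
```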
## 6 Computational details and prior distributions
We use 4 experimental Raman spectra and 4 CARS spectra to generate the synthetic training data sets. We use a wavelet-based approach [11] to obtain point estimates for the underlying Raman spectra in all 8 cases. Additionally, the method provides point estimates for the additive and multiplicative background signals, which we use to estimate the parameters of the background GP model defined in Eq. (18). We show the point estimates obtained from the Raman data for the Raman spectra and additive backgrounds in Fig. 8, and the point estimates obtained from the CARS data for the Raman spectra and multiplicative backgrounds in Fig. 9. The Raman and CARS measurement sets are used to train their respective Bayesian neural network architectures. It should be noted that for cases with significantly different Raman spectral signatures, such as where the spectra consist of either significantly sharper or wider line shapes, the training should be done using experimental data which contain such features. We run the DRAM algorithm with 5 proposal steps and with a chain length of \(100\,000\) samples for both the log-Gaussian gamma process parameters and the GP parameters. We use a burn-in of \(50\,000\) samples. The prior distributions for the log-Gaussian gamma process likelihood and the GP background likelihood are documented in Table 1. We use TensorFlow and TensorFlow Probability together with Keras to implement the neural network architecture [54, 55, 56]. We use the Adam optimizer for estimating the network parameters \(\Psi_{\text{D}}\) and \(\Psi_{\text{S}}\).
## 7 Results
Results for the experimental Raman spectra are shown in Fig. 11. The experimental details of the CARS samples have been described in detail elsewhere, see for example [37]. The Raman spectra are from an online database of Raman spectra of pigments used in modern and contemporary art (The standard Pigments Checker v.5) [57]. Results for synthetic CARS spectra are shown in Fig. 12 and results for experimental CARS spectra of adenosine phosphate, fructose, glucose, and sucrose are presented in Fig. 13. The spectra themselves were not part of the training data set. The results show the median estimate of the Raman spectrum obtained from the trained Bayesian neural network along with the 90% confidence intervals of the Raman spectrum estimate. We overlay the Raman spectrum estimate with scaled versions of the point estimates in Fig. 9. The point estimates are scaled such that the minima and maxima of the point estimate are equal to the minima and maxima of the median estimate of the Raman spectrum. The results coincide with the overall shape of the point estimates, supporting the validity of the data generation approach and the Bayesian neural network design.
## 8 Conclusions
We propose a novel approach utilizing log-Gaussian gamma processes and Gaussian processes to generate synthetic spectra and additive or multiplicative backgrounds that are statistically similar to experimental measurements, even when using a limited number of experimental spectra. The parameters of these stochastic processes are learned through Markov chain Monte Carlo methods, enabling the generation of extensive training data for neural networks by sampling from Bayesian posterior distributions of the parameters. This data generation method is applied to train two Bayesian neural networks, specifically designed for correcting spectral measurements. One network is tailored for Raman spectra with additive backgrounds, while the other is optimized for coherent anti-Stokes Raman scattering (CARS) spectra with multiplicative backgrounds. Bayesian neural networks expand upon prior research involving neural networks for spectral corrections, offering not only point estimates but also the critical capability of uncertainty quantification.
Our approach is validated using synthetic test data generated from the stochastic processes and experimental Raman spectra of phthalocyanine blue, aniline black, naphthol red, and red 264 pigments, along with experimental CARS spectra of adenosine phosphate, fructose, glucose, and sucrose. The results demonstrate excellent agreement with deterministically obtained point estimates of the Raman spectra, while simultaneously providing valuable uncertainty estimates for the Raman spectrum estimates. ## Conflicts of interest There are no conflicts to declare. ## Acknowledgements The authors were supported by Academy of Finland (grant number 353095).
2304.06681
Exploring Quantum Neural Networks for the Discovery and Implementation of Quantum Error-Correcting Codes
We investigate the use of Quantum Neural Networks for discovering and implementing quantum error-correcting codes. Our research showcases the efficacy of Quantum Neural Networks through the successful implementation of the Bit-Flip quantum error-correcting code using a Quantum Autoencoder, effectively correcting bit-flip errors in arbitrary logical qubit states. Additionally, we employ Quantum Neural Networks to restore states impacted by Amplitude Damping by utilizing an approximative 4-qubit error-correcting codeword. Our models required modification to the initially proposed Quantum Neural Network structure to avoid barren plateaus of the cost function and improve training time. Moreover, we propose a strategy that leverages Quantum Neural Networks to discover new encryption protocols tailored for specific quantum channels. This is exemplified by learning to generate logical qubits explicitly for the bit-flip channel. Our modified Quantum Neural Networks consistently outperformed the standard implementations across all tasks.
A. Chalkiadakis, M. Theocharakis, G. D. Barmparis, G. P. Tsironis
2023-04-13T17:25:20Z
http://arxiv.org/abs/2304.06681v1
Exploring Quantum Neural Networks for the Discovery and Implementation of Quantum Error-Correcting Codes ###### Abstract We investigate the use of Quantum Neural Networks for discovering and implementing quantum error-correcting codes. Our research showcases the efficacy of Quantum Neural Networks through the successful implementation of the Bit-Flip quantum error-correcting code using a Quantum Autoencoder, effectively correcting bit-flip errors in arbitrary logical qubit states. Additionally, we employ Quantum Neural Networks to restore states impacted by Amplitude Damping by utilizing an approximative 4-qubit error-correcting codeword. Our models required modification to the initially proposed Quantum Neural Network structure to avoid barren plateaus of the cost function and improve training time. Moreover, we propose a strategy that leverages Quantum Neural Networks to discover new encryption protocols tailored for specific quantum channels. This is exemplified by learning to generate logical qubits explicitly for the bit-flip channel. Our modified Quantum Neural Networks consistently outperformed the standard implementations across all tasks. ## 2 Introduction Machine learning (ML) and its extension, deep learning, are subfields of artificial intelligence that enable knowledge acquisition through experience rather than explicit instructions. Neural networks, the foundation of deep learning, have become integral to our daily lives. As AI methods are increasingly employed to address complex problems involving large volumes of data, researchers are exploring the potential of neural networks in tackling contemporary scientific challenges. Quantum computing has recently emerged as a highly promising research area, sparking efforts to integrate this potentially transformative technology with artificial intelligence algorithms [1], including neural networks. Classical neural networks (cNNs) have demonstrated remarkable capabilities in machine learning, with quantum counterparts holding the promise of handling complex tasks involving unknown quantum algorithms. By leveraging the back-propagation algorithm, neural networks can identify correlations among intricate data points and extract valuable information, often yielding results unattainable through other means. This foundation has led to the development of QNNs [2, 3, 4], which aim to match or surpass cNNs by utilizing the theoretically superior power of quantum computing devices [5]. In this paper, we will explore an implementation introduced in [6] where the authors demonstrated a method for efficient training of so-called dissipative quantum neural networks (DQNNs) on training data pairs in form of input and desired output quantum states. This version of QNNs acts as direct analogs of fully-connected feed-forward NNs, which trace out qubits from previous layers during the transition to new layers, resulting in energy dissipation, which gave it their name. Prior work on quantum denoising with quantum autoencoders (QAEs) [7, 8] has showcased their ability to denoise specific quantum states, like Greenberger-Horne-Zeilinger (GHZ) states, affected by specific quantum noise. The authors have demonstrated the ability of QAEs in reconstructing noisy states as well as generating noise-free states. Expanding on this work, our objective is to successfully implement general error-correcting codes using QAEs in arbitrary logical qubit states. 
In the following work, we successfully implement the Bit-Flip quantum error-correcting code using a Quantum Autoencoder, demonstrating its ability to correct bit-flip errors in arbitrary logical qubit states. Additionally, we apply QNNs to error-correct states affected by Amplitude Damping, utilizing an approximate 4-qubit error-correcting codeword [9, 10]. Lastly, we propose a strategy employing QNNs to discover new encryption protocols for specific quantum channels. We use a Quantum Neural Network to learn the creation of logical qubits tailored for the bit-flip channel. However, we find that our models can only accomplish this task after modifying their structure. These modifications accelerate the training procedure and help the models avoid barren plateaus of the cost function observed in previous work, ultimately improving training task performance. Our results indicate that modified DQNNs outperform default implementations across all tasks considered in this work. ## 3 Quantum Neural Network's Architecture In this section, we discuss the architecture of QNNs and their implementation. By representing a fully connected feed-forward QNN with a series of consecutive quantum operations, we describe the action of each layer and how the quantum states of the qubits in each layer are obtained. Additionally, we outline the training process for the network, including the evaluation of its performance through a cost function that measures the distance between input and output states. We simulated the quantum circuits that run those networks using the Python package Qiskit, assuming that the qubits are noise-free in general. The code to do so was provided by the authors in [8] and has been upgraded for the purposes of this paper: the nets now include more optimisers1, early stopping, and gradient ascent, as well as the ability to implement Conjugate Layers (introduced in a later section). Footnote 1: Namely, we added RMSprop, Adamax, and Nadam to the already existing SGD and Adam. We denote by \([m_{in},m_{1},...,m_{L},m_{out}]\) a fully connected feed-forward QNN with \(L\) hidden layers, each having \(m_{l}\) neurons. A QNN then simply represents a series of consecutive quantum operations, with its output state being \[\rho^{\text{out}}\;=\mathcal{E}\left(\rho^{\text{in}}\;\right)=\mathcal{E}^{out}\left(\mathcal{E}^{L}\left(\ldots\mathcal{E}^{2}\left(\mathcal{E}^{1}\left(\rho^{\text{in}}\;\right)\right)\ldots\right)\right) \tag{1}\] where the map of layer \(l\), \(\mathcal{E}^{l}\), is defined via \[\mathcal{E}^{l}\left(\rho^{l-1}\right)\equiv\text{Tr}_{l-1}\left(U^{l}\left(\rho^{l-1}\otimes|0\ldots 0\rangle_{l}\langle 0\ldots 0|\right)(U^{l})^{\dagger}\right) \tag{2}\] with \(U^{l}\equiv\prod_{j=1}^{m_{l}}U_{j}^{l}\). The unitary operator, \(U_{j}^{l}\), connects all the qubits of the \(l-1\) layer with the \(j^{th}\) qubit of layer \(l\). The quantum state of the qubits in layer \(l\) is therefore obtained by applying, in ascending order of \(j\), all the unitaries \(U_{j}^{l}\) and then tracing out all qubits of the previous layer. The authors of [11] have demonstrated that this structure is capable of simulating a universal quantum computer [12], which effectively means that any quantum algorithm can be built with this method given enough resources. This is certainly one of the most important features of this implementation of QNNs.
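To make the layer map of Eq. (2) concrete, here is a minimal sketch using Qiskit's `quantum_info` module; the helper name `layer_map` and the toy \([2,1]\) transition are our own illustrations, not the authors' code from [8].

```python
from qiskit.quantum_info import DensityMatrix, partial_trace, random_unitary

def layer_map(rho_in, layer_unitaries, n_in, n_out):
    """Channel of Eq. (2): append n_out ancilla qubits in |0...0>, apply the
    layer unitaries in order, then trace out the n_in input qubits."""
    ancilla = DensityMatrix.from_label("0" * n_out)
    # expand() places rho_in on qubits 0..n_in-1 (Qiskit is little-endian).
    state = rho_in.expand(ancilla)
    for U in layer_unitaries:            # U^l = prod_j U_j^l
        state = state.evolve(U)
    return partial_trace(state, list(range(n_in)))

# Toy [2, 1] transition with a single random 3-qubit unitary.
rho_in = DensityMatrix.from_label("00")
U = random_unitary((2, 2, 2), seed=1)
rho_out = layer_map(rho_in, [U], n_in=2, n_out=1)
print(rho_out.trace())   # ~1: the map is trace-preserving
```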
To train the network in a supervised manner, one has to have access to \(N\) input/target training pairs of the form \(\left\{\left|\psi_{x}^{in}\right\rangle,\left|\psi_{x}^{targ}\right\rangle\right\}_{x\in 1,2,...,N}\) that for concreteness we can assume to take the form \(\left|\psi_{x}^{targ}\right\rangle=V\left|\psi_{x}^{in}\right\rangle\), where \(V\) is an unknown unitary operation that the QNN has to replicate. Then, to evaluate its performance, we define a cost (loss) function that returns the distance between the input and the output states. One natural choice is the Fidelity, \(F\!\left(\left|\psi_{x}^{targ}\right\rangle,\rho_{x}^{out}\right)\), averaged over all the training pairs \[C(\boldsymbol{\kappa})=\frac{1}{N}\sum_{x=1}^{N}\left\langle\psi_{x}^{targ}\right|\rho_{x}^{out}\left|\psi_{x}^{targ}\right\rangle \tag{3}\] where \(\boldsymbol{\kappa}\) is a vector that contains all the parameters of the network. The cost function takes a value of \(1\) when the target and output states are all the same and \(0\) when they are all perpendicular to each other.
Figure 1: (Left) Schematic representation of the first layer of a \([3,2,...]\) QNN. (Right) Quantum circuit that constructs the first layer of that QNN. Note that the unitary \(U_{2}^{1}\) acts on the first 3 and last qubits only.
Naturally, to train the network we have to maximize the cost function by applying a gradient ascent (instead of descent) algorithm2 Footnote 2: Sometimes the cost function is taken to be \(1-C\), so that the more familiar gradient descent algorithms can be used. \[\boldsymbol{\kappa}^{t+1}=\boldsymbol{\kappa}^{t}+\eta\boldsymbol{\nabla}_{\boldsymbol{\kappa}}C(\boldsymbol{\kappa}^{t}) \tag{4}\] where \(t\) denotes the training step or the epoch and \(\eta\) the learning rate, typically a small number that ensures that each gradient step stays in the vicinity where the cost function increases. Moreover, the unitary transformations are parameterized in the following way \[U_{j}^{l}=e^{iK_{j}^{l}} \tag{5}\] with \[K_{j}^{l}=\sum_{\sigma\in P^{\otimes(m_{l-1}+1)}}k_{\sigma}\cdot\sigma \tag{6}\] where \(k_{\sigma}\) are real numbers, the parameters to be learned, and \(P^{\otimes j}\) is the set of all possible tensor products of length \(j\) between the elements of \(P=\{I,X,Y,Z\}\), i.e. \(P^{\otimes 2}=\{II,IX,IY,IZ,XI,XX\ldots\}\). In this way, the \(U_{j}^{l}\) are uniquely defined by their coefficients \(k_{\sigma}\). Between two layers with \(m_{l}\) and \(m_{l+1}\) qubits each, we have \(m_{l+1}\) unitary transformations, each with \(4^{m_{l}+1}\) coefficients. So in total, the number of trainable coefficients that a QNN has is \[\sum_{i=1}^{\ell-1}m_{i+1}\cdot 4^{m_{i}+1} \tag{7}\] As one can observe, this number scales exponentially fast with the number of qubits in a layer. For this reason, it becomes impractical to use classical computers to simulate QNNs that have more than a handful of qubits. Furthermore, at any point in the QNN, to construct the quantum map \(\mathcal{E}^{l}\), we reuse qubits from previous layers by resetting them to the state \(|0\rangle\), in order to use the least amount of qubits possible. The estimation of the cost function (3) is done by implementing the swap test algorithm [13] between the output state of the QNN and the target state, which deduces their closeness (see Appendix A for more information).
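The Pauli-string parameterization of Eqs. (5)-(6) can be sketched in a few lines; the helper names and the toy coefficients are ours, and a production implementation would avoid dense matrix exponentials for larger layers.

```python
import itertools
import numpy as np
from scipy.linalg import expm

PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(label):
    """Tensor product of single-qubit Paulis, e.g. 'XZ' -> X (x) Z."""
    m = np.array([[1.0 + 0j]])
    for p in label:
        m = np.kron(m, PAULIS[p])
    return m

def unitary_from_coeffs(coeffs, n_qubits):
    """Build U = exp(iK) with K = sum_sigma k_sigma * sigma, cf. Eqs. (5)-(6).
    `coeffs` maps Pauli-string labels to real numbers; unspecified strings
    default to zero."""
    K = np.zeros((2**n_qubits, 2**n_qubits), dtype=complex)
    for label in itertools.product("IXYZ", repeat=n_qubits):
        k = coeffs.get("".join(label), 0.0)
        if k != 0.0:
            K += k * pauli_string("".join(label))
    return expm(1j * K)

# Toy example on 2 qubits with two nonzero coefficients.
U = unitary_from_coeffs({"XX": 0.3, "ZI": -0.1}, n_qubits=2)
print(np.allclose(U @ U.conj().T, np.eye(4)))   # unitarity check -> True
```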
In short, by employing a series of consecutive parameterized gates between layers and resetting neurons to \(|0\rangle\) state when transitioning to a new layer we can construct a Quantum Neural Network. Through a supervised training method that utilizes gradient ascent to maximize the cost function, which is chosen to be the Fidelity between the output of the QNN and a target state of our choice, QNNs can effectively learn to perform unknown quantum algorithms. ### Enhancing QNN Performance with Conjugate Layers We'll now introduce the concept of conjugate layers in QNNs, which modify the original architecture of those models to address the challenges associated with training plateaus and training time. Restrictions imposed during the training process in classical Neural Networks can often lead to improved performance and reduced training time. For example, techniques such as dropout [14] and U-nets [15] resolve issues like overfitting and enable the transfer of information between different parts of the network. In their quantum version, conjugate layers serve as a quantum analog of these restrictions, offering similar benefits. The concept behind conjugate layers is straightforward: a conjugate layer can be implemented by replacing the transformation of a layer, \(l\), with the Hermitian conjugate of the unitary transformation of a different layer of choice, call it \(l^{\prime}\) \[U_{conj}^{l}=(U^{l^{\prime}})^{\dagger}=\left(\prod_{j=1}^{m_{l^{\prime}}}U_{ j}^{l^{\prime}}\right)^{\dagger}=\prod_{j=m_{l^{\prime}}}^{1}(U_{j}^{l^{ \prime}})^{\dagger} \tag{8}\] where \[(U_{j}^{l^{\prime}})^{\dagger}=e^{-iK_{j}^{l^{\prime}}} \tag{9}\] Thus, these two layers are trained simultaneously; and in doing so we essentially truncate the number of trainable parameters of the QNN as the parameters comprising layer \(l\) now also characterize layer \(l^{\prime}\). To implement this method effectively, the conjugate layer must apply the conjugate unitary transformations in reverse order. For instance, if layer \(l\) contains \(m\) neurons that are transformed into \(n\) in the subsequent layer, the conjugate layer should have \(n\) neurons that are transformed back into \(m\). As a result, the QNN will exhibit the following structure: \[[...,\underbrace{m,n}_{U^{l}},...,\underbrace{n,m}_{U_{conj}^{l}},...]\] This approach enables the integration of multiple conjugate layers within the QNN architecture, accommodating up to \(q\) conjugate layers in a QNN with a total of \(2q+1\) layers. In this example, we showcase two versions of the same QNN employed for error-correcting Bit Flip errors. In Figure (2), the left circuit represents the vanilla QNN implementation, while the right one replaces the transformation of the output layer with the Hermitian conjugate of the first layer. Observe how the three two-qubit unitary transformations, \(U_{1,2,3}^{2}\), are replaced by a single 4-qubit transformation \((U_{1}^{1})^{\dagger}\). Figure 2: (Left) Vanilla [3,1,3] QNN. (Right) [3,1,3] QNN with a conjugate layer. ## 4 Quantum Autoencoders for Bit Flip Error-Correction Classical autoencoders are typically employed for removing unwanted features, e.g. noise, from the dataset. Quantum Auto-Encoders (QAE) are very much inspired by their classical counterpart and can be set to perform quantum error correction by removing the least relevant features from quantum states, like quantum noise. 
Unlike conventional quantum error correction algorithms that typically require generalized measurements and classical information processing, QAEs are supposed to be capable of performing those tasks autonomously. In this section, we will investigate if QAEs are able to denoise specific types of errors, by utilizing Quantum Error Correcting Codes. The general approach, is to fit the QAE multiple different encoded states, which are corrupted by a quantum channel, denoted as \(\ket{\tilde{\psi}}_{L}\), and then train the QAE to replicate the states with no error, \(\ket{\psi}_{L}\). Our initial focus is on correcting errors generated by the Bit-flip channel. The bit flip channel applies a NOT gate to every qubit with probability \(p\), and is described by the following operation elements: \[E_{0}=\sqrt{1-p}I=\sqrt{1-p}\left[\begin{array}{cc}1&0\\ 0&1\end{array}\right]\quad E_{1}=\sqrt{p}X=\sqrt{p}\left[\begin{array}{cc}0 &1\\ 1&0\end{array}\right] \tag{10}\] \(I\) is the identity matrix which corresponds to the case where qubit state was left uncorrupted and X is the Pauli matrix \(\sigma_{x}\) responsible for the bit flip. To do so, we utilized the 3-qubit error-correcting code by encoding all of the input/target states as follows \[\ket{\psi}=a\ket{0}+b\ket{1}\rightarrow\ket{\psi}_{L}=a\ket{000}+b\ket{111} \tag{11}\] In order to train QAEs to denoise bit flips, we created a training set of 120 input/target pairs of the form \(\left\{\ket{\tilde{\psi}}_{L},\ket{\psi}_{L}\right\}\), where \(\ket{\psi}_{L}\) is one of the following states: \(\ket{0}_{L}\), \(\ket{1}_{L}\), \(\ket{+}_{L}\), \(\ket{-}_{L}\), \(\ket{+i}_{L}\), \(\ket{-i}_{L}\)3. The input states of the set are corrupted by (single qubit) bit-flip errors, with probability \(p=0.2\). Additionally, the models were trained in training sessions. Each session had fixed hyperparameters and continued the training from the model of the previous session. Initial sessions generally had a relatively large learning rate (\(lr=0.25\)) with Adam or Nadam optimizer and a batch size of 20. Latter sessions had decreased lr, vanilla SGD optimizer, and no batch size. This strategy was adapted such that we ensure convergence of the cost function as well as to save computational time. Lastly, the initialization of the QNN's parameters was chosen to be uniformly random between 0 and 1, rather than all being fixed to 0 (as done in [16]), to avoid a plateaued starting point. Footnote 3: \(\ket{+}=\frac{\ket{0}+\ket{1}}{\sqrt{2}}\) and \(\ket{-}=\frac{\ket{0}-\ket{1}}{\sqrt{2}}\). To evaluate the model's performance we created a validation set during training, which ensures that the QAE generalizes and corrects bit-flip errors from any arbitrary state. To this end, we parameterized the (logical) qubit's state in the Bloch sphere representation \[\ket{\psi(\theta,\phi)}_{L}=cos(\theta/2)\ket{0}_{L}+e^{i\phi}sin(\theta/2) \ket{1}_{L}. \tag{12}\] Then created a meshgrid with \(N=20\) different values for the parameters \(\theta\) and \(\phi\), where \(\theta\in[0,\pi]\) and \(\phi\in[0,2\pi]\). The validation set, thus, contained in total \(400(=20\times 20)\) different states, such that they uniformly cover the whole Bloch sphere. Those states were then corrupted with a bit-flip error with \(p=0.2\). 
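A minimal sketch of how such noisy/clean training pairs can be generated, assuming state-vector simulation; the encoding of Eqs. (11)-(12) is built directly in the computational basis, and each physical qubit is flipped independently with probability \(p\) as in Eq. (10). The helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(7)

def logical_state(theta, phi):
    """Encoded logical qubit of Eqs. (11)-(12): a|000> + e^{i phi} b|111>."""
    psi = np.zeros(8, dtype=complex)
    psi[0] = np.cos(theta / 2)                       # |000>
    psi[7] = np.exp(1j * phi) * np.sin(theta / 2)    # |111>
    return psi

def apply_bit_flips(psi, p):
    """Apply the bit-flip channel: X on each qubit with probability p.
    X on qubit q permutes basis-state indices by XOR with (1 << q)."""
    out = psi
    for q in range(3):
        if rng.random() < p:
            idx = np.arange(8) ^ (1 << q)
            out = out[idx]
    return out

# One training pair: corrupted input state, clean target state.
target = logical_state(theta=1.2, phi=0.5)
noisy = apply_bit_flips(target, p=0.2)
```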
The mean Fidelity (\(\equiv\) the Cost on the validation set) is calculated by \[\bar{F}=\frac{1}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi}F(\rho_{out}(\theta,\phi),\left|\psi(\theta,\phi)\right\rangle_{targ})\sin(\theta)d\theta d\phi \tag{13}\] \[\simeq\frac{1}{4\pi}\sum_{i=0}^{N}\sum_{j=0}^{N}F(\rho_{out}(\theta_{i},\phi_{j}),\left|\psi(\theta_{i},\phi_{j})\right\rangle_{targ})\sin(\theta_{i})\Delta\theta\Delta\phi \tag{14}\] with \(\Delta\theta=\pi/N\), \(\Delta\phi=2\pi/N\), \(\theta_{i}=i\Delta\theta\) and \(\phi_{j}=j\Delta\phi\). Following this strategy, we began training multiple models, each with different hyperparameters. Figure (3) depicts the learning period of the first 250, out of 500, epochs of the best-performing model. As we can observe, overfitting is not a concern here; the QAE successfully learns to denoise bit-flips, with the Cost function on both the training and the validation set eventually reaching a value near 1, at around 0.98. Continuing with the testing of the model, it is useful to study the performance of the model given that a bit flip took place in one of the 3 qubits that comprise the logical qubit, or that no error has occurred. To achieve this, we calculated the corresponding conditional Fidelities on 4 validation sets: \[\bar{F}(\rho_{out},\left|\psi\right\rangle\mid\text{bit-flip to qubit 1})=0.96\] \[\bar{F}(\rho_{out},\left|\psi\right\rangle\mid\text{bit-flip to qubit 2})=0.96\] \[\bar{F}(\rho_{out},\left|\psi\right\rangle\mid\text{bit-flip to qubit 3})=0.95\] \[\bar{F}(\rho_{out},\left|\psi\right\rangle\mid\text{no error})=0.97\] This confirms that the QAE has successfully learned to denoise single bit-flip events on (logical) qubits, with high fidelity.
Figure 3: Learning Curve of the \([3,1,3]\) QAE. The dashed line indicates the cost function on the training set, while the continued one, on the validation set. Each color represents a learning session with fixed hyperparameters. The training continues for another 250 epochs (500 in total) but the model showed minimal improvement in those last epochs.
To test the model further, we present the colormaps in figure (4) that depict the fidelity \(F(\rho(\theta,\phi),|\psi(\theta,\phi)\rangle)\) as a function of the parameters \(\theta\) and \(\phi\) on two different validation sets. In the first validation set, the probability of error was set to \(p=1\), while in the second one \(p=0\). With this method, we essentially check if the QAE has developed a bias during training towards any specific region on the Bloch sphere. The colormaps would reveal this kind of bias if their color were not uniform. In such cases, the training set must be modified accordingly to include a representative state from that region, so that the Autoencoder can generalize more efficiently. Figure (4) suggests that there are no areas on the Bloch sphere where our model underperforms.
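For reference, the Riemann-sum approximation of Eq. (14) can be computed as in the sketch below, where `fidelity_fn` stands for any routine that evaluates the fidelity for a given \((\theta,\phi)\); the function name is ours.

```python
import numpy as np

def mean_fidelity(fidelity_fn, n=20):
    """Riemann-sum approximation of Eq. (14): average a fidelity over the
    Bloch sphere with the sin(theta) surface measure."""
    d_theta, d_phi = np.pi / n, 2 * np.pi / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            theta, phi = i * d_theta, j * d_phi
            total += fidelity_fn(theta, phi) * np.sin(theta) * d_theta * d_phi
    return total / (4 * np.pi)

# Sanity check: a constant fidelity of 1 should average to (about) 1.
print(mean_fidelity(lambda theta, phi: 1.0))
```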
#### Limitations of this Approach
While employing pre-existing error-correcting codes in QAE models can be advantageous, it is important to recognize that these models also inherit the limitations of such methods. For instance, if two or more bit-flip errors occur, the QAE will fail to accurately identify and correct the error, leading to improper error correction for that particular state. In an optimal quantum error-correcting algorithm, the probability that the correction fails, i.e., that two or more of the three physical qubits are flipped, is given by: \[3(1-p)p^{2}+p^{3} \tag{15}\] Although the QAE models developed in this work are sub-optimal algorithms, they still demonstrate the ability to recover corrupted states with high fidelity. As a result, their failure probability closely follows the same dependence as that of optimal quantum error-correcting algorithms. However, it is essential to be aware of these limitations when employing QAEs for error correction tasks, as they may impact the overall performance of the system.
Figure 4: Each point in the graphs corresponds to a qubit state on the Bloch sphere (12). The Autoencoder was tested with \(N=1600\) different qubit states.
## 5 Addressing Amplitude Damping Channel Errors with QAEs
While the Bit flip channel serves as an instructive starting point for understanding and testing new error-correcting techniques, its simplicity may not reveal the full extent of potential limitations inherent to the chosen method. Consequently, we have chosen to explore the use of QAEs to correct errors in the Amplitude Damping channel, which describes spontaneous emission in quantum systems and represents a more realistic type of quantum noise. Additionally, efficiently correcting Amplitude Damping errors requires a 4-qubit error-correcting code, further challenging our models. For a single qubit state, the Kraus operators that describe Amplitude Damping are \[E_{0}=\left[\begin{array}{cc}1&0\\ 0&\sqrt{1-\gamma}\end{array}\right],\quad E_{1}=\left[\begin{array}{cc}0&\sqrt{\gamma}\\ 0&0\end{array}\right] \tag{16}\] where \(\gamma\) is the probability of our state emitting a photon. This probability is considered to be small in order to ensure weak interactions of our qubit with its environment. The operator \(E_{1}\) describes the "damping event", where the state \(\left|1\right\rangle\) decays to \(\left|0\right\rangle\) by emitting a photon. The operator \(E_{0}\) leaves \(\left|0\right\rangle\) unchanged, but reduces the amplitude of \(\left|1\right\rangle\). Physically, this implies that there was no "damping event", and thus the environment perceives that the state of our qubit is more likely to be \(\left|0\right\rangle\) rather than \(\left|1\right\rangle\).
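The single-qubit channel of Eq. (16) can be applied to a density matrix in a few lines, as in the sketch below; the function name is ours.

```python
import numpy as np

def amplitude_damping(rho, gamma):
    """Apply the amplitude damping channel of Eq. (16):
    rho -> E0 rho E0^dag + E1 rho E1^dag."""
    E0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    E1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return E0 @ rho @ E0.conj().T + E1 @ rho @ E1.conj().T

# Example: the excited state |1><1| partially decays towards |0><0|.
rho_excited = np.array([[0.0, 0.0], [0.0, 1.0]])
print(amplitude_damping(rho_excited, gamma=0.1))   # diag(0.1, 0.9)
```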
This 4-qubit error-correcting code suggests that an uncorrupted single qubit state should be encoded as \[\left|\psi\right\rangle=a\left|0\right\rangle+b\left|1\right\rangle\rightarrow \left|\psi\right\rangle_{L}=\frac{a}{\sqrt{2}}\left(\left|0000\right\rangle+ \left|1111\right\rangle\right)+\frac{b}{\sqrt{2}}\left(\left|0011\right\rangle +\left|1100\right\rangle\right). \tag{17}\] In the case where we only allow single qubit "damping events" and we use \(\left|\psi\right\rangle_{L}\) as an input state for the Amplitude Damping channel, the output of this quantum operation can be one of the following states \[\left|\phi_{0000}\right\rangle =a\left[\frac{|0000\rangle+(1-\gamma)^{2}|1111\rangle}{\sqrt{2}}\right]+b\left[\frac{(1-\gamma)[|0011\rangle+|1100\rangle]}{\sqrt{2}}\right],\] \[\left|\phi_{1000}\right\rangle =\sqrt{\frac{\gamma(1-\gamma)}{2}}[a(1-\gamma)|0111\rangle+b|0100\rangle], \tag{18}\] \[\left|\phi_{0100}\right\rangle =\sqrt{\frac{\gamma(1-\gamma)}{2}}[a(1-\gamma)|1011\rangle+b|1000\rangle],\] \[\left|\phi_{0010}\right\rangle =\sqrt{\frac{\gamma(1-\gamma)}{2}}[a(1-\gamma)|1101\rangle+b|0001\rangle],\] \[\left|\phi_{0001}\right\rangle =\sqrt{\frac{\gamma(1-\gamma)}{2}}[a(1-\gamma)|1110\rangle+b|0010\rangle].\] where the squared norms of these states give their probabilities of occurring in the mixture. The subscript signifies which operator, \(E_{0}\) or \(E_{1}\), has been applied to the corresponding real qubit of the state presented in equation (17). To error-correct Amplitude Damping, we trained QAEs following a strategy similar to that used for the bit flip channel. However, due to some crucial differences between these two quantum channels, certain adaptations to the training procedure were necessary. First and foremost, for our approximate error-correcting code to be valid, the probability \(\gamma\) of the "damping event" must be kept relatively small. As a consequence, the majority of the corrupted states encountered by the Autoencoder during training will be described by the general quantum state \(\left|\phi_{0000}\right\rangle\), where no "damping event" has occurred. This leads the Autoencoder to develop an inevitable bias towards that quantum state. To mitigate this issue, we introduced a second probability \(p\) that allows us to control the frequency with which each corruption is included in the training set, regardless of the value of \(\gamma\). Lower values of \(p\) result in training sets that predominantly contain states described by \(\left|\phi_{0000}\right\rangle\), whereas higher values of \(p\) yield training sets that mostly consist of states of the form \(\left|\phi_{1000}\right\rangle\), \(\left|\phi_{0100}\right\rangle\), \(\left|\phi_{0010}\right\rangle\), and \(\left|\phi_{0001}\right\rangle\). The training process for the models was conducted in multiple sessions. During each session, we employed the default optimizer, no batch size, and maintained the probability \(\gamma\) at 0.1. Initial training sessions utilized training sets containing 50 input/target state pairs in the form of \(\left|\tilde{\psi}\right\rangle_{L},\left|\psi\right\rangle_{L}\), where \(\left|\psi\right\rangle_{L}\) could be one of the following states: \(\left|0\right\rangle_{L}\), \(\left|1\right\rangle_{L}\), or \(\left|+\right\rangle_{L}\). Additionally, we set the probability \(p\) to 0.2 and the learning rate to 0.2. In later stages of training, we increased the total number of state pairs to 70 and replaced some initial training set states with representative states from regions of the Bloch sphere where our models faced difficulties.
Furthermore, we increased the probability \(p\) to 0.8 and decreased the learning rate to 0.1. The validation set that we used in order to supervise the performance of our models was composed of 400 different states in total. These states were parameterized in the Bloch sphere representation (equation 12), just as in the case of the Bit Flip channel, and we chose 20 values for the parameter \(\theta\in[0,\pi]\) and 20 values for \(\phi\in[0,2\pi]\). The architecture of the Quantum Autoencoders (QAEs) we trained also exhibited some differences. In this case, we were required to use 4 qubits in the input layer, as our encoded state consists of 4 qubits, 1 qubit in the latent space, and 4 qubits in the output layer, resulting in a [4,1,4] Autoencoder. These models also incorporated conjugate layers. Specifically, the unitary operator of the output layer was configured as the Hermitian conjugate of the input layer's unitary operator. Continuing with the testing of the model that produced the most satisfying results during its training, we examined its performance by error-correcting each one of the five possible corruptions that we have discussed in equation (18) separately. To do so, we created 5 different validation sets, each consisting of 1600 different state pairs. Then, we exposed each validation set to a quantum operation that only allowed one of the 5 possible corruptions of Amplitude Damping to take place, and we calculated the fidelity between the output of our model and the target state. Also, the average fidelity was calculated using equation (14). We present our results in the form of colormaps, just as we did in the case of the Bit flip channel, in figure (6). The colormaps indicate that the performance of this model is not optimal. This QAE, with an average cost of 0.8, cannot efficiently error-correct Amplitude Damping, as we would expect the average cost to be no less than 0.95 for every possible corruption, as it was in the case of the Bit flip channel. Unfortunately, from this point on, no matter the combination of hyperparameters that we used, the average Fidelity either remained the same or started dropping. This phenomenon indicates that this QAE has reached a local maximum that prevents it from reaching the global maximum of the cost function that would allow our models to reach higher values of Fidelity. It is possible that models with different architectures or models that utilize larger training sets might perform better, but our limited resources did not allow us to explore these options. Figure 5: Architecture of Autoencoders used for error-correcting the Amplitude Damping. Figure 6: Results of the [4,1,4] QAE when error-correcting each one of the allowed corruptions of equation (18) separately. ## 6 Discovering Codewords with Quantum Neural Networks In previous sections, we used QNNs to denoise specific quantum noisy channels by training them on arbitrary qubit states. This was achieved by utilizing error-correcting codes that encode multiple physical qubits into a logical qubit, allowing the QAE to recognize and correct errors. An intriguing possibility is to let the QNN discover the error-correcting code on its own, which could potentially lead to the identification of previously unknown encodings for unconventional noisy channels. To accomplish this, we propose a strategy that incorporates the noisy Quantum Channel into the neural network structure, rather than training the QAE on noisy quantum states as we have done before.
In other words, we integrate the state preparation circuit (figure (7a)) that constructs and corrupts the logical qubit directly within the QNN. This enables the QNN to determine, during training, the optimal transformation for creating a logical qubit tailored to the specific quantum channel we've selected. With this approach, the QNN now consists of three main components: * The first component, occurring at the initial layer, is the encoding process; the QNN needs to encrypt the input state into a multi-qubit logical state. * Next, the Noisy Quantum Channel corrupts the logical state that the QNN has generated. * Finally, the subsequent layers mimic the structure of a QAE, which we have shown to be effective in denoising corrupted logical states. The first layer must have at least as many qubits as the theoretical minimum needed to construct the logical qubit for the specific quantum channel we've integrated. For instance, for the bit-flip channel, we need a minimum of 3 qubits. If the theoretical minimum is unknown for the quantum channel we've used, then the length of the first layer essentially becomes a hyperparameter. By attempting to reproduce its input, we believe the QNN will adapt to the integrated noise and eventually discover a quantum error-correcting code in the process. As a proof of concept, we will begin by integrating the bit-flip channel inside the QNN, as we are familiar with it and know that the logical qubit requires only 3 (physical) qubits for implementation. The architecture of this QNN will be \([1,3,1,3,1]\) (see figure (7c)), with the bit-flip channel positioned between the first and second layers, immediately after the encryption performed by the input layer. The next two layers form a QAE, as implemented in previous sections, and the output layer must perform the reverse of the transformation that constructs the logical qubit. Once training is complete, we can recover the 3-qubit codeword by examining the transformation of the first layer. The learning curve for the first two sessions of the best-performing model can be observed in figure (8). The training set contained 100 input-target training pairs of the form \(\{\ket{\phi},\ket{\phi}\}\), where \(\ket{\phi}\) is one of the following set of states: \(\{\ket{0},\ket{1},\;\ket{+},\;\ket{-}\}\) (each state is equally likely to appear in the set). The validation set consisted of 400 states designed to cover the Bloch Sphere uniformly. The remaining hyperparameters included the use of stochastic gradient descent (SGD) as the optimizer and no batch size. The integrated bit-flip channel corrupts with a probability of \(p=0.75\) (single-qubit errors), which was chosen to ensure that it outputs the following states with equal probability (otherwise the QNN would tend to develop a bias towards one of the 4 different cases; a small sampling sketch is given below): \[\begin{aligned} &a\left|000\right\rangle+b\left|111\right\rangle, &&\text{no bit-flip,}\\ &a\left|100\right\rangle+b\left|011\right\rangle, &&\text{bit-flip to the 1st qubit,}\\ &a\left|010\right\rangle+b\left|101\right\rangle, &&\text{bit-flip to the 2nd qubit,}\\ &a\left|001\right\rangle+b\left|110\right\rangle, &&\text{bit-flip to the 3rd qubit.}\end{aligned} \tag{19}\] Figure 8: The dashed line indicates the cost function on the training set, while the solid one indicates the cost on the validation set. It is clear that this model reaches a barren plateau. While the model initially seems to be learning, with the cost function eventually reaching a value of around 0.9, it plateaus without further improvement.
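As referenced above, here is a minimal sketch of the corruption sampling, assuming the channel first decides whether to corrupt (probability \(p\)) and then targets one of the 3 qubits uniformly; with \(p=0.75\), each of the four cases of Eq. (19) then occurs with probability 0.25.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def sample_bitflip_case(p=0.75):
    # P(no error) = 1 - p; P(flip qubit k) = p / 3 for k = 1, 2, 3
    if rng.random() >= p:
        return "no bit-flip"
    return f"bit-flip to qubit {rng.integers(1, 4)}"

print(Counter(sample_bitflip_case() for _ in range(100000)))
# each of the four cases appears ~25000 times
```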
This behavior is characteristic of all such models we tested, regardless of the chosen hyperparameters. Although a cost function value of 0.9 is not necessarily poor, the fact that most models rapidly converge to this point and stagnate prompts us to investigate a potential underlying reason. Further insight can be gained by examining the performance of the QNN for the four possible cases that the bit-flip channel can manifest. Figure 9 illustrates the cost function on four validation sets, plotted as colormaps on the Bloch Sphere. In each validation set, the bit-flip channel either corrupts one of the three qubits or leaves them uncorrupted. A clear asymmetry is observed; the mean fidelity when the bit-flip channel does not corrupt the state is far lower than in the other cases: the QNN essentially fails to reconstruct its input, despite explicitly setting the error probability, \(p\), to balance out any potential bias of this kind. Moreover, this phenomenon is common among all other similar models we tested, with the only difference being that they exhibited this asymmetry with any one of the four previously mentioned cases. A possible explanation that can justify the above observations could be that the Cost function has 4 different local maxima, one for each case. Each time we set out to train a new model, it randomly ends up in one such maximum, thus preventing the cost function from reaching the global maximum. Figure 9: The colormaps depict the fidelity, \(F(\rho(\theta,\phi)^{out},\left|\phi(\theta,\phi)\right\rangle^{target})\), of output-target states of the [1,3,1,3,1] QNN on the Bloch sphere in the 4 cases that the bit-flip channel can affect the logical qubit. The QNN fails in cases where the bit flip channel doesn’t corrupt the states, with its Mean Fidelity being 0.6. By modifying the initially theorized structure of these QNNs and introducing conjugate layers into the architecture, we achieved more satisfactory results. This model features a simpler [1, 3, 1] QNN structure, essentially a reverse autoencoder with the first layer acting as the conjugate of the second one, as illustrated in figure (10a). In this design, we have effectively replaced two layers, whose original purpose was to replicate the denoising properties of a QAE, with a single conjugate layer. As a result, we have reduced the number of parameters in the network from 608 to 256, which not only made this model faster to train but also helped it surpass the 0.9 threshold value of the cost function that plagued previous models, as depicted in figure (10b). This effectively means that this model has successfully generated an error-correcting code specifically tailored for the bit-flip channel, whose corrupted states it denoises with high fidelity. The natural question that arises is: How does the encryption developed by the QNN compare to the conventional encryption used for the bit-flip channel? Since the encryption for the bit-flip code is not unique (alternative encryptions with similar properties can easily be found), we expect some differences. To investigate, we need to determine the transformation that the first layer performs on the basis \(\{\ket{0},\ket{1}\}\) and examine the resulting states. Doing so, we obtain \[\ket{0}\rightarrow(0.92+i0.36)\ket{001}+\ldots,\quad\ket{1}\rightarrow(-0.81+i0.57)\ket{011}+\ldots \tag{20}\] where the coefficients following the ellipsis are very small and therefore not presented here.
This result essentially confirms our intuition; the QNN has discovered a new encryption for the bit-flip channel, but it is still fundamentally based on the same principles we are familiar with for these codes. Specifically, the subspace \(\{\ket{001},\ket{011}\}\) is associated with cases of no error, while the other three subspaces correspond to each of the possible corruptions. In conclusion, we demonstrated that a QNN can successfully generate an error-correcting code specifically tailored to address the bit-flip channel, without relying on a priori knowledge of a suitable encryption. By integrating the noisy quantum channel within the QNN and incorporating conjugate layers, we achieved a simpler and more efficient model, overcoming the limitations observed in previous attempts. Interestingly, the QNN-derived encryption, although distinct from conventional encryptions, still adheres to the same fundamental principles. This approach opens up new possibilities for discovering previously unknown encryptions for unconventional noisy channels, further expanding the capabilities and applications of quantum error-correcting codes in the field of quantum computing. ## 7 Conclusions In this paper, we have explored the use of Quantum Autoencoders and Quantum Neural Networks to design and implement quantum error-correcting codes for various quantum channels. Our investigation began with the bit-flip channel, where we successfully employed QAEs to error-correct corrupted states, further extending our approach to the more complex Amplitude Damping channel, which required the use of an approximative error-correcting code. We then shifted our focus to a more ambitious goal: enabling QNNs to discover error-correcting codes without relying on a priori knowledge of suitable encryptions. By integrating the noisy quantum channel within the QNN and incorporating conjugate layers, we successfully demonstrated the ability of the QNN to generate an error-correcting code tailored to address the bit-flip channel. Interestingly, the derived encryption, although distinct from conventional encryptions, adhered to the same fundamental principles. However, we observed that Dissipative Quantum Neural Networks can sometimes exhibit plateaus while training, which prevent them from reaching the global maximum of the cost function and consequently make it challenging to train them for specific purposes. This observation aligns with the findings in [17], where the authors characterize such QNNs as untrainable due to the barren plateaus they tend to fall into during training. While this may dampen the initial enthusiasm for the future of these networks, we believe that innovative solutions may exist to address this issue. In our case, we discovered that conjugate layers can offer some assistance in training these models while also accelerating the training process. Our results indicate that QNNs have the potential to discover previously unknown encryptions for unconventional noisy channels, opening up new possibilities in the field of quantum computing. This work lays the foundation for future research into the development of novel quantum error-correcting codes and their applications in various quantum communication and computation scenarios. As quantum technology continues to advance, the ability to adapt and optimize error-correcting codes for different quantum channels will become increasingly crucial in ensuring reliable and efficient quantum systems. 
Creative approaches to overcome the challenges associated with DQNNs, such as barren plateaus, will be essential in unlocking their full potential.
2306.06952
Numerically stable neural network for simulating Kardar-Parisi-Zhang growth in the presence of uncorrelated and correlated noises
Numerical simulations are essential tools for exploring the dynamic scaling properties of the nonlinear Kardar-Parisi-Zhang (KPZ) equation. Yet the inherent nonlinearity frequently causes numerical divergence within the strong-coupling regime using conventional simulation methods. To sustain numerical stability, previous works either utilized discrete growth models belonging to the KPZ universality class or modified the original nonlinear term with specially designed operators. However, recent studies revealed that these strategies could cause abnormal results. Motivated by the above-mentioned facts, we propose a convolutional neural network-based method to simulate the KPZ equation driven by uncorrelated and correlated noises, aiming to overcome the challenge of numerical divergence and obtain reliable scaling exponents. We first train the neural network to represent the deterministic terms of the KPZ equation in a data-driven manner. Then, we perform simulations for the KPZ equation with various types of temporally and spatially correlated noises. The experimental results demonstrate that our neural network could effectively estimate the scaling exponents while eliminating numerical divergence.
Tianshu Song, Hui Xia
2023-06-12T08:36:25Z
http://arxiv.org/abs/2306.06952v3
Numerically stable neural network for simulating Kardar-Parisi-Zhang growth in the presence of uncorrelated and correlated noises ###### Abstract Numerical simulations are essential tools for exploring the dynamic scaling properties of the nonlinear Kardar-Parisi-Zhang (KPZ) equation. Yet the inherent nonlinearity frequently causes numerical divergence within the strong-coupling regime using conventional simulation methods. To sustain numerical stability, previous works either utilized discrete growth models belonging to the KPZ universality class or modified the original nonlinear term with specially designed operators. However, recent studies revealed that these strategies could cause abnormal results. Motivated by the above-mentioned facts, we propose a convolutional neural network-based method to simulate the KPZ equation driven by uncorrelated and correlated noises, aiming to overcome the challenge of numerical divergence and obtain reliable scaling exponents. We first train the neural network to represent the deterministic terms of the KPZ equation in a data-driven manner. Then, we perform simulations for the KPZ equation with various types of temporally and spatially correlated noises. The experimental results demonstrate that our neural network could effectively estimate the scaling exponents while eliminating numerical divergence. ## I Introduction The Kardar-Parisi-Zhang (KPZ) equation [1] not only achieves remarkable success in the field of surface growth but also plays a vital role in describing other physical phenomena, such as Bose gas [2], particle transport [3], and stirred fluids [4]. The KPZ equation reads \[\frac{\partial h(x,t)}{\partial t}=\nu\nabla^{2}h(x,t)+\frac{\lambda}{2}( \nabla h(x,t))^{2}+\eta(x,t), \tag{1}\] where \(h(x,t)\) is the growth height at position \(x\) and time \(t\), \(\nu\) is the diffusion constant, \(\lambda\) is the coefficient of the nonlinear term characterizing lateral growth, and usually \(\eta(x,t)\) is Gaussian white noise. The universal behaviour and scaling exponents of the KPZ equation driven by this type of noise have been fully explored, and satisfactory results have been achieved in terms of both analytical predictions and numerical simulations. However, there are still inconsistent results for the KPZ equation in the presence of correlated noises. When long-range spatiotemporal correlations are introduced, the noise satisfies \[\left\langle\eta(x,t)\eta\left(x^{\prime},t^{\prime}\right)\right\rangle\sim \left|x-x^{\prime}\right|^{2\rho-1}\left|t-t^{\prime}\right|^{2\theta-1}, \tag{2}\] where \(\rho\) and \(\theta\) are spatial and temporal correlation exponents, respectively. The KPZ equation with long-range correlations is challenging to solve theoretically. Although several theoretical schemes were applied to the correlated growth system [5; 6; 7; 8; 9; 10], these theoretical predictions contain some conflicting results. Meanwhile, previous works [11; 12; 13; 14; 15; 16] performed numerical simulations to explore the scaling properties of the KPZ equation with spatial and temporal correlations. Yet the enormous challenge one must face is that simulating the KPZ equation suffers from numerical instability due to the inherent nonlinearity, which can lead to abnormal growth termination even at early growth times (a minimal finite-difference sketch illustrating this instability is given below). In order to avoid this kind of numerical overflow, previous studies typically designed a few special operators [17; 18] to replace the nonlinear term in Eq. (1).
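To make the divergence problem concrete, here is a minimal sketch (not the paper's neural-network method) of a conventional Euler finite-difference scheme for Eq. (1) in 1+1 dimensions, assuming the common white-noise convention \(\langle\eta\eta^{\prime}\rangle=2D\delta(x-x^{\prime})\delta(t-t^{\prime})\); all parameter values are illustrative, and it is exactly this kind of direct discretization of the nonlinear term that can blow up in the strong-coupling regime.

```python
import numpy as np

# Euler finite-difference scheme for the 1+1 dimensional KPZ equation
# with periodic boundaries.
L, dx, dt, steps = 256, 1.0, 0.01, 10000
nu, lam, D = 1.0, 3.0, 1.0
rng = np.random.default_rng(0)

h = np.zeros(L)
for _ in range(steps):
    lap = (np.roll(h, 1) - 2 * h + np.roll(h, -1)) / dx**2
    grad = (np.roll(h, -1) - np.roll(h, 1)) / (2 * dx)
    # discretized white noise with <eta eta'> = 2 D delta delta
    eta = rng.normal(0.0, np.sqrt(2 * D / (dx * dt)), L)
    h += dt * (nu * lap + 0.5 * lam * grad**2 + eta)
    if not np.isfinite(h).all():
        print("numerical divergence")  # the instability the text describes
        break

# interface width w = sqrt(<(h - <h>)^2>), whose growth yields the exponents
print(np.sqrt(np.mean((h - h.mean()) ** 2)))
```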
It was believed that these operators could leave the universality class unchanged. However, recent works indicated that this strategy for dealing with the inherent nonlinearity may notably lead to a different universality class [14; 15; 16; 19]. Some other works utilized discrete growth models, _e.g._ the ballistic deposition (BD) model [11; 14], to represent the KPZ equation and avoid numerical divergence. However, recent research revealed that evident discrepancies exist between the BD model and the KPZ equation when long-range correlated noise is introduced [15]. Consequently, replacing the nonlinear term and utilizing discrete growth models are suboptimal for obtaining convincing results in the presence of long-range correlations. In recent years, several neural network-based methods have been successively proposed to simulate partial differential equations (PDEs). For example, Raissi _et al._[20] proposed a physics-informed neural network to realize the data-driven solution and data-driven discovery of PDEs. Kochkov _et al._[21] adopted a neural network to approximate the Navier-Stokes equations, which achieved 40-80\(\times\) computational speedups for modeling two-dimensional turbulent flows. Bar-Sinai _et al._[22] employed a neural network to estimate spatial derivatives satisfying the PDEs, and the network achieved highly accurate solutions at an unprecedentedly low resolution. The above neural network-based research mainly explored the effectiveness of neural networks for simulat
2310.08914
Differential Evolution Algorithm based Hyper-Parameters Selection of Convolutional Neural Network for Speech Command Recognition
Speech Command Recognition (SCR), which deals with identification of short uttered speech commands, is crucial for various applications, including IoT devices and assistive technology. Despite the promise shown by Convolutional Neural Networks (CNNs) in SCR tasks, their efficacy relies heavily on hyper-parameter selection, which is typically laborious and time-consuming when done manually. This paper introduces a hyper-parameter selection method for CNNs based on the Differential Evolution (DE) algorithm, aiming to enhance performance in SCR tasks. Training and testing with the Google Speech Command (GSC) dataset, the proposed approach showed effectiveness in classifying speech commands. Moreover, a comparative analysis with Genetic Algorithm based selections and other deep CNN (DCNN) models highlighted the efficiency of the proposed DE algorithm in hyper-parameter selection for CNNs in SCR tasks.
Sandipan Dhar, Anuvab Sen, Aritra Bandyopadhyay, Nanda Dulal Jana, Arjun Ghosh, Zahra Sarayloo
2023-10-13T07:38:03Z
http://arxiv.org/abs/2310.08914v1
Differential Evolution Algorithm based Hyper-Parameters Selection of Convolutional Neural Network for Speech Command Recognition ###### Abstract Speech Command Recognition (SCR), which deals with identification of short uttered speech commands, is crucial for various applications, including IoT devices and assistive technology. Despite the promise shown by Convolutional Neural Networks (CNNs) in SCR tasks, their efficacy relies heavily on hyperparameter selection, which is typically laborious and time-consuming when done manually. This paper introduces a hyperparameter selection method for CNNs based on the Differential Evolution (DE) algorithm, aiming to enhance performance in SCR tasks. Training and testing with the Google Speech Command (GSC) dataset, the proposed approach showed effectiveness in classifying speech commands. Moreover, a comparative analysis with Genetic Algorithm-based selections and other deep CNN (DCNN) models highlighted the efficiency of the proposed DE algorithm in hyperparameter selection for CNNs in SCR tasks. Differential Evolution Algorithm, Genetic Algorithm, Convolutional Neural Network, Hyper-parameters Selection, Meta-heuristics, Speech Command Recognition, Deep Learning ## 1 Introduction Speech Command Recognition (SCR) is a sub-field of Automatic Speech Recognition (ASR) focused on converting short spoken words into text (Patra et al., 2023). It's widely used in Internet of Things (IoT)-based smart home assistants, command-controlled wheelchairs for blind and disabled people, and AI-driven vehicles (Nanavati et al., 2021). Early SCR systems primarily used Hidden Markov Models (HMMs) (Naithani et al., 2018), Gaussian Mixture Models (GMMs) (Saravanan et al., 2020), and Multi-Layered Perceptron models (MLPs) (Ahad et al., 2002). Later, Recurrent Neural Networks (RNNs) (Paul and Paul, 2021) and Long Short-Term Memory networks (LSTMs) (Oruh et al., 2022) yielded significant improvements. However, Convolutional Neural Networks (CNNs), effective in handling 2D data dependencies, emerged as superior alternatives (Nanavati et al., 2021). Various input features have been considered while dealing with SCR tasks, like using a Depth-Wise Separable CNN (DS-CNN) for keyword recognition with mel-frequency spectral coefficients (MFSS) as input feature (Sorensen et al., 2020), using mel-frequency cepstral coefficients (MFCC) as input for deploying a CNN for wheelchair control using speech commands (Bakouri et al., 2022), smoothed-spectrogram, mel-spectrogram, and cochleagram as input features for CNN-based voice command detection (Sharan and Moir, 2018). Kubanek et al. proposed a new approach where MFCC, time and spectrum are combined to be used as speech features for the recognition of speech commands using DCNN model (Kubanek et al., 2019). However, the performance of all these CNNs is highly dependent on selection of several crucial hyper-parameters. CNN models have hyper-parameters like number and type of convolution layers, filter count and size, pooling type, and activation function, which significantly influence performance in classification tasks, including SCR. Typically, hyper-parameters are manually selected based on experience, a process that is both time-consuming and tedious. Therefore, it becomes difficult to obtain the optimal configuration of a CNN model within a reasonable cost [11, 12, 13, 14]. The paper employs the Differential Evolution (DE) algorithm [11][12][13] to optimize CNN hyperparameters for SCR tasks. 
Each individual in the DE algorithm represents a viable CNN architecture, with optimal hyper-parameters determined through standard DE operations like mutation, crossover, and selection. Spectrograms are used as input speech features for the CNN model. The dataset considered in this work is the Google Speech Command (GSC) dataset [2]. The proposed DE algorithm-based hyper-parameters selection approach is compared with the Genetic Algorithm (GA) [10] based hyper-parameter selection, as well as with state-of-the-art deep CNN (DCNN) models, namely ResNet-50, Inception-V3, Xception, VGG-16 and VGG-19, for the SCR task. The work maintains a consistent basic CNN architecture (with a fixed number of convolution, pooling, and fully connected layers) for both DE and GA approaches while implementing automatic hyper-parameter selection. Experimental results demonstrate that the proposed method outperforms the others, achieving higher accuracy. The rest of the paper is organized as follows. Sections 2 and 3 provide detailed overviews of the related work and preliminaries, respectively. Section 4 includes the details of the dataset, training details and experimental setups. In Section 5, the proposed approach is briefly discussed. In Section 6, experimental results are presented and discussed. Finally, Section 7 concludes the paper and provides some aspects of future research. ## 2 Related Work Hyperparameter optimization is a critical research area for achieving high-performance deep learning models. Techniques like Random Search, Grid Search, Bayesian Optimization [15], and Gradient-based Optimization [10] are used to find optimal hyperparameter configurations. Each method offers trade-offs in computational efficiency, exploration of the search space, and exploitation of discerned solutions. Genetic Algorithms were first utilized for modifying Convolutional Neural Network architectures in the late 1990s, subsequently instigating a gamut of applications involving various nature-inspired algorithms in the domain of deep learning models. While many works compare evolutionary algorithms on computational models, no previous study has comprehensively applied evolutionary algorithms such as the Genetic Algorithm and Differential Evolution across Convolutional Neural Network architectures for isolated speech command recognition. These algorithms stand out due to their iterative population-based approaches, stochastic and global search implementation, and versatility in optimizing various problems. In this context, this paper aims to bridge the gap by conducting a comprehensive exploration of the application of nature-inspired and evolutionary algorithms, like Differential Evolution, in optimizing DCNN architectures for SCR. By delving into the intricacies of how these algorithms interact with lightweight CNN structures and comparing their performance in SCR with that of DCNN models, namely VGG-16, VGG-19, ResNet-50, Inception-V3, and Xception, this study aims to develop a clearer understanding of their advantages and limitations for various other speech-related tasks. ## 3 Preliminaries ### Differential Evolution (DE) DE is a population-based optimization algorithm designed for non-linear, multi-modal optimization problems [13]. It iteratively refines a population of candidate solutions (individuals) through mutation, crossover, and selection operators, enhancing the individuals based on existing ones within the population.
In order to apply DE, first a population of \(N\) individuals is created, and each individual is represented by a \(d\)-dimensional vector \(\mathbf{x}_{i}\) (where \(i\) denotes the \(i^{th}\) individual). Thereafter, the population is randomly initialized within the search space. At each iteration, a new population of \(N\) individuals is generated by applying the following mutation, crossover and selection operators. **Mutation:** In the mutation operation, distinct individuals from the population are selected. A widely used mutation scheme is \(DE/rand/1\), where three distinct individuals from the population are randomly selected. Then, a mutant vector (also called donor vector) \(\mathbf{v}_{i}^{g}\) is created as shown in Eq. (1), \[\mathbf{v}_{i}^{g}=\mathbf{x}_{r_{1}}^{g}+F\times\left(\mathbf{x}_{r_{2}}^{g}- \mathbf{x}_{r_{3}}^{g}\right). \tag{1}\] In Eq. (1), \(\mathbf{x}_{r_{1}}^{g}\), \(\mathbf{x}_{r_{2}}^{g}\), and \(\mathbf{x}_{r_{3}}^{g}\) are three distinct individuals (here, \(g\) indicates the generation), \(r_{1}\), \(r_{2}\), and \(r_{3}\) are randomly selected indices, and \(F\) is the scaling factor that controls the magnitude of the mutation. **Crossover:** In the crossover operation, a trial vector \(\mathbf{u}_{i}^{g}\) is generated by combining the donor vector \(\mathbf{v}_{i}^{g}\) and the original vector \(\mathbf{x}_{i}^{g}\). The (binomial) crossover operation is defined as follows, \[\mathbf{u}_{j,i}^{g}=\begin{cases}\mathbf{v}_{j,i}^{g}&\text{if }j_{\text{rand}}(0,1)\leq CR\text{ or }j=\delta\\ \mathbf{x}_{j,i}^{g}&\text{otherwise.}\end{cases} \tag{2}\] In this context, \(\mathbf{u}_{j,i}^{g}\) represents the \(j^{th}\) dimension of the \(i^{th}\) individual at the \(g^{th}\) generation. The crossover rate is denoted by \(CR\), and \(j_{\text{rand}}(0,1)\) represents a random number between 0 and 1 generated for each dimension \(j\). Additionally, \(\delta\) is a randomly selected dimension from the range \((1,d)\), which guarantees that at least one component of \(\mathbf{u}_{i}^{g}\) is taken from \(\mathbf{v}_{i}^{g}\). **Selection:** In the selection operation, the trial vector \(\mathbf{u}_{i}^{g}\) is compared with the original vector \(\mathbf{x}_{i}^{g}\). If the fitness of \(\mathbf{u}_{i}^{g}\) is superior to that of \(\mathbf{x}_{i}^{g}\), then \(\mathbf{x}_{i}^{g}\) is replaced with \(\mathbf{u}_{i}^{g}\) in the next generation. Otherwise, \(\mathbf{x}_{i}^{g}\) is kept unchanged. The above three steps are repeated until a stopping criterion is met (the stopping criterion varies from problem to problem). ## 4 Experimental Details ### Dataset Description The proposed DE-based hyper-parameters selection approach is trained and tested on the Google Speech Command (GSC) dataset (Warden, 2018). In this work, 8 speech commands from the GSC dataset are considered, namely "down", "go", "left", "no", "right", "stop", "up", and "yes". In total, 8,000 speech samples are considered, taking 1,000 samples for each speech command. The dataset is split into training, validation and test sets. In this work, the model is trained with 6,400 training samples and 1,000 validation samples. After the completion of training, the trained model is tested with 600 test samples. The duration of each audio sample is 1 second or less, and the sampling rate is 16 kHz. ### Experimental Setups The experiments of this work are implemented in Python 3.10.11 using three libraries: TensorFlow 2.11.0, TensorFlow's built-in Keras, and NumPy 1.22. The audible speech data samples are preprocessed using Librosa 0.10.0.
The experiments were performed in a Google Colaboratory environment using an A100 GPU. ## 5 Proposed Approach This section explicitly describes the proposed DE algorithm-based hyper-parameter selection of a CNN model for the speech command recognition task. In the pre-processing phase, each speech sample is converted into a mel-spectrogram (Akhter et al., 2022) of shape \(124\times 129\) in order to make the input data compatible with a \(2D\) CNN. First, the overall framework of the proposed method is presented, followed by its main components. These include the encoding scheme, population initialization, fitness evaluation, and the mutation, crossover, and selection operations of DE, all concerning the optimal hyper-parameter selection for the CNN model. ### The Overall Framework The overall framework of the proposed approach is depicted in Fig. 1. The DE algorithm starts with a population of \(N\) individuals, each representing a CNN architecture which is trained on the training dataset (\(D_{train}\)) and evaluated for fitness on the validation dataset (\(D_{valid}\)) in terms of model accuracy. The associated hyper-parameters of the CNN models are evolved through the mutation and crossover operations of DE. These processes are repeated up to a maximum number of generations. The optimal hyper-parameters of the CNN architecture are selected from the best individual based on its fitness value and tested on the test dataset to determine the model's final performance. ### Encoding Scheme Designing an appropriate encoding process is a difficult task in any evolutionary algorithm, as it determines how each individual is represented as a CNN structure. To address this, a standard layer-based encoding scheme is proposed in this work, which adopts the widely popular VGG-16 CNN model design (Simonyan and Zisserman, 2014). The VGG-16 model is composed of three types of layers - convolution, pooling, and fully connected (FC) - arranged sequentially. Each individual's length is fixed at a total of 16 layers, following the VGG-16 model. The hyper-parameters for each layer are determined based on pre-defined ranges for the purpose of designing and training a CNN model. ### Population Initialization The population in this context refers to the collection of individuals that are initially spread throughout the search space. The population, denoted as \(P\), consists of \(N\) individuals represented as \(P=\{\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},...,\mathbf{x}_{N}\}\). Every individual is regarded as a CNN model architecture with a fixed length similar to the VGG-16 model architecture. In addition, the corresponding hyper-parameters of the CNN model are initialized randomly within the set of pre-defined ranges defined in Table 1. A layer-based approach is used to configure the hyper-parameters of each layer type, including convolution filter size, number of filters, activation function, optimizer, drop-out value and number of neurons in FC layers. In this work, the hyper-parameters of the pooling layers are kept the same as in the VGG-16 model. Fig.2 shows an example of a genotype along with its corresponding phenotype. ### Fitness Evaluation In the proposed method, each individual in the population is evaluated based on its fitness. To calculate the fitness, every individual in the population \(P\) is transformed into a CNN architecture and trained with the training dataset \(D_{train}\).
The trained model is then evaluated on the validation dataset \(D_{valid}\) using sparse categorical cross-entropy (Dan et al., 2022) as the fitness function due to its excellent performance in SCR tasks. \begin{table} \begin{tabular}{l l} \hline \hline **Hyper-parameters** & **Hyper-parameters range** \\ \hline Convolution filter size & \{3\(\times\)3, 5\(\times\)5\} \\ Number of filters & \{16, 32, 64, 128, 256, 512\} \\ Activation function & \{’ReLU’, ’SELU’, ’ELU’\} \\ Optimization function & \{’SGD’, ’Adam’, ’Adagrad’, ’Adamax’\} \\ Drop-out rate & \{0.1, 0.2, 0.3, 0.4, 0.5\} \\ Number of neurons & \{128, 256, 512\} \\ \hline \hline \end{tabular} \end{table} Table 1: Hyper-parameters and their ranges considered in the proposed work Figure 1: The working mechanism of the DE algorithm based hyper-parameters selection approach for the SCR task. ### Mutation In DE, a mutant or donor vector is obtained by applying different mutation operations to the original vector of the current generation. In this study, the \(DE/rand/1\) mutation scheme is used for simplicity and greater diversity in the hyper-parameters of the CNN architecture at each generation. During the mutation phase, as described in Eq. (1), a basic difference calculation is employed to compare the hyper-parameters of the chosen CNN models. In the proposed approach, two individuals (\(\mathbf{x}_{r_{2}}\neq\mathbf{x}_{r_{3}}\)) are selected randomly from the population \(P\) which are different from the original vector \(\mathbf{x}_{i}\). Then, the difference (\(\mathbf{x}_{r_{2}}-\mathbf{x}_{r_{3}}\)) is calculated based on the hyper-parameter values for each layer of the CNN. After performing the difference calculation, the range of hyper-parameters for each layer is checked by boundary checking to ensure that they fall within the specified limits. Next, the proposed approach selects another random individual, denoted as \(\mathbf{x}_{r_{1}}\), and performs the computation with (\(\mathbf{x}_{r_{2}}-\mathbf{x}_{r_{3}}\)) to generate a donor vector (also called a mutant vector) \(\mathbf{v}_{i}\) based on a scaling factor \(F\). For this purpose, a random number \(r\in[0,1]\) is generated for each dimension of \(\mathbf{x}_{r_{1}}\). If \(r\leq F\), the proposed method chooses a layer from \(\mathbf{x}_{r_{1}}\). Otherwise, it selects a layer from \((\mathbf{x}_{r_{2}}-\mathbf{x}_{r_{3}})\). Eq. (3) specifies the mutation operation, where \(\mathbf{v}_{j,i}\) represents the \(j^{th}\) dimension of the \(i^{th}\) individual in the population \(P\). \[\mathbf{v}_{j,i}=\begin{cases}\mathbf{x}_{j,r_{1}}&\text{if }r\leq F\\ \left|\mathbf{x}_{j,r_{2}}-\mathbf{x}_{j,r_{3}}\right|&\text{otherwise}\end{cases} \tag{3}\] Since we cannot calculate \((\mathbf{x}_{r_{2}}-\mathbf{x}_{r_{3}})\) for activation functions, as there is no defined "difference" between them, we follow an encoding and rounding-off strategy, encoding the activation functions with integers and then performing rounding off and boundary checking while decoding. ### Crossover To boost population diversity, a crossover operation follows the mutation operation in DE, exchanging components between the donor vector \(\mathbf{v}_{i}\) and the target vector \(\mathbf{x}_{i}\) to form a new trial vector \(\mathbf{u}_{i}\). Binomial crossover is employed, with the trial vector formation guided by the crossover rate \(CR\) and a random number \(\delta\). We define \(\delta\) randomly as one of the \(j^{th}\) components of \(\mathbf{v}_{i}\).
Another random number \(j_{\text{rand}}(0,1)\) is assigned for each dimension (\(j\)) of \(\mathbf{u}_{i}\), which has the same length as \(\mathbf{v}_{i}\). If the randomly generated number \(j_{\text{rand}}(0,1)\) is less than or equal to the crossover rate \(CR\), or if \(j\) is equal to \(\delta\), then the \(j^{th}\) value from the donor vector \(\mathbf{v}_{i}\) is selected. Otherwise, the \(j^{th}\) value is taken from the target vector \(\mathbf{x}_{i}\). The proposed crossover operation is mathematically represented in Eq. (4), where the trial vector \(\mathbf{u}_{j,i}\) represents the \(j^{th}\) dimension of the \(i^{th}\) individual for the target vector \(\mathbf{x}_{i}\). \[\mathbf{u}_{j,i}=\begin{cases}\mathbf{v}_{j,i}&\text{if }j_{\text{rand}}(0,1)\leq CR\text{ or }j=\delta\\ \mathbf{x}_{j,i}&\text{otherwise}\end{cases} \tag{4}\] ### Selection The selection stage chooses either the target vector \(\mathbf{x}_{i}\) or the trial vector \(\mathbf{u}_{i}\) for the next generation based on their fitness values \(f\), ensuring a constant population size across generations for stability. Each \(\mathbf{x}_{i}\) in the population \(P\) is evaluated for its fitness, denoted as \(f(\mathbf{x}_{i})\), using the fitness function. Also, the fitness of the generated \(\mathbf{u}_{i}\) is calculated using the same fitness function as for each \(\mathbf{x}_{i}\) and represented as \(f(\mathbf{u}_{i})\). For the subsequent generation, i.e., \((g+1)\), the individual with the higher fitness value is selected. Eq. (5) mathematically presents the selection strategy used in our proposed work. \[\mathbf{x}_{i}^{g+1}=\begin{cases}\mathbf{u}_{i}^{g}&\text{if }f(\mathbf{x}_{i}^{g})\leq f(\mathbf{u}_{i}^{g})\\ \mathbf{x}_{i}^{g}&\text{otherwise}\end{cases} \tag{5}\] A pseudocode-style sketch of the proposed Differential Evolution search, including the integer encoding of categorical hyper-parameters, is given below, after the results overview. ## 6 Results and Discussion The parameter settings of the proposed work are based on a literature review of conventional DE (Das and Suganthan, 2011) and deep learning (DL) (Guo et al., 2016) implementations, along with our limited computational resources. The population size and maximum number of generations are fixed at 10 throughout the proposed algorithm. The DE scaling factor is fixed at 0.6. Furthermore, to train the generated CNN models, we have used Xavier weight initialization (Chang et al., 2020) with a learning rate of 0.001 due to its effective utilization in the domain of DL. To enhance the training speed, we have incorporated batch normalization (BN) (Ioffe and Szegedy, 2015) with a batch size of 32, along with a 25% dropout rate. The fitness calculation is conducted for each epoch throughout the evaluation procedure. The final CNN model architecture obtained from this proposed method is tested using the test dataset to evaluate its performance. Figure 2: An example of genotype with its phenotype. The acronym used in genotype is FS (Convolution Filter Size), NOF (Number of Filters), ACT (Activation function), OPT (Optimization function), DP (Drop-out rate), NON (Number of Neurons). In Fig.3, the generation-wise performance of the best networks of the proposed DE-based hyper-parameters selection approach is shown. The best network for each generation indicates the best selection of hyper-parameters for the respective CNN network. As shown in Fig.3, the highest test accuracy obtained is 0.915 (i.e. 91.5%) for generation number 3. In Fig.4, the hyper-parameters of the CNN model for which the highest accuracy is obtained are presented.
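As referenced above, the following is a hedged, pseudocode-style sketch of the proposed DE search loop. The names `train_and_score` (standing in for building, training, and validating the VGG-16-style CNN encoded by an individual) and `RANGES` are illustrative placeholders; the value ranges follow Table 1, the control parameters follow Section 6, and categorical genes are handled through the integer encoding and rounding of Section 5.5.

```python
import random

# Each gene is stored as an *index* into its range (Table 1), so the DE
# arithmetic of Eqs. (3)-(5) applies uniformly to all hyper-parameters.
RANGES = {
    "filter_size": ["3x3", "5x5"],
    "n_filters": [16, 32, 64, 128, 256, 512],
    "activation": ["ReLU", "SELU", "ELU"],
    "optimizer": ["SGD", "Adam", "Adagrad", "Adamax"],
    "dropout": [0.1, 0.2, 0.3, 0.4, 0.5],
    "n_neurons": [128, 256, 512],
}
KEYS = list(RANGES)
POP_SIZE, GENERATIONS, F, CR = 10, 10, 0.6, 0.9

def clip_round(v, key):
    # rounding off + boundary checking of an integer-encoded gene (Sec. 5.5)
    return int(min(max(round(abs(v)), 0), len(RANGES[key]) - 1))

def de_search(train_and_score):
    pop = [{k: random.randrange(len(RANGES[k])) for k in KEYS}
           for _ in range(POP_SIZE)]
    fit = [train_and_score(x) for x in pop]
    for _ in range(GENERATIONS):
        for i in range(POP_SIZE):
            r1, r2, r3 = random.sample([k for k in range(POP_SIZE) if k != i], 3)
            delta = random.choice(KEYS)
            u = {}
            for k in KEYS:
                # mutation on the integer codes (Eq. (3))
                v = pop[r1][k] if random.random() <= F else abs(pop[r2][k] - pop[r3][k])
                # binomial crossover (Eq. (4))
                u[k] = clip_round(v, k) if (random.random() <= CR or k == delta) else pop[i][k]
            fu = train_and_score(u)
            if fit[i] <= fu:  # greedy selection keeps the fitter vector (Eq. (5))
                pop[i], fit[i] = u, fu
    best = max(range(POP_SIZE), key=fit.__getitem__)
    return {k: RANGES[k][pop[best][k]] for k in KEYS}, fit[best]

# Example with a dummy fitness standing in for CNN training + validation:
print(de_search(lambda x: -sum(x.values())))
```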
The proposed approach is also compared with the GA-based hyper-parameter selection approach. In the GA-based approach, each chromosome is selected in each generation from a population of size 15. The generation-wise accuracy plot for the GA-based hyper-parameters selection approach is shown in Fig.5. From Fig.5, it can be observed that the highest accuracy obtained is 0.877 (i.e. 87.7%) for generation number 10. In Fig.6, the hyper-parameters of the CNN model for which the highest accuracy is obtained are presented. However, from both Fig.3 and Fig.5 it can be clearly observed that the performance (in terms of accuracy) of the DE-based hyper-parameter selection approach is better than that of the GA-based hyper-parameter selection approach. The performance of both the DE and GA approaches is also compared with the ResNet-50, Inception-V3, Xception, VGG-16, and VGG-19 models for the SCR task considering the test dataset. Table 2 presents the average precision, recall, F1-score, and test accuracy of all models considered, highlighting the superior performance of the proposed DE-based CNN model. While the ResNet-50 model also performs significantly better than the other considered DCNN models, it is outshined by the proposed model. However, from Table 2 it is clearly observed that the accuracy of the CNN model obtained from the proposed approach is higher than that of the ResNet-50 model for the SCR task. Figure 3: Generation wise accuracy plot for the proposed DE-based hyper-parameters selection approach. Figure 4: Hyper-parameters of the best CNN model (in terms of accuracy) obtained using DE-based hyper-parameters selection approach. Figure 5: Generation wise accuracy plot for the proposed GA-based hyper-parameters selection approach. Fig.7 shows the confusion matrix obtained from evaluating the model's class-wise prediction accuracy for the DE approach on the test dataset. From Fig.7, it can be concluded that the CNN model obtained from the proposed approach has shown significant performance (in terms of accuracy) for all the considered classes. The superior performance of the DE-optimized CNN model is due to its effective exploration of the search space, utilizing parameter vector differences to exploit promising regions for optimal solutions, unlike genetic algorithms that may converge to local minima. The mutation operator prevents early convergence through random perturbations, while the crossover operator accelerates convergence by exchanging useful features. The selection operator preserves the fittest individuals, enhancing the quality of solutions. Therefore, in Table 3 the class-wise precision, recall, and F1-score are also provided to show the performance of the obtained CNN model for all the considered speech commands of the GSC dataset. ## 7 Conclusion This paper proposes an efficient Differential Evolution (DE)-based approach for selecting CNN hyper-parameters automatically, aiming to enhance Speech Command Recognition (SCR) tasks. Unlike tedious manual selection, DE, a global optimization algorithm, avoids the local optima entrapments common in Grid Search, promoting more efficient global optimum identification. Furthermore, evolutionary algorithms like DE inherently minimize user bias - when hyper-parameters are manually selected, they are often influenced by an individual's past experiences or preconceived notions, which can skew the optimization process.
The proposed DE-based hyper-parameter selection approach outperformed the GA-based approach and the other considered DCNN models in SCR tasks. The improved performance is attributed to DE's superior search-space navigation and global optimum identification abilities. Unlike GA, DE requires fewer control parameters and has demonstrated robustness across various optimization problems. Future work may extend this approach to evolutionary algorithm-based speech feature selection for diverse speech-based applications.
2303.06815
On Model Compression for Neural Networks: Framework, Algorithm, and Convergence Guarantee
Model compression is a crucial part of deploying neural networks (NNs), especially when the memory and storage of computing devices are limited in many applications. This paper focuses on two model compression techniques: low-rank approximation and weight pruning in neural networks, which are very popular nowadays. However, training NN with low-rank approximation and weight pruning always suffers significant accuracy loss and convergence issues. In this paper, a holistic framework is proposed for model compression from a novel perspective of nonconvex optimization by designing an appropriate objective function. Then, we introduce NN-BCD, a block coordinate descent (BCD) algorithm to solve the nonconvex optimization. One advantage of our algorithm is that an efficient iteration scheme can be derived with closed-form, which is gradient-free. Therefore, our algorithm will not suffer from vanishing/exploding gradient problems. Furthermore, with the Kurdyka-{\L}ojasiewicz (K{\L}) property of our objective function, we show that our algorithm globally converges to a critical point at the rate of O(1/k), where k denotes the number of iterations. Lastly, extensive experiments with tensor train decomposition and weight pruning demonstrate the efficiency and superior performance of the proposed framework. Our code implementation is available at https://github.com/ChenyangLi-97/NN-BCD
Chenyang Li, Jihoon Chung, Mengnan Du, Haimin Wang, Xianlian Zhou, Bo Shen
2023-03-13T02:14:42Z
http://arxiv.org/abs/2303.06815v3
# Provable Convergence of Tensor Decomposition-Based Neural Network Training **Chenyang Li, Bo Shen* Department of Mechanical and Industrial Engineering, New Jersey Institute of Technology *Corresponding Author: [email protected]** **Abstract** Advanced tensor decomposition, such as tensor train (TT), has been widely studied for tensor decomposition-based neural network (NN) training, which is one of the most common model compression methods. However, training NNs with tensor decomposition always suffers from significant accuracy loss and convergence issues. In this paper, a holistic framework is proposed for tensor decomposition-based NN training by formulating TT decomposition-based NN training as a nonconvex optimization problem. This problem can be solved by the proposed tensor block coordinate descent (tenBCD) method, which is a _gradient-free_ algorithm. The global convergence of tenBCD to a critical point at a rate of \(\mathcal{O}(1/k)\) is established with the Kurdyka-Łojasiewicz (KŁ) property, where \(k\) is the number of iterations. The theoretical results can be extended to the popular residual neural networks (ResNets). The effectiveness and efficiency of our proposed framework are verified through an image classification dataset, where our proposed method can converge efficiently in training and prevent overfitting. **Keywords** Model Compression, Tensor Train Decomposition, Global Convergence, Gradient-free Training. ## 1 Introduction Neural networks (NNs) have revolutionized many facets of our modern society, such as image classification [1], object detection [2, 3], speech recognition [4], etc. These advances have become possible because of algorithmic advances, large amounts of available data, and modern hardware. Despite their widespread success and popularity, there still remains a significant challenge in executing NNs with many parameters on edge devices. For most embedded and Internet-of-Things (IoT) systems, the sizes of many state-of-the-art NN models are too large, thereby causing high storage and computational demands and severely hindering the practical deployment of NNs. For example, wearable robots [5, 6], such as exoskeletons, typically have limited processing power, memory, storage, and energy supply due to their small size and portability. In addition, these wearable devices rely on wireless communication with remote servers, and larger models would require more bandwidth and incur higher latency, leading to slower and less reliable performance. To address this issue, numerous model compression techniques have been proposed in the literature, which can be summarized into the following categories. (1) Pruning [7, 8, 9]: this technique involves removing unnecessary connections or neurons from a pre-trained model. This can result in a smaller network with similar performance. (2) Quantization [10, 11]: this involves reducing the number of bits required to represent the weights and activations in a neural network. For example, weights and activations may be represented using 8-bit integers instead of 32-bit floating-point numbers. (3) Structured sparsity [12]: this involves imposing a structured sparsity pattern on the weights of a model, such as by sparsifying entire rows or columns of weight matrices. (4) Knowledge distillation [13]: this involves training a smaller model to mimic the behavior of a larger, more complex model, using the outputs of the larger model as labels.
(5) Low-rank approximation [14]: this technique involves approximating the weight matrices/tensors of a deep learning model with low-rank matrices/tensors. Among all model compression methods, low-rank approximation, especially tensor decomposition [15], is an extremely attractive NN model compression technique since it can reduce the number of parameters in a model while maintaining a high level of accuracy. Specifically, tensor decomposition is a mathematical tool that explores the low tensor rank characteristics of large-scale tensor data, and it stands out by offering an ultra-high compression ratio. By utilizing advanced tensor decomposition techniques like tensor train (TT) [16], it is possible to achieve more than a 1,000\(\times\) reduction in parameters for the input-to-hidden layers of neural network models [17; 18]. Moreover, these compression methods can also significantly enhance the classification accuracy in video recognition tasks. Given such impressive compression performance, there has been a surge of interest in exploring the potential of tensor decomposition-based neural network models in prior research efforts [19]. Due to the benefits brought by TT-based NN models, several TT-based NN hardware accelerators have been developed and implemented in different chip formats, including digital CMOS ASIC [20], memristor ASIC [21] and IoT board [22]. Although tensor decomposition shows strong compression performance, the training of tensor decomposition-based NNs is quite a challenging task [23; 19] because it couples tensor decomposition with NN training. In general, there are two ways to use tensor decomposition to obtain a compressed model: (1) train from scratch in the decomposed format, and (2) decompose a pre-trained uncompressed model and then retrain. In the first case, when the required tensor decomposition-based (e.g., TT-format) model is directly trained from scratch, the structure of the model is already pre-set to a low tensor rank format before training; the corresponding model capacity is therefore typically limited compared to the full-rank structure, making the training process very sensitive to initialization and high accuracy more challenging to achieve. In the latter scenario, though the pre-trained uncompressed model provides a good initialization, straightforwardly decomposing the full-rank uncompressed model into a low tensor rank format causes an inevitable and non-negligible approximation error, which is still very difficult to recover from even after a long re-training period. No matter which tensor decomposition training strategy is adopted, the training of NNs heavily relies on gradient-based methods, which make use of backpropagation [24] to compute the gradients of the network parameters. These gradient-based methods are based on the Stochastic Gradient Descent (SGD) method [25]. In recent years, a considerable amount of research has been dedicated to developing adaptive versions of the vanilla SGD algorithm. These adaptive variants include AdaGrad [26], RMSProp [27], Adam [28], and AMSGrad [29]. Despite the great success of these gradient-based methods, tensor decomposition always brings a linear increase in network depth, which implies that training tensor decomposition-format NNs is typically more prone to the gradient vanishing problem [30] and hence more difficult to train well. This paper aims to address the current limitations and fully unlock the potential of tensor decomposition-based NN training.
To achieve this objective, a holistic framework for tensor decomposition-based NN training is proposed, which formulates tensor train decomposition-based NN training as a nonconvex optimization problem. This problem can be solved by the proposed tensor block coordinate descent (tenBCD) method. BCD is a _gradient-free_ method that has been recently adapted to NN training [31, 32]. The surge of attention in BCD algorithms has two main reasons. One reason is that they are _gradient-free_, and thus are able to deal with non-differentiable nonlinearities and potentially avoid the vanishing gradient issue. The other reason is that BCD can be easily implemented in a distributed and parallel manner, and is therefore well suited to distributed/federated scenarios. To summarize, the contributions of this paper are as follows:

* A holistic framework is proposed for tensor decomposition-based NN training, which involves a highly nonconvex optimization problem;
* An efficient tensor BCD (tenBCD) algorithm is implemented to solve the proposed optimization problem;
* Convergence of the iterative sequence generated by the tenBCD algorithm is analyzed, and the sequence is proved to be globally convergent to a critical point at a rate of \(\mathcal{O}(1/k)\).

## 2 Background and Preliminaries

In Section 2.1, the notation and basics of multi-linear/tensor algebra used in this paper are reviewed. Then, tensor train decomposition [16] is reviewed briefly in Section 2.2. Afterward, the tensor train fully-connected layer [33] is reviewed in Section 2.3.

### Notation and Tensor Basis

Throughout this paper, scalars are denoted by lowercase letters, e.g., \(x\); vectors are denoted by lowercase boldface letters, e.g., \(\mathbf{x}\); matrices are denoted by uppercase boldface letters, e.g., \(\mathbf{X}\); and tensors are denoted by calligraphic letters, e.g., \(\mathbf{\mathcal{X}}\). The order of a tensor is the number of its modes or dimensions. A real-valued tensor of order \(d\) is denoted by \(\mathbf{\mathcal{X}}\in\mathbb{R}^{n_{1}\times n_{2}\times\cdots\times n_{d}}\) and its entries by \(\mathbf{\mathcal{X}}(i_{1},\cdots,i_{d})\). The inner product of two same-sized tensors \(\mathbf{\mathcal{X}}\) and \(\mathbf{\mathcal{Y}}\) is the sum of the products of their entries, namely, \(\langle\mathbf{\mathcal{X}},\mathbf{\mathcal{Y}}\rangle=\sum_{i_{1}}\cdots\sum_{i_{d}}\mathbf{\mathcal{X}}\left(i_{1},\ldots,i_{d}\right)\cdot\mathbf{\mathcal{Y}}\left(i_{1},\ldots,i_{d}\right)\). Following the definition of the inner product, the Frobenius norm of a tensor \(\mathbf{\mathcal{X}}\) is defined as \(\left\|\mathbf{\mathcal{X}}\right\|_{F}=\sqrt{\langle\mathbf{\mathcal{X}},\mathbf{\mathcal{X}}\rangle}\).
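To make the notation concrete, here is a minimal NumPy sketch (ours, not part of the original text) of the tensor inner product and Frobenius norm for order-3 tensors.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))  # two same-sized order-3 tensors
Y = rng.standard_normal((4, 5, 6))

# Inner product <X, Y>: sum of the entrywise products over all modes
inner = (X * Y).sum()

# Frobenius norm ||X||_F = sqrt(<X, X>)
fro = np.sqrt((X * X).sum())
assert np.isclose(fro, np.linalg.norm(X.ravel()))
```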
### Tensor Train (TT) Decomposition

Given a tensor \(\mathbf{\mathcal{A}}\in\mathbb{R}^{n_{1}\times n_{2}\times\cdots\times n_{d}}\), it can be decomposed into a set of 3-order tensors via Tensor Train Decomposition (TTD) [16] as follows:

\[\begin{split}\mathbf{\mathcal{A}}(i_{1},i_{2},\cdots,i_{d})&=\mathbf{\mathcal{G}}_{1}(:,i_{1},:)\mathbf{\mathcal{G}}_{2}(:,i_{2},:)\cdots\mathbf{\mathcal{G}}_{d}(:,i_{d},:)\\ &=\sum_{\alpha_{0},\alpha_{1}\cdots\alpha_{d}}^{r_{0},r_{1},\cdots r_{d}}\mathbf{\mathcal{G}}_{1}(\alpha_{0},i_{1},\alpha_{1})\mathbf{\mathcal{G}}_{2}(\alpha_{1},i_{2},\alpha_{2})\cdots\mathbf{\mathcal{G}}_{d}(\alpha_{d-1},i_{d},\alpha_{d}),\end{split} \tag{1}\]

where \(\mathbf{\mathcal{G}}_{k}\in\mathbb{R}^{r_{k-1}\times n_{k}\times r_{k}}\) are called TT-cores for \(k=1,2,\cdots,d\), and \(\mathbf{r}=[r_{0},r_{1},\cdots,r_{d}],r_{0}=r_{d}=1\) are called TT-ranks, which determine the storage complexity of a TT-format tensor. The representation of \(\mathbf{\mathcal{A}}\) via the explicit enumeration of all its entries requires storing \(\Pi_{k=1}^{d}n_{k}\) numbers, compared with \(\sum_{k=1}^{d}n_{k}r_{k-1}r_{k}\) numbers if the tensor is stored in TT-format.

### Tensor Train Fully-Connected Layer

Consider a simple fully-connected layer with weight matrix \(\mathbf{W}\in\mathbb{R}^{M\times N}\) and input \(\mathbf{x}\in\mathbb{R}^{N}\), where \(M=\prod_{k=1}^{d}m_{k}\) and \(N=\prod_{k=1}^{d}n_{k}\); the output \(\mathbf{y}\in\mathbb{R}^{M}\) is obtained by \(\mathbf{y}=\mathbf{W}\mathbf{x}\). In order to transform this standard layer into a TT fully-connected (TT-FC) layer, the weight matrix \(\mathbf{W}\) is first tensorized to a \(d\)-order weight tensor \(\mathbf{\mathcal{W}}\in\mathbb{R}^{(m_{1}\times n_{1})\times\cdots\times(m_{d}\times n_{d})}\) by reshaping and order transposing. Then \(\mathbf{\mathcal{W}}\) can be decomposed to TT-format:

\[\mathbf{\mathcal{W}}((i_{1},j_{1}),\cdots,(i_{d},j_{d}))=\mathbf{\mathcal{G}}_{1}(:,i_{1},j_{1},:)\cdots\mathbf{\mathcal{G}}_{d}(:,i_{d},j_{d},:) \tag{2}\]

Here, each TT-core \(\mathbf{\mathcal{G}}_{k}\in\mathbb{R}^{r_{k-1}\times m_{k}\times n_{k}\times r_{k}}\) is a 4-order tensor, which has one dimension more than the standard TT-core in (1) since the output and input dimensions of \(\mathbf{W}\) are indexed separately. Hence, the forward propagation on the TT-FC layer can be expressed in the tensor format as follows (the bias term is ignored here):

\[\mathbf{\mathcal{Y}}(i_{1},\cdots,i_{d})=\sum_{j_{1},\cdots,j_{d}}\mathbf{\mathcal{G}}_{1}(:,i_{1},j_{1},:)\cdots\mathbf{\mathcal{G}}_{d}(:,i_{d},j_{d},:)\mathbf{\mathcal{X}}(j_{1},\cdots,j_{d})\]

where \(\mathbf{\mathcal{X}}\in\mathbb{R}^{n_{1}\times\cdots\times n_{d}}\) and \(\mathbf{\mathcal{Y}}\in\mathbb{R}^{m_{1}\times\cdots\times m_{d}}\) are the tensorized input and output corresponding to \(\mathbf{x}\) and \(\mathbf{y}\), respectively. The details about the TT-FC layer are introduced in [33]. As the TT-FC layer and the corresponding forward propagation schemes are formulated, the standard stochastic gradient descent (SGD) algorithm can be used to update the TT-cores with the rank set \(\mathbf{r}\), which determines the target compression ratio. The initialization of the TT-cores can be either randomly set or obtained from directly TT-decomposing a pre-trained uncompressed model. A minimal numerical sketch of the TT-FC construction is given below.
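The following NumPy sketch (ours; the index-ordering convention is one possible choice, not prescribed by the original text) rebuilds the full weight matrix from 4-order TT-cores as in (2), runs the forward pass \(\mathbf{y}=\mathbf{W}\mathbf{x}\), and compares the parameter counts.

```python
import numpy as np

def tt_fc_weight(cores):
    """Rebuild the full weight matrix from TT-cores.

    cores[k] has shape (r_{k-1}, m_k, n_k, r_k), with r_0 = r_d = 1,
    following the TT-FC format in Eq. (2).
    """
    W = cores[0]                                  # shape (1, m_1, n_1, r_1)
    for G in cores[1:]:
        W = np.tensordot(W, G, axes=([-1], [0]))  # contract the shared rank bond
    W = W[0, ..., 0]                              # drop boundary ranks r_0 = r_d = 1
    d = len(cores)
    # group all output modes (m_1,...,m_d) before all input modes (n_1,...,n_d)
    W = W.transpose(list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2)))
    M = int(np.prod(W.shape[:d]))
    N = int(np.prod(W.shape[d:]))
    return W.reshape(M, N)

rng = np.random.default_rng(0)
m, n, r = (4, 4, 4), (4, 4, 4), [1, 3, 3, 1]
cores = [rng.standard_normal((r[k], m[k], n[k], r[k + 1])) for k in range(3)]

W = tt_fc_weight(cores)                           # dense weight, shape (64, 64)
x = rng.standard_normal(W.shape[1])
y = W @ x                                         # TT-FC forward pass y = Wx

dense_params = W.size                             # 4096 for the dense matrix
tt_params = sum(G.size for G in cores)            # 240 in TT storage
```

Here the TT-format stores only 240 parameters versus 4,096 for the dense matrix; in practice the contraction is performed core-by-core without ever materializing \(\mathbf{W}\).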
## 3 Proposed Methodology

Consider an \(N\)-layer feedforward neural network with \(N-1\) hidden layers. Particularly, let \(n_{i}\in\mathbb{N}\) be the number of hidden units in the \(i\)-th hidden layer for \(i=1,\ldots,N-1\). Let \(n_{0}\) and \(n_{N}\) be the numbers of units in the input and output layers, respectively. Let \(\mathbf{W}_{i}\in\mathbb{R}^{n_{i}\times n_{i-1}}\) be the weight matrix between the \((i-1)\)-th layer and the \(i\)-th layer for any \(i=1,\ldots,N\). Let \(\mathcal{Z}:=\{(\mathbf{x}_{j},\mathbf{y}_{j})\}_{j=1}^{n}\subset\mathbb{R}^{n_{0}}\times\mathbb{R}^{n_{N}}\) be \(n\) samples, where \(\mathbf{y}_{j}\)'s are the one-hot vectors of labels. Denote \(\mathcal{W}:=\{\mathbf{W}_{i}\}_{i=1}^{N}\), \(\mathbf{X}:=(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n})\in\mathbb{R}^{n_{0}\times n}\) and \(\mathbf{Y}:=(\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{n})\in\mathbb{R}^{n_{N}\times n}\).

### Problem Formulation

As shown in Figure 1, the weight matrix in the \(i\)-th layer, namely, \(\mathbf{W}_{i}\), can be transformed into a tensor \(\mathbf{\mathcal{W}}_{i}\), and the tensor can be further decomposed into TT-format.

Figure 1: The framework of tensor train decomposition-based NN training.

Therefore, the tensor train decomposition-based NN training problem can be formulated as the following empirical risk (i.e., training loss) minimization:

\[\min_{\mathcal{W}}\mathcal{R}_{n}(\Phi(\mathbf{X};\mathcal{W}),\mathbf{Y}),\text{ subject to }\mathbf{\mathcal{W}}_{i}=\text{TTD}(\mathbf{r}_{i})\quad i=1,\ldots,N \tag{3}\]

where \(\mathcal{R}_{n}(\Phi(\mathbf{X};\mathcal{W}),\mathbf{Y}):=\frac{1}{n}\sum_{j=1}^{n}\ell(\Phi(\mathbf{x}_{j};\mathcal{W}),\mathbf{y}_{j})\) with loss function \(\ell:\mathbb{R}^{n_{N}}\times\mathbb{R}^{n_{N}}\rightarrow\mathbb{R}_{+}\cup\{0\}\), and \(\Phi(\mathbf{x}_{j};\mathcal{W})=\sigma_{N}(\mathbf{W}_{N}\sigma_{N-1}(\mathbf{W}_{N-1}\cdots\mathbf{W}_{2}\sigma_{1}(\mathbf{W}_{1}\mathbf{x}_{j})))\) is the neural network model with \(N\) layers. \(\text{TTD}(\mathbf{r}_{i})\) is the tensor train decomposition with rank \(\mathbf{r}_{i}\) in (2) for weight tensor \(\mathbf{\mathcal{W}}_{i}\), and \(\sigma_{i}\) is the activation function of the \(i\)-th layer (generally, \(\sigma_{N}\equiv\text{Id}\) is the identity function). Note that the NN training model (3) is highly nonconvex as the variables are coupled via the NN architecture, which brings many challenges for the design of efficient training algorithms and also for its theoretical analysis. To make Problem (3) more computationally tractable, variable splitting is one of the most commonly used techniques [31, 32]. The main idea of variable splitting is to transform a complicated problem (where the variables are coupled nonlinearly) into a relatively simpler one (where the variables are coupled much more loosely) by introducing some additional variables. Considering general NN architectures, a regularized NN training model is adopted here, which relaxes the original NN training model (3).
Specifically, the variable splitting model is:

\[\begin{split}\min_{\mathcal{W},\mathcal{V}}\mathcal{L}_{0}(\mathcal{W},\mathcal{V})&:=\mathcal{R}_{n}(\mathbf{V}_{N};\mathbf{Y})+\sum_{i=1}^{N}\tau_{i}(\mathbf{W}_{i})+\sum_{i=1}^{N}s_{i}(\mathbf{V}_{i})\\ \text{subject to }&\mathbf{U}_{i}=\mathbf{W}_{i}\mathbf{V}_{i-1},\ \mathbf{V}_{i}=\sigma_{i}(\mathbf{U}_{i}),\ \mathbf{\mathcal{W}}_{i}=\text{TTD}(\mathbf{r}_{i})\quad i=1,\ldots,N,\end{split} \tag{4}\]

where \(\mathcal{R}_{n}(\mathbf{V}_{N};\mathbf{Y}):=\frac{1}{n}\sum_{j=1}^{n}\ell((\mathbf{V}_{N})_{:j},\mathbf{y}_{j})\) denotes the empirical risk, \(\mathcal{V}:=\{\mathbf{V}_{i}\}_{i=1}^{N}\), and \((\mathbf{V}_{N})_{:j}\) is the \(j\)-th column of \(\mathbf{V}_{N}\). In addition, \(\tau_{i}\) and \(s_{i}\) are extended-real-valued, nonnegative functions revealing the priors of the weight variable \(\mathbf{W}_{i}\) and the state variable \(\mathbf{V}_{i}\) (or the constraints on \(\mathbf{W}_{i}\) and \(\mathbf{V}_{i}\)) for each \(i=1,\ldots,N\), and define \(\mathbf{V}_{0}:=\mathbf{X}\). To solve the formulation in (4), the following alternative minimization problem is considered:

\[\begin{split}\min_{\mathcal{W},\mathcal{V},\mathcal{U},\mathcal{G}}\mathcal{L}(\mathcal{W},\mathcal{V},\mathcal{U},\mathcal{G})&:=\mathcal{L}_{0}(\mathcal{W},\mathcal{V})+\frac{\gamma}{2}\sum_{i=1}^{N}\|\mathbf{V}_{i}-\sigma_{i}(\mathbf{U}_{i})\|_{F}^{2}\\ &\quad+\frac{\rho}{2}\sum_{i=1}^{N}\|\mathbf{U}_{i}-\mathbf{W}_{i}\mathbf{V}_{i-1}\|_{F}^{2}+\frac{\tau}{2}\sum_{i=1}^{N}\|\mathbf{\mathcal{W}}_{i}-\text{TTD}(\mathbf{r}_{i})\|_{F}^{2}\end{split} \tag{5}\]

where \(\gamma,\rho,\tau>0\) are hyperparameters for the different regularization terms, \(\mathcal{U}:=\{\mathbf{U}_{i}\}_{i=1}^{N}\), and \(\mathcal{G}:=\{\mathcal{G}_{i}\}_{i=1}^{N}\) is the set of TT-cores \(\mathcal{G}_{i}\) from the \(i\)-th layer. The NN training model (5) is very general, where: (a) \(\ell\) can be the squared, logistic, hinge, cross-entropy, or other commonly used loss functions; (b) \(\sigma_{i}\) can be ReLU, leaky ReLU, sigmoid, linear, polynomial, softplus, or other commonly used activation functions; (c) \(\tau_{i}\) can be the squared \(\ell_{2}\) norm, the \(\ell_{1}\) norm, the elastic net, or the indicator function of some nonempty closed convex set (such as the nonnegative closed half-space or a closed interval \([0,1]\)); (d) \(s_{i}\) can be the \(\ell_{1}\) norm or the indicator function of some convex set with a simple projection. Particularly, if there is no regularizer or constraint on \(\mathbf{W}_{i}\) (or \(\mathbf{V}_{i}\)), then \(\tau_{i}\) (or \(s_{i}\)) can be zero. The network architectures considered in this paper are general and cover various types of NNs, including but not limited to fully (or sparsely) connected MLPs, convolutional neural networks (CNNs), and residual neural networks (ResNets) [34].

**Remark 1** (Advantage of Formulation (5)).: _As mentioned before, an existing TT-format NN is either 1) trained from randomly initialized tensor cores; or 2) trained from a direct decomposition of a pre-trained model. The first strategy does not utilize any information from the high-accuracy uncompressed model, while other model compression methods, e.g., pruning and knowledge distillation, have shown that proper utilization of the pre-trained model is very critical for NN compression._
_For the second strategy, though the knowledge of the pre-trained model is indeed utilized, the pre-trained model generally lacks the low TT-rank property, so after direct low-rank tensor decomposition the approximation error is too significant to be properly recovered even with long re-training. Such inherent limitations of the existing training strategies, consequently, cause significant accuracy loss for the compressed TT-format NN models. To overcome these limitations, the key is to maximally retain the knowledge contained in the uncompressed model, or in other words, to minimize the approximation error after tensor decomposition with given target tensor ranks. In our formulation (5), \(\mathcal{L}_{0}(\mathcal{W},\mathcal{V})\) is the loss function of the uncompressed model, while the regularization term \(\|\boldsymbol{\mathcal{W}}_{i}-\mathrm{TTD}(\boldsymbol{r}_{i})\|_{F}^{2}\) encourages the uncompressed DNN model to gradually exhibit low tensor rank properties._

### Tensor BCD Algorithms

Note that (5) is a nonconvex optimization problem with multi-block variables. BCD is a Gauss-Seidel type method for a minimization problem with multi-block variables that updates all the variables cyclically while fixing the remaining blocks at their last updated values [35]. A tensor BCD (tenBCD) algorithm is developed for solving (5). In this paper, proximal terms are added to some sub-problems arising from the tenBCD algorithm for two major reasons: (1) to practically stabilize the training process; (2) to yield the desired "sufficient decrease" property for theoretical justification. At each iteration \(k\), the tenBCD method with the backward order is considered for the updates of variables, i.e., the variables are updated from the output layer (layer \(N\)) to the input layer (layer 1). For each layer, the variables \(\{\boldsymbol{V}_{i},\boldsymbol{U}_{i},\boldsymbol{W}_{i},\mathcal{G}_{i}\}\) are updated cyclically for Problem (5). Since \(\sigma_{N}\equiv\mathrm{Id}\), special attention is paid to the output layer. The tenBCD algorithm for (5) is summarized in Algorithm 1.

#### 3.2.1 Optimization over \(\boldsymbol{V}_{i}\)

At iteration \(k\), \(\boldsymbol{V}_{N}\) can be updated through the following optimization problem

\[\boldsymbol{V}_{N}^{k}=\operatorname{argmin}_{\boldsymbol{V}_{N}}\left\{s_{N}(\boldsymbol{V}_{N})+\mathcal{R}_{n}(\boldsymbol{V}_{N};\boldsymbol{Y})+\frac{\gamma}{2}\|\boldsymbol{V}_{N}-\boldsymbol{U}_{N}^{k-1}\|_{F}^{2}+\frac{\alpha}{2}\|\boldsymbol{V}_{N}-\boldsymbol{V}_{N}^{k-1}\|_{F}^{2}\right\}, \tag{6}\]

where \(s_{N}(\boldsymbol{V}_{N})+\mathcal{R}_{n}(\boldsymbol{V}_{N};\boldsymbol{Y})\) is regarded as a new proximal function \(\tilde{s}_{N}(\boldsymbol{V}_{N})\). When \(i<N\), \(\boldsymbol{V}_{i}\) can be updated through the following optimization problem

\[\boldsymbol{V}_{i}^{k}=\operatorname{argmin}_{\boldsymbol{V}_{i}}\left\{s_{i}(\boldsymbol{V}_{i})+\frac{\gamma}{2}\|\boldsymbol{V}_{i}-\sigma_{i}(\boldsymbol{U}_{i}^{k-1})\|_{F}^{2}+\frac{\rho}{2}\|\boldsymbol{U}_{i+1}^{k}-\boldsymbol{W}_{i+1}^{k}\boldsymbol{V}_{i}\|_{F}^{2}\right\}. \tag{7}\]

For subproblem (6), \(\frac{\alpha}{2}\|\boldsymbol{V}_{N}-\boldsymbol{V}_{N}^{k-1}\|_{F}^{2}\) is the proximal term, where \(\alpha>0\) is a positive coefficient. The above two problems (6) and (7) are simple proximal updates [35, 36] (or just least squares problems), which usually have closed-form solutions for many commonly used NNs, as the sketch below illustrates.
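For illustration, the following sketch (ours; it assumes \(s_{i}\equiv 0\)) gives the closed-form solution of subproblem (7): the first-order optimality condition is the linear system \((\gamma\mathbf{I}+\rho{\mathbf{W}_{i+1}^{k}}^{\top}\mathbf{W}_{i+1}^{k})\mathbf{V}_{i}=\gamma\sigma_{i}(\mathbf{U}_{i}^{k-1})+\rho{\mathbf{W}_{i+1}^{k}}^{\top}\mathbf{U}_{i+1}^{k}\).

```python
import numpy as np

def update_V(W_next, U_next, sigma_U_prev, gamma, rho):
    """Closed-form minimizer of subproblem (7) assuming s_i = 0.

    Shapes follow Section 3: W_next = W_{i+1}^k (n_{i+1} x n_i),
    U_next = U_{i+1}^k (n_{i+1} x n), sigma_U_prev = sigma_i(U_i^{k-1}) (n_i x n).
    """
    n_i = W_next.shape[1]
    # Normal equations of the strongly convex quadratic objective in (7)
    A = gamma * np.eye(n_i) + rho * W_next.T @ W_next
    B = gamma * sigma_U_prev + rho * W_next.T @ U_next
    return np.linalg.solve(A, B)
```

The \(\boldsymbol{V}_{N}\)-update (6) with the squared loss is analogous, with \(\mathcal{R}_{n}\) contributing one more quadratic term.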
Some typical examples leading to closed-form solutions include: (a) \(s_{i}\) is \(0\) (i.e., no regularization), or the squared \(\ell_{2}\) norm, or the indicator function of a nonempty closed convex set with a simple projection like the nonnegative closed half-space and the closed interval \([0,1]\); (b) the loss function \(\ell\) is the squared loss or hinge loss.1

Footnote 1: The \(\mathbf{V}_{N}\)-update with hinge loss and other smooth losses is provided in Appendix A.1.

#### 3.2.2 Optimization over \(\mathbf{U}_{i}\)

At iteration \(k\), \(\mathbf{U}_{N}\) can be updated through the following optimization problem

\[\mathbf{U}_{N}^{k}=\operatorname*{argmin}_{\mathbf{U}_{N}}\left\{\frac{\gamma}{2}\|\mathbf{V}_{N}^{k}-\mathbf{U}_{N}\|_{F}^{2}+\frac{\rho}{2}\|\mathbf{U}_{N}-\mathbf{W}_{N}^{k-1}\mathbf{V}_{N-1}^{k-1}\|_{F}^{2}\right\} \tag{8}\]

For \(i<N\), \(\mathbf{U}_{i}\) can be updated through the following optimization problem

\[\mathbf{U}_{i}^{k}=\operatorname*{argmin}_{\mathbf{U}_{i}}\left\{\frac{\gamma}{2}\|\mathbf{V}_{i}^{k}-\sigma_{i}(\mathbf{U}_{i})\|_{F}^{2}+\frac{\rho}{2}\|\mathbf{U}_{i}-\mathbf{W}_{i}^{k-1}\mathbf{V}_{i-1}^{k-1}\|_{F}^{2}+\frac{\alpha}{2}\|\mathbf{U}_{i}-\mathbf{U}_{i}^{k-1}\|_{F}^{2}\right\} \tag{9}\]

For subproblem (9), \(\frac{\alpha}{2}\|\mathbf{U}_{i}-\mathbf{U}_{i}^{k-1}\|_{F}^{2}\) is the proximal term. The subproblem (8) is a least-squares problem whose closed-form solution can be derived. The subproblem (9) is a nonlinear and nonsmooth optimization problem when \(\sigma_{i}\) is ReLU or leaky ReLU. Accordingly, the closed-form solution of subproblem (9) is provided in Appendix A.2.

#### 3.2.3 Optimization over \(\mathbf{W}_{i}\)

At iteration \(k\), \(\mathbf{W}_{i},i=1,\ldots,N\) can be updated through the following optimization problem

\[\mathbf{W}_{i}^{k}=\operatorname*{argmin}_{\mathbf{W}_{i}}\left\{\tau_{i}(\mathbf{W}_{i})+\frac{\rho}{2}\|\mathbf{U}_{i}^{k}-\mathbf{W}_{i}\mathbf{V}_{i-1}^{k-1}\|_{F}^{2}+\frac{\tau}{2}\|\mathbf{\mathcal{W}}_{i}-\operatorname{TTD}(\mathbf{r}_{i})\|_{F}^{2}\right\}, \tag{10}\]

The closed-form solution of the above optimization problem can be obtained when \(\tau_{i}\) is \(0\) (i.e., no regularization), or the squared \(\ell_{2}\) norm (i.e., weight decay), or the indicator function of a nonempty closed convex set with a simple projection like the nonnegative closed half-space and the closed interval \([0,1]\).

#### 3.2.4 Optimization over \(\mathcal{G}_{i}\)

At iteration \(k\), \(\mathcal{G}_{i},i=1,\ldots,N\) can be updated through the following optimization problem

\[\mathcal{G}_{i}^{k}=\operatorname*{argmin}_{\mathcal{G}_{i}}\left\{\frac{\tau}{2}\|\mathbf{\mathcal{W}}_{i}^{k}-\operatorname{TTD}(\mathbf{r}_{i})\|_{F}^{2}+\frac{\alpha}{2}\|\mathcal{G}_{i}-\mathcal{G}_{i}^{k-1}\|_{F}^{2}\right\} \tag{11}\]

where \(\frac{\alpha}{2}\|\mathcal{G}_{i}-\mathcal{G}_{i}^{k-1}\|_{F}^{2}\) is the proximal term. This subproblem is solved using the TensorLy package [37]; a simplified sketch is given below.
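For concreteness, the following sketch (ours) implements the closed-form solution of (8) and a simplified variant of the core update (11). The variant drops the proximal term (the \(\alpha\to 0\) limit), in which case the minimizer is the best TT fit of the given ranks to the current weight tensor, computed here with TensorLy's TT-SVD.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tensor_train

def update_U_N(V_N, W_N, V_prev, gamma, rho):
    """Closed-form minimizer of (8): a strongly convex quadratic in U_N,
    solved entrywise as U_N = (gamma*V_N + rho*W_N V_{N-1}) / (gamma + rho)."""
    return (gamma * V_N + rho * W_N @ V_prev) / (gamma + rho)

def update_cores(W_matrix, tensor_shape, ranks):
    """Simplified G_i-update for (11) with the proximal term dropped:
    tensorize the weight matrix and fit TT-cores of the given ranks."""
    W_tensor = tl.tensor(W_matrix.reshape(tensor_shape))
    tt = tensor_train(W_tensor, rank=ranks)  # TT-SVD fit with TT-ranks r_i
    # tt_to_tensor(tt) is TTD(r_i) evaluated at the new cores, as used in
    # the penalty (tau/2) * ||W_i - TTD(r_i)||_F^2 of problem (5)
    return tt, tl.tt_to_tensor(tt)
```

With \(\alpha>0\), the exact update of (11) additionally pulls the cores toward \(\mathcal{G}_{i}^{k-1}\); the TT-SVD fit above then serves as a natural initializer for that proximal subproblem.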
```
0: Sample \(\mathbf{X}\in\mathbb{R}^{n_{0}\times n}\) and \(\mathbf{Y}\in\mathbb{R}^{n_{N}\times n}\), \(\gamma,\rho,\tau,\alpha>0\)
   Initialization: \(\{\mathbf{W}_{i}^{0},\mathbf{V}_{i}^{0},\mathbf{U}_{i}^{0},\mathcal{G}_{i}^{0}\}_{i=1}^{N}\), \(\mathbf{V}_{0}^{k}\equiv\mathbf{V}_{0}:=\mathbf{X}\)
1: for \(k=1,\ldots\) do
2:   Get \(\mathbf{V}_{N}^{k}\) by solving (6)
3:   Get \(\mathbf{U}_{N}^{k}\) by solving (8)
4:   Get \(\mathbf{W}_{N}^{k}\) by solving (10)
5:   Get \(\mathcal{G}_{N}^{k}\) by solving (11)
6:   for \(i=N-1,\ldots,1\) do
7:     Get \(\mathbf{V}_{i}^{k}\) by solving (7)
8:     Get \(\mathbf{U}_{i}^{k}\) by solving (9)
9:     Get \(\mathbf{W}_{i}^{k}\) by solving (10)
10:    Get \(\mathcal{G}_{i}^{k}\) by solving (11)
11:  end for
12: end for
13: Output: \(\{\mathcal{G}_{i}\}_{i=1}^{N}\)
```

**Algorithm 1** tenBCD Algorithm

### Global Convergence Analysis of tenBCD

In this section, the global convergence of Algorithm 1 for Problem (5) is established. First, let \(h:\mathbb{R}^{p}\rightarrow\mathbb{R}\cup\{+\infty\}\) be an extended-real-valued function; its graph is defined by \(\mathrm{Graph}(h):=\{(\mathbf{x},y)\in\mathbb{R}^{p}\times\mathbb{R}:y=h(\mathbf{x})\}\), and its domain by \(\operatorname{dom}(h):=\{\mathbf{x}\in\mathbb{R}^{p}:h(\mathbf{x})<+\infty\}\). The subdifferential of a function is defined as follows.

**Definition 1** (Subdifferentials [38, 39]).: _Assume that \(f:\mathbb{R}^{p}\to(-\infty,+\infty]\) is a proper and lower semicontinuous function._

1. _The domain of \(f\) is defined and denoted by \(\mathrm{dom}f:=\{\mathbf{x}\in\mathbb{R}^{p}:f(\mathbf{x})<+\infty\}\)._
2. _For a given \(\mathbf{x}\in\mathrm{dom}f\), the Fréchet subdifferential of \(f\) at \(\mathbf{x}\), written \(\hat{\partial}f(\mathbf{x})\), is the set of all vectors \(\mathbf{u}\in\mathbb{R}^{p}\) that satisfy_ \[\liminf_{\mathbf{y}\to\mathbf{x},\,\mathbf{y}\neq\mathbf{x}}\frac{f(\mathbf{y})-f(\mathbf{x})-\langle\mathbf{u},\mathbf{y}-\mathbf{x}\rangle}{\|\mathbf{y}-\mathbf{x}\|}\geq 0.\]
3. _The limiting-subdifferential, or simply the subdifferential, of \(f\) at \(\mathbf{x}\), written \(\partial f(\mathbf{x})\), is defined through the following closure process_ \[\partial f(\mathbf{x}):=\{\mathbf{u}\in\mathbb{R}^{p}:\exists\,\mathbf{x}^{k}\to\mathbf{x},f(\mathbf{x}^{k})\to f(\mathbf{x})\ \text{and}\ \mathbf{u}^{k}\in\hat{\partial}f(\mathbf{x}^{k})\to\mathbf{u}\ \text{as}\ k\to\infty\}.\]

We are now ready to introduce our first main lemma, on the sufficient decrease property of the iterative sequence \(\{\mathcal{P}^{k}:=(\{\mathbf{W}_{i}^{k}\}_{i=1}^{N},\{\mathbf{V}_{i}^{k}\}_{i=1}^{N},\{\mathbf{U}_{i}^{k}\}_{i=1}^{N},\{\mathcal{G}_{i}^{k}\}_{i=1}^{N})\}_{k\in\mathbb{N}}\) generated by Algorithm 1.

**Lemma 2** (Sufficient Decrease Property).: _Given \(\alpha,\gamma,\rho,\tau>0\), let \(\left\{\mathcal{P}^{k}\right\}_{k\in\mathbb{N}}\) be the sequence generated by the tenBCD algorithm (Algorithm 1); then the sequence satisfies_

\[\mathcal{L}(\mathcal{P}^{k})\leq\mathcal{L}(\mathcal{P}^{k-1})-\lambda\|\mathcal{P}^{k}-\mathcal{P}^{k-1}\|_{F}^{2}. \tag{12}\]

_For the case that \(\mathbf{V}_{N}\) is updated via the proximal strategy, \(\lambda:=\min\left\{\frac{\alpha}{2},\frac{\gamma+\rho}{2},\frac{\tau}{2}\right\}\)._
_For the case that \(\mathbf{V}_{N}\) is updated via the prox-linear strategy, \(\lambda:=\min\left\{\frac{\alpha}{2},\frac{\gamma+\rho}{2},\frac{\tau}{2},\alpha+\frac{\gamma-L_{R}}{2}\right\}\), where \(\nabla\mathcal{R}_{n}\) is Lipschitz continuous with a Lipschitz constant \(L_{R}\) and \(\alpha>\max\{0,\frac{L_{R}-\gamma}{2}\}\)._

Proof.: The inequality (12) can be developed by considering the descent quantity along the update of each block variable, i.e., \(\{\mathbf{V}_{i}\}_{i=1}^{N}\), \(\{\mathbf{U}_{i}\}_{i=1}^{N}\), \(\{\mathbf{W}_{i}\}_{i=1}^{N}\), and \(\{\mathcal{G}_{i}\}_{i=1}^{N}\). To begin with, the following notations are introduced. Specifically, \(\mathbf{W}_{<i}:=\left(\mathbf{W}_{1},\mathbf{W}_{2},\ldots,\mathbf{W}_{i-1}\right)\), \(\mathbf{W}_{>i}:=\left(\mathbf{W}_{i+1},\mathbf{W}_{i+2},\ldots,\mathbf{W}_{N}\right)\), and \(\mathbf{V}_{<i},\mathbf{V}_{>i},\mathbf{U}_{<i},\mathbf{U}_{>i},\mathcal{G}_{<i},\mathcal{G}_{>i}\) are defined similarly. We consider each block separately.

**Optimization over \(\mathbf{V}_{i}\)**

\(\mathbf{V}_{N}^{k}\)-block: at iteration \(k\), there are two ways to update the variable. (1) Proximal update with closed-form solution: the following inequality can be derived

\[\begin{split}&\mathcal{L}\left(\{\mathbf{W}_{i}^{k-1}\}_{i=1}^{N},\mathbf{V}_{i<N}^{k-1},\mathbf{V}_{N}^{k},\{\mathbf{U}_{i}^{k-1}\}_{i=1}^{N},\{\mathcal{G}_{i}^{k-1}\}_{i=1}^{N}\right)\\ \leq&\mathcal{L}\left(\{\mathbf{W}_{i}^{k-1}\}_{i=1}^{N},\mathbf{V}_{i<N}^{k-1},\mathbf{V}_{N}^{k-1},\{\mathbf{U}_{i}^{k-1}\}_{i=1}^{N},\{\mathcal{G}_{i}^{k-1}\}_{i=1}^{N}\right)-\frac{\alpha}{2}\|\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1}\|_{F}^{2}.\end{split} \tag{13}\]

The above inequality (13) is due to the fact that \(\mathbf{V}_{N}^{k}\) is the optimal solution for subproblem (6). (2) Prox-linear case: let \(h^{k}(\mathbf{V}_{N}):=s_{N}(\mathbf{V}_{N})+\mathcal{R}_{n}(\mathbf{V}_{N};\mathbf{Y})+\frac{\gamma}{2}\|\mathbf{V}_{N}-\mathbf{U}_{N}^{k-1}\|_{F}^{2}\) and \(\bar{h}^{k}(\mathbf{V}_{N}):=s_{N}(\mathbf{V}_{N})+\mathcal{R}_{n}(\mathbf{V}_{N}^{k-1};\mathbf{Y})+\langle\nabla\mathcal{R}_{n}(\mathbf{V}_{N}^{k-1};\mathbf{Y}),\mathbf{V}_{N}-\mathbf{V}_{N}^{k-1}\rangle+\frac{\alpha}{2}\|\mathbf{V}_{N}-\mathbf{V}_{N}^{k-1}\|_{F}^{2}+\frac{\gamma}{2}\|\mathbf{V}_{N}-\mathbf{U}_{N}^{k-1}\|_{F}^{2}\). By the optimality of \(\mathbf{V}_{N}^{k}\) and the strong convexity2 of \(\bar{h}^{k}(\mathbf{V}_{N})\) with modulus at least \(\alpha+\gamma\), the following holds

Footnote 2: The function \(h\) is called a strongly convex function with parameter \(\gamma>0\) if \(h(u)\geq h(v)+\langle\nabla h(v),u-v\rangle+\frac{\gamma}{2}\|u-v\|^{2}\).
\[\bar{h}^{k}(\mathbf{V}_{N}^{k})\leq\bar{h}^{k}(\mathbf{V}_{N}^{k-1})-\frac{\alpha+\gamma}{2}\|\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1}\|_{F}^{2}, \tag{14}\]

which implies

\[\begin{split} h^{k}(\mathbf{V}_{N}^{k})&\leq h^{k}(\mathbf{V}_{N}^{k-1})+\mathcal{R}_{n}(\mathbf{V}_{N}^{k};\mathbf{Y})-\mathcal{R}_{n}(\mathbf{V}_{N}^{k-1};\mathbf{Y})-\langle\nabla\mathcal{R}_{n}(\mathbf{V}_{N}^{k-1};\mathbf{Y}),\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1}\rangle\\ &\quad-(\alpha+\frac{\gamma}{2})\|\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1}\|_{F}^{2}\end{split} \tag{15a}\]
\[\leq h^{k}(\mathbf{V}_{N}^{k-1})-(\alpha+\frac{\gamma-L_{R}}{2})\|\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1}\|_{F}^{2}, \tag{15b}\]

where inequality (15a) is due to inequality (14) together with the relationships between \(h^{k}\) and \(\bar{h}^{k}\) at \(\mathbf{V}_{N}^{k-1}\) and at \(\mathbf{V}_{N}^{k}\). Inequality (15b) holds by the \(L_{R}\)-Lipschitz continuity of \(\nabla\mathcal{R}_{n}\), i.e., the following inequality from [40]:

\[\mathcal{R}_{n}(\mathbf{V}_{N}^{k};\mathbf{Y})\leq\mathcal{R}_{n}(\mathbf{V}_{N}^{k-1};\mathbf{Y})+\langle\nabla\mathcal{R}_{n}(\mathbf{V}_{N}^{k-1};\mathbf{Y}),\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1}\rangle+\frac{L_{R}}{2}\|\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1}\|_{F}^{2}.\]

According to the relationship between \(h^{k}(\mathbf{V}_{N})\) and \(\mathcal{L}\left(\{\mathbf{W}_{i}^{k-1}\}_{i=1}^{N},\mathbf{V}_{i<N}^{k-1},\mathbf{V}_{N},\{\mathbf{U}_{i}^{k-1}\}_{i=1}^{N},\{\mathcal{G}_{i}^{k-1}\}_{i=1}^{N}\right)\), and the inequality (15),

\[\begin{split}&\mathcal{L}\left(\{\mathbf{W}_{i}^{k-1}\}_{i=1}^{N},\mathbf{V}_{i<N}^{k-1},\mathbf{V}_{N}^{k},\{\mathbf{U}_{i}^{k-1}\}_{i=1}^{N},\{\mathcal{G}_{i}^{k-1}\}_{i=1}^{N}\right)\\ \leq&\mathcal{L}\left(\{\mathbf{W}_{i}^{k-1}\}_{i=1}^{N},\mathbf{V}_{i<N}^{k-1},\mathbf{V}_{N}^{k-1},\{\mathbf{U}_{i}^{k-1}\}_{i=1}^{N},\{\mathcal{G}_{i}^{k-1}\}_{i=1}^{N}\right)-(\alpha+\frac{\gamma-L_{R}}{2})\|\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1}\|_{F}^{2}.\end{split} \tag{16}\]

\(\mathbf{V}_{i}^{k}\)-block (\(i<N\)): \(\mathbf{V}_{i}^{k}\) is updated according to the following

\[\mathbf{V}_{i}^{k}\leftarrow\operatorname*{argmin}_{\mathbf{V}_{i}}\left\{s_{i}(\mathbf{V}_{i})+\frac{\gamma}{2}\|\mathbf{V}_{i}-\sigma_{i}(\mathbf{U}_{i}^{k-1})\|_{F}^{2}+\frac{\rho}{2}\|\mathbf{U}_{i+1}^{k}-\mathbf{W}_{i+1}^{k}\mathbf{V}_{i}\|_{F}^{2}\right\}.\]

Let \(h^{k}(\mathbf{V}_{i})=s_{i}(\mathbf{V}_{i})+\frac{\gamma}{2}\|\mathbf{V}_{i}-\sigma_{i}(\mathbf{U}_{i}^{k-1})\|_{F}^{2}+\frac{\rho}{2}\|\mathbf{U}_{i+1}^{k}-\mathbf{W}_{i+1}^{k}\mathbf{V}_{i}\|_{F}^{2}\). By the convexity of \(s_{i}\), the function \(h^{k}(\mathbf{V}_{i})\) is strongly convex with modulus no less than \(\gamma\). By the optimality of \(\mathbf{V}_{i}^{k}\), the following holds

\[h^{k}(\mathbf{V}_{i}^{k})\leq h^{k}(\mathbf{V}_{i}^{k-1})-\frac{\gamma}{2}\|\mathbf{V}_{i}^{k}-\mathbf{V}_{i}^{k-1}\|_{F}^{2}. \tag{17}\]
Based on the inequality (17), it yields

\[\begin{split}&\mathcal{L}(\mathbf{W}_{\leq i}^{k-1},\mathbf{W}_{>i}^{k},\mathbf{V}_{<i}^{k-1},\mathbf{V}_{i}^{k},\mathbf{V}_{>i}^{k},\mathbf{U}_{\leq i}^{k-1},\mathbf{U}_{>i}^{k},\mathcal{G}_{\leq i}^{k-1},\mathcal{G}_{>i}^{k})\\ \leq&\mathcal{L}(\mathbf{W}_{\leq i}^{k-1},\mathbf{W}_{>i}^{k},\mathbf{V}_{<i}^{k-1},\mathbf{V}_{i}^{k-1},\mathbf{V}_{>i}^{k},\mathbf{U}_{\leq i}^{k-1},\mathbf{U}_{>i}^{k},\mathcal{G}_{\leq i}^{k-1},\mathcal{G}_{>i}^{k})-\frac{\gamma}{2}\|\mathbf{V}_{i}^{k}-\mathbf{V}_{i}^{k-1}\|_{F}^{2}\end{split} \tag{18}\]

for \(i=1,\ldots,N-1\), where

\[\begin{split} h^{k}(\mathbf{V}_{i}^{k})-h^{k}(\mathbf{V}_{i}^{k-1})&=\mathcal{L}(\mathbf{W}_{\leq i}^{k-1},\mathbf{W}_{>i}^{k},\mathbf{V}_{<i}^{k-1},\mathbf{V}_{i}^{k},\mathbf{V}_{>i}^{k},\mathbf{U}_{\leq i}^{k-1},\mathbf{U}_{>i}^{k},\mathcal{G}_{\leq i}^{k-1},\mathcal{G}_{>i}^{k})\\ &\quad-\mathcal{L}(\mathbf{W}_{\leq i}^{k-1},\mathbf{W}_{>i}^{k},\mathbf{V}_{<i}^{k-1},\mathbf{V}_{i}^{k-1},\mathbf{V}_{>i}^{k},\mathbf{U}_{\leq i}^{k-1},\mathbf{U}_{>i}^{k},\mathcal{G}_{\leq i}^{k-1},\mathcal{G}_{>i}^{k}).\end{split}\]

**Optimization over \(\mathbf{U}_{i}\)**

\(\mathbf{U}_{N}^{k}\)-block: similar to the inequality (18), the descent quantity is established as follows

\[\begin{split}&\mathcal{L}(\mathbf{W}_{\leq N}^{k-1},\mathbf{V}_{<N}^{k-1},\mathbf{V}_{N}^{k},\mathbf{U}_{<N}^{k-1},\mathbf{U}_{N}^{k},\mathcal{G}_{\leq N}^{k-1})\\ \leq&\mathcal{L}(\mathbf{W}_{\leq N}^{k-1},\mathbf{V}_{<N}^{k-1},\mathbf{V}_{N}^{k},\mathbf{U}_{<N}^{k-1},\mathbf{U}_{N}^{k-1},\mathcal{G}_{\leq N}^{k-1})-\frac{\gamma+\rho}{2}\|\mathbf{U}_{N}^{k}-\mathbf{U}_{N}^{k-1}\|_{F}^{2},\end{split} \tag{19}\]

where the above inequality holds because the objective function in subproblem (8) is strongly convex with modulus at least \(\gamma+\rho\).

\(\mathbf{U}_{i}^{k}\)-block (\(i<N\)): the following can be obtained

\[\begin{split}&\mathcal{L}(\mathbf{W}_{\leq i}^{k-1},\mathbf{W}_{>i}^{k},\mathbf{V}_{<i}^{k-1},\mathbf{V}_{\geq i}^{k},\mathbf{U}_{<i}^{k-1},\mathbf{U}_{i}^{k},\mathbf{U}_{>i}^{k},\mathcal{G}_{\leq i}^{k-1},\mathcal{G}_{>i}^{k})\\ \leq&\mathcal{L}(\mathbf{W}_{\leq i}^{k-1},\mathbf{W}_{>i}^{k},\mathbf{V}_{<i}^{k-1},\mathbf{V}_{\geq i}^{k},\mathbf{U}_{<i}^{k-1},\mathbf{U}_{i}^{k-1},\mathbf{U}_{>i}^{k},\mathcal{G}_{\leq i}^{k-1},\mathcal{G}_{>i}^{k})-\frac{\alpha}{2}\|\mathbf{U}_{i}^{k}-\mathbf{U}_{i}^{k-1}\|_{F}^{2}\end{split} \tag{20}\]

for \(i=1,\ldots,N-1\), since \(\mathbf{U}_{i}^{k}\) is the optimal solution for subproblem (9).

**Optimization over \(\mathbf{W}_{i}\)**

\(\mathbf{W}_{i}^{k}\)-block (\(i\leq N\)): \(\mathbf{W}_{i}^{k}\) is updated according to the following

\[\mathbf{W}_{i}^{k}\leftarrow\operatorname*{argmin}_{\mathbf{W}_{i}}\left\{\tau_{i}(\mathbf{W}_{i})+\frac{\rho}{2}\|\mathbf{U}_{i}^{k}-\mathbf{W}_{i}\mathbf{V}_{i-1}^{k-1}\|_{F}^{2}+\frac{\tau}{2}\|\mathbf{\mathcal{W}}_{i}-\operatorname{TTD}(\mathbf{r}_{i})\|_{F}^{2}\right\},\]

where \(h^{k}(\mathbf{W}_{i})=\tau_{i}(\mathbf{W}_{i})+\frac{\rho}{2}\|\mathbf{U}_{i}^{k}-\mathbf{W}_{i}\mathbf{V}_{i-1}^{k-1}\|_{F}^{2}+\frac{\tau}{2}\|\mathbf{\mathcal{W}}_{i}-\operatorname{TTD}(\mathbf{r}_{i})\|_{F}^{2}\) is strongly convex with modulus at least \(\tau\).
Accordingly, the following holds

\[\begin{split}&\mathcal{L}(\mathbf{W}_{<i}^{k-1},\mathbf{W}_{i}^{k},\mathbf{W}_{>i}^{k},\mathbf{V}_{<i}^{k-1},\mathbf{V}_{\geq i}^{k},\mathbf{U}_{<i}^{k-1},\mathbf{U}_{\geq i}^{k},\mathcal{G}_{\leq i}^{k-1},\mathcal{G}_{>i}^{k})\\ \leq&\mathcal{L}(\mathbf{W}_{<i}^{k-1},\mathbf{W}_{i}^{k-1},\mathbf{W}_{>i}^{k},\mathbf{V}_{<i}^{k-1},\mathbf{V}_{\geq i}^{k},\mathbf{U}_{<i}^{k-1},\mathbf{U}_{\geq i}^{k},\mathcal{G}_{\leq i}^{k-1},\mathcal{G}_{>i}^{k})-\frac{\tau}{2}\|\mathbf{W}_{i}^{k}-\mathbf{W}_{i}^{k-1}\|_{F}^{2},\end{split} \tag{21}\]

which is due to the relationship between \(h^{k}(\mathbf{W}_{i})\) and \(\mathcal{L}(\mathbf{W}_{<i}^{k-1},\mathbf{W}_{i},\mathbf{W}_{>i}^{k},\mathbf{V}_{<i}^{k-1},\mathbf{V}_{\geq i}^{k},\mathbf{U}_{<i}^{k-1},\mathbf{U}_{\geq i}^{k},\mathcal{G}_{\leq i}^{k-1},\mathcal{G}_{>i}^{k})\).

**Optimization over \(\mathcal{G}_{i}\)**

\(\mathcal{G}_{i}\)-block (\(i\leq N\)): the descent quantity for \(\mathcal{G}_{i}\) can be derived as follows

\[\begin{split}&\mathcal{L}(\mathbf{W}_{<i}^{k-1},\mathbf{W}_{\geq i}^{k},\mathbf{V}_{<i}^{k-1},\mathbf{V}_{\geq i}^{k},\mathbf{U}_{<i}^{k-1},\mathbf{U}_{\geq i}^{k},\mathcal{G}_{<i}^{k-1},\mathcal{G}_{i}^{k},\mathcal{G}_{>i}^{k})\\ \leq&\mathcal{L}(\mathbf{W}_{<i}^{k-1},\mathbf{W}_{\geq i}^{k},\mathbf{V}_{<i}^{k-1},\mathbf{V}_{\geq i}^{k},\mathbf{U}_{<i}^{k-1},\mathbf{U}_{\geq i}^{k},\mathcal{G}_{<i}^{k-1},\mathcal{G}_{i}^{k-1},\mathcal{G}_{>i}^{k})-\frac{\alpha}{2}\|\mathcal{G}_{i}^{k}-\mathcal{G}_{i}^{k-1}\|_{F}^{2},\end{split} \tag{22}\]

where the above inequality (22) is due to the fact that \(\mathcal{G}_{i}^{k}\) is the optimal solution for subproblem (11). By summing up inequalities (13) (or (16)), (18), (19), (20), (21), and (22), it yields

\[\mathcal{L}(\mathcal{P}^{k})\leq\mathcal{L}(\mathcal{P}^{k-1})-\lambda\|\mathcal{P}^{k}-\mathcal{P}^{k-1}\|_{F}^{2},\]

where \(\lambda:=\min\left\{\frac{\alpha}{2},\frac{\gamma+\rho}{2},\frac{\tau}{2}\right\}\) (or \(\lambda:=\min\left\{\frac{\alpha}{2},\frac{\gamma+\rho}{2},\frac{\tau}{2},\alpha+\frac{\gamma-L_{R}}{2}\right\}\)).

From Lemma 2, the Lagrangian sequence \(\left\{\mathcal{L}(\mathcal{P}^{k})\right\}_{k\in\mathbb{N}}\) is monotonically decreasing, and the descent quantity of each iterate can be lower bounded by the discrepancy between the current iterate and its previous iterate. This lemma is crucial for the global convergence of a nonconvex algorithm. It tells us at least the following four important items: (i) \(\{\mathcal{L}(\mathcal{P}^{k})\}_{k\in\mathbb{N}}\) is convergent if \(\mathcal{L}\) is lower bounded; (ii) \(\{\mathcal{P}^{k}\}_{k\in\mathbb{N}}\) itself is bounded if \(\mathcal{L}\) is coercive and \(\mathcal{P}^{0}\) is finite; (iii) \(\{\mathcal{P}^{k}\}_{k\in\mathbb{N}}\) is square summable, i.e., \(\sum_{k=1}^{\infty}\|\mathcal{P}^{k}-\mathcal{P}^{k-1}\|_{F}^{2}<\infty\), implying its asymptotic regularity, i.e., \(\|\mathcal{P}^{k}-\mathcal{P}^{k-1}\|_{F}\to 0\) as \(k\rightarrow\infty\); and (iv) \(\frac{1}{K}\sum_{k=1}^{K}\|\mathcal{P}^{k}-\mathcal{P}^{k-1}\|_{F}^{2}\to 0\) at a rate of \(\mathcal{O}(1/K)\). Leveraging Lemma 2, we can establish the global convergence (i.e., the whole-sequence convergence) of the tenBCD algorithm (Algorithm 1) in NN training settings. In contrast, [41] only establishes the subsequence convergence of SGD in NN training settings.
Such a gap between the subsequence convergence of SGD in [41] and the whole-sequence convergence of the tenBCD algorithm in this paper exists mainly because SGD can only achieve the descent property but not the sufficient descent property. It can be noted from Lemma 2 that no multiconvexity, differentiability, or Lipschitz differentiability assumptions are imposed on the NN training models to yield this lemma, as is required in the literature [42, 35, 43, 36]. Instead, we mainly exploit the proximal strategy for all non-strongly-convex subproblems in Algorithm 1 to establish this lemma. Our second main lemma concerns the subgradient lower bound.

**Lemma 3** (Subgradient Lower Bound).: _Under the same conditions as Lemma 2, for any positive integer \(k\), let \(\mathcal{B}\) be an upper bound of \(\mathcal{P}^{k-1}\) and \(\mathcal{P}^{k}\), let \(L_{\mathcal{B}}\) be a uniform Lipschitz constant of \(\sigma_{i}\) on the bounded set \(\{\mathcal{P}:\|\mathcal{P}\|_{F}\leq\mathcal{B}\}\), and let_

\[\delta:=\max\{\gamma,\alpha+\rho\mathcal{B},\alpha+\gamma L_{\mathcal{B}},2\rho\mathcal{B}+2\rho\mathcal{B}^{2},\alpha+\sqrt{N}\tau\mathcal{B}^{N-1}\}\]

_(or, for the prox-linear case, \(\delta:=\max\{\gamma,L_{R}+\alpha+\rho\mathcal{B},\alpha+\gamma L_{\mathcal{B}},2\rho\mathcal{B}+2\rho\mathcal{B}^{2},\alpha+\sqrt{N}\tau\mathcal{B}^{N-1}\}\)); then for any positive integer \(k\), there holds_

\[\begin{split}\operatorname{dist}(\mathbf{0},\partial\mathcal{L}(\mathcal{P}^{k}))&\leq\delta\sum_{i=1}^{N}\left[\|\mathbf{W}_{i}^{k}-\mathbf{W}_{i}^{k-1}\|_{F}+\|\mathbf{V}_{i}^{k}-\mathbf{V}_{i}^{k-1}\|_{F}+\|\mathbf{U}_{i}^{k}-\mathbf{U}_{i}^{k-1}\|_{F}+\|\mathcal{G}_{i}^{k}-\mathcal{G}_{i}^{k-1}\|_{F}\right]\\ &\leq\bar{\delta}\|\mathcal{P}^{k}-\mathcal{P}^{k-1}\|_{F}\end{split} \tag{23}\]

_where \(\bar{\delta}:=\delta\sqrt{4N}\), \(\operatorname{dist}(\mathbf{0},\mathcal{S}):=\inf_{\mathbf{s}\in\mathcal{S}}\|\mathbf{s}\|_{F}\) for a set \(\mathcal{S}\), and_

\[\partial\mathcal{L}(\mathcal{P}^{k}):=(\{\partial_{\mathbf{W}_{i}}\mathcal{L}\}_{i=1}^{N},\{\partial_{\mathbf{V}_{i}}\mathcal{L}\}_{i=1}^{N},\{\partial_{\mathbf{U}_{i}}\mathcal{L}\}_{i=1}^{N},\{\partial_{\mathcal{G}_{i}}\mathcal{L}\}_{i=1}^{N})(\mathcal{P}^{k}).\]

Proof.: The inequality (23) is established via bounding each term of \(\partial\mathcal{L}(\mathcal{P}^{k})\). Specifically, the following holds

\[\mathbf{0}\in\partial s_{N}(\mathbf{V}_{N}^{k})+\partial\mathcal{R}_{n}(\mathbf{V}_{N}^{k};\mathbf{Y})+\gamma(\mathbf{V}_{N}^{k}-\mathbf{U}_{N}^{k-1})+\alpha(\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1}), \tag{24a}\]
\[\mathbf{0}\in\partial s_{N}(\mathbf{V}_{N}^{k})+\nabla\mathcal{R}_{n}(\mathbf{V}_{N}^{k-1};\mathbf{Y})+\gamma(\mathbf{V}_{N}^{k}-\mathbf{U}_{N}^{k-1})+\alpha(\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1}),\ \text{(prox-linear)} \tag{24b}\]
\[\mathbf{0}=\gamma(\mathbf{U}_{N}^{k}-\mathbf{V}_{N}^{k})+\rho(\mathbf{U}_{N}^{k}-\mathbf{W}_{N}^{k-1}\mathbf{V}_{N-1}^{k-1}), \tag{24c}\]
\[\mathbf{0}\in\partial\tau_{N}(\mathbf{W}_{N}^{k})+\rho(\mathbf{W}_{N}^{k}\mathbf{V}_{N-1}^{k-1}-\mathbf{U}_{N}^{k}){\mathbf{V}_{N-1}^{k-1}}^{\top}+\tau\left(\mathbf{W}_{N}^{k}-\mathrm{TTD}^{k-1}(\mathbf{r}_{N})\right), \tag{24d}\]
\[\mathbf{0}\in\partial\left(\frac{\tau}{2}\|\mathbf{\mathcal{W}}_{N}^{k}-\mathrm{TTD}^{k}(\mathbf{r}_{N})\|_{F}^{2}\right)+\alpha(\mathcal{G}_{N}^{k}-\mathcal{G}_{N}^{k-1}), \tag{24e}\]

where (24a), (24b), (24c), (24d), and (24e) are due to the optimality conditions of the updates in (6), (33), (8), (10), and (11), respectively.
For \(i=N-1,\ldots,1\), the following holds

\[\mathbf{0}\in\partial s_{i}(\mathbf{V}_{i}^{k})+\gamma(\mathbf{V}_{i}^{k}-\sigma_{i}(\mathbf{U}_{i}^{k-1}))+\rho{\mathbf{W}_{i+1}^{k}}^{\top}(\mathbf{W}_{i+1}^{k}\mathbf{V}_{i}^{k}-\mathbf{U}_{i+1}^{k}), \tag{25a}\]
\[\mathbf{0}\in\gamma\left[(\sigma_{i}(\mathbf{U}_{i}^{k})-\mathbf{V}_{i}^{k})\odot\partial\sigma_{i}(\mathbf{U}_{i}^{k})\right]+\rho(\mathbf{U}_{i}^{k}-\mathbf{W}_{i}^{k-1}\mathbf{V}_{i-1}^{k-1})+\alpha(\mathbf{U}_{i}^{k}-\mathbf{U}_{i}^{k-1}), \tag{25b}\]
\[\mathbf{0}\in\partial\tau_{i}(\mathbf{W}_{i}^{k})+\rho(\mathbf{W}_{i}^{k}\mathbf{V}_{i-1}^{k-1}-\mathbf{U}_{i}^{k}){\mathbf{V}_{i-1}^{k-1}}^{\top}+\tau\left(\mathbf{W}_{i}^{k}-\mathrm{TTD}^{k-1}(\mathbf{r}_{i})\right), \tag{25c}\]
\[\mathbf{0}\in\partial\left(\frac{\tau}{2}\|\mathbf{\mathcal{W}}_{i}^{k}-\mathrm{TTD}^{k}(\mathbf{r}_{i})\|_{F}^{2}\right)+\alpha(\mathcal{G}_{i}^{k}-\mathcal{G}_{i}^{k-1}), \tag{25d}\]

where (25a), (25b), (25c), and (25d) are due to the optimality conditions of the updates in (7), (9), (10), and (11), respectively. Here \(\mathbf{V}_{0}^{k}\equiv\mathbf{V}_{0}=\mathbf{X}\) for all \(k\), and \(\odot\) is the Hadamard product. Through the above relationship (24), we have

\[-\alpha(\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1})-\gamma(\mathbf{U}_{N}^{k}-\mathbf{U}_{N}^{k-1})\in\partial s_{N}(\mathbf{V}_{N}^{k})+\partial\mathcal{R}_{n}(\mathbf{V}_{N}^{k};\mathbf{Y})+\gamma(\mathbf{V}_{N}^{k}-\mathbf{U}_{N}^{k})=\partial_{\mathbf{V}_{N}}\mathcal{L}(\mathcal{P}^{k}),\]
\[\Big{(}\nabla\mathcal{R}_{n}(\mathbf{V}_{N}^{k};\mathbf{Y})-\nabla\mathcal{R}_{n}(\mathbf{V}_{N}^{k-1};\mathbf{Y})\Big{)}-\alpha(\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1})-\gamma(\mathbf{U}_{N}^{k}-\mathbf{U}_{N}^{k-1})\in\partial_{\mathbf{V}_{N}}\mathcal{L}(\mathcal{P}^{k}),\ \text{(prox-linear)}\]
\[-\rho(\mathbf{W}_{N}^{k}-\mathbf{W}_{N}^{k-1})\mathbf{V}_{N-1}^{k}-\rho\mathbf{W}_{N}^{k-1}(\mathbf{V}_{N-1}^{k}-\mathbf{V}_{N-1}^{k-1})=\gamma(\mathbf{U}_{N}^{k}-\mathbf{V}_{N}^{k})+\rho(\mathbf{U}_{N}^{k}-\mathbf{W}_{N}^{k}\mathbf{V}_{N-1}^{k})=\partial_{\mathbf{U}_{N}}\mathcal{L}(\mathcal{P}^{k}),\]
\[\rho\mathbf{W}_{N}^{k}[\mathbf{V}_{N-1}^{k}(\mathbf{V}_{N-1}^{k}-\mathbf{V}_{N-1}^{k-1})^{\top}+(\mathbf{V}_{N-1}^{k}-\mathbf{V}_{N-1}^{k-1}){\mathbf{V}_{N-1}^{k-1}}^{\top}]-\rho\mathbf{U}_{N}^{k}(\mathbf{V}_{N-1}^{k}-\mathbf{V}_{N-1}^{k-1})^{\top}+\tau\left(\mathrm{TTD}^{k-1}(\mathbf{r}_{N})-\mathrm{TTD}^{k}(\mathbf{r}_{N})\right)\]
\[\in\partial\tau_{N}(\mathbf{W}_{N}^{k})+\rho(\mathbf{W}_{N}^{k}\mathbf{V}_{N-1}^{k}-\mathbf{U}_{N}^{k}){\mathbf{V}_{N-1}^{k}}^{\top}+\tau\left(\mathbf{W}_{N}^{k}-\mathrm{TTD}^{k}(\mathbf{r}_{N})\right)=\partial_{\mathbf{W}_{N}}\mathcal{L}(\mathcal{P}^{k}),\]
\[-\alpha(\mathcal{G}_{N}^{k}-\mathcal{G}_{N}^{k-1})\in\partial_{\mathcal{G}_{N}}\mathcal{L}(\mathcal{P}^{k}). \tag{26}\]
For \(i=N-1,\ldots,1\), the relationship (25) implies

\[-\gamma(\sigma_{i}(\mathbf{U}_{i}^{k})-\sigma_{i}(\mathbf{U}_{i}^{k-1}))\in\partial s_{i}(\mathbf{V}_{i}^{k})+\gamma(\mathbf{V}_{i}^{k}-\sigma_{i}(\mathbf{U}_{i}^{k}))+\rho{\mathbf{W}_{i+1}^{k}}^{\top}(\mathbf{W}_{i+1}^{k}\mathbf{V}_{i}^{k}-\mathbf{U}_{i+1}^{k})=\partial_{\mathbf{V}_{i}}\mathcal{L}(\mathcal{P}^{k}),\]
\[-\rho\mathbf{W}_{i}^{k-1}(\mathbf{V}_{i-1}^{k}-\mathbf{V}_{i-1}^{k-1})-\rho(\mathbf{W}_{i}^{k}-\mathbf{W}_{i}^{k-1})\mathbf{V}_{i-1}^{k}-\alpha(\mathbf{U}_{i}^{k}-\mathbf{U}_{i}^{k-1})\]
\[\in\gamma\left[(\sigma_{i}(\mathbf{U}_{i}^{k})-\mathbf{V}_{i}^{k})\odot\partial\sigma_{i}(\mathbf{U}_{i}^{k})\right]+\rho(\mathbf{U}_{i}^{k}-\mathbf{W}_{i}^{k}\mathbf{V}_{i-1}^{k})=\partial_{\mathbf{U}_{i}}\mathcal{L}(\mathcal{P}^{k}),\]
\[\rho\mathbf{W}_{i}^{k}[\mathbf{V}_{i-1}^{k}(\mathbf{V}_{i-1}^{k}-\mathbf{V}_{i-1}^{k-1})^{\top}+(\mathbf{V}_{i-1}^{k}-\mathbf{V}_{i-1}^{k-1}){\mathbf{V}_{i-1}^{k-1}}^{\top}]-\rho\mathbf{U}_{i}^{k}(\mathbf{V}_{i-1}^{k}-\mathbf{V}_{i-1}^{k-1})^{\top}+\tau\left(\mathrm{TTD}^{k-1}(\mathbf{r}_{i})-\mathrm{TTD}^{k}(\mathbf{r}_{i})\right)\]
\[\in\partial\tau_{i}(\mathbf{W}_{i}^{k})+\rho(\mathbf{W}_{i}^{k}\mathbf{V}_{i-1}^{k}-\mathbf{U}_{i}^{k}){\mathbf{V}_{i-1}^{k}}^{\top}+\tau\left(\mathbf{W}_{i}^{k}-\mathrm{TTD}^{k}(\mathbf{r}_{i})\right)=\partial_{\mathbf{W}_{i}}\mathcal{L}(\mathcal{P}^{k}),\]
\[-\alpha(\mathcal{G}_{i}^{k}-\mathcal{G}_{i}^{k-1})\in\partial_{\mathcal{G}_{i}}\mathcal{L}(\mathcal{P}^{k}). \tag{27}\]

Based on the above relationships, by the Lipschitz continuity of the activation functions on the bounded set \(\{\mathcal{P}:\|\mathcal{P}\|_{F}\leq\mathcal{B}\}\) and the boundedness assumption on both \(\mathcal{P}^{k-1}\) and \(\mathcal{P}^{k}\), we have

\[\|\xi_{\mathbf{V}_{N}}^{k}\|_{F}\leq\alpha\|\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1}\|_{F}+\gamma\|\mathbf{U}_{N}^{k}-\mathbf{U}_{N}^{k-1}\|_{F},\qquad\xi_{\mathbf{V}_{N}}^{k}\in\partial_{\mathbf{V}_{N}}\mathcal{L}(\mathcal{P}^{k}), \tag{28}\]
\[(\text{or }\|\xi_{\mathbf{V}_{N}}^{k}\|_{F}\leq(L_{R}+\alpha)\|\mathbf{V}_{N}^{k}-\mathbf{V}_{N}^{k-1}\|_{F}+\gamma\|\mathbf{U}_{N}^{k}-\mathbf{U}_{N}^{k-1}\|_{F}\ \text{in the prox-linear case}),\]
\[\|\xi_{\mathbf{U}_{N}}^{k}\|_{F}\leq\rho\mathcal{B}\|\mathbf{W}_{N}^{k}-\mathbf{W}_{N}^{k-1}\|_{F}+\rho\mathcal{B}\|\mathbf{V}_{N-1}^{k}-\mathbf{V}_{N-1}^{k-1}\|_{F},\qquad\xi_{\mathbf{U}_{N}}^{k}\in\partial_{\mathbf{U}_{N}}\mathcal{L}(\mathcal{P}^{k}),\]
\[\|\xi_{\mathbf{W}_{N}}^{k}\|_{F}\leq(2\rho\mathcal{B}^{2}+\rho\mathcal{B})\|\mathbf{V}_{N-1}^{k}-\mathbf{V}_{N-1}^{k-1}\|_{F}+\tau\|\mathrm{TTD}^{k}(\mathbf{r}_{N})-\mathrm{TTD}^{k-1}(\mathbf{r}_{N})\|_{F},\qquad\xi_{\mathbf{W}_{N}}^{k}\in\partial_{\mathbf{W}_{N}}\mathcal{L}(\mathcal{P}^{k}),\]
\[\|\xi_{\mathcal{G}_{N}}^{k}\|_{F}\leq\alpha\|\mathcal{G}_{N}^{k}-\mathcal{G}_{N}^{k-1}\|_{F},\qquad\xi_{\mathcal{G}_{N}}^{k}\in\partial_{\mathcal{G}_{N}}\mathcal{L}(\mathcal{P}^{k}),\]

and for \(i=N-1,\ldots,1\),

\[\|\xi_{\mathbf{V}_{i}}^{k}\|_{F}\leq\gamma L_{\mathcal{B}}\|\mathbf{U}_{i}^{k}-\mathbf{U}_{i}^{k-1}\|_{F},\qquad\xi_{\mathbf{V}_{i}}^{k}\in\partial_{\mathbf{V}_{i}}\mathcal{L}(\mathcal{P}^{k}), \tag{29}\]
\[\|\xi_{\mathbf{U}_{i}}^{k}\|_{F}\leq\rho\mathcal{B}\|\mathbf{V}_{i-1}^{k}-\mathbf{V}_{i-1}^{k-1}\|_{F}+\rho\mathcal{B}\|\mathbf{W}_{i}^{k}-\mathbf{W}_{i}^{k-1}\|_{F}+\alpha\|\mathbf{U}_{i}^{k}-\mathbf{U}_{i}^{k-1}\|_{F},\qquad\xi_{\mathbf{U}_{i}}^{k}\in\partial_{\mathbf{U}_{i}}\mathcal{L}(\mathcal{P}^{k}),\]
\[\|\xi_{\mathbf{W}_{i}}^{k}\|_{F}\leq(2\rho\mathcal{B}^{2}+\rho\mathcal{B})\|\mathbf{V}_{i-1}^{k}-\mathbf{V}_{i-1}^{k-1}\|_{F}+\tau\|\mathrm{TTD}^{k}(\mathbf{r}_{i})-\mathrm{TTD}^{k-1}(\mathbf{r}_{i})\|_{F},\qquad\xi_{\mathbf{W}_{i}}^{k}\in\partial_{\mathbf{W}_{i}}\mathcal{L}(\mathcal{P}^{k}),\]
\[\|\xi_{\mathcal{G}_{i}}^{k}\|_{F}\leq\alpha\|\mathcal{G}_{i}^{k}-\mathcal{G}_{i}^{k-1}\|_{F},\qquad\xi_{\mathcal{G}_{i}}^{k}\in\partial_{\mathcal{G}_{i}}\mathcal{L}(\mathcal{P}^{k}).\]
Bounding \(\|\mathrm{TTD}^{k}(\mathbf{r}_{i})-\mathrm{TTD}^{k-1}(\mathbf{r}_{i})\|_{F}\) in terms of the core differences \(\|\mathcal{G}_{i}^{k}-\mathcal{G}_{i}^{k-1}\|_{F}\) on the bounded set and combining the above estimates yields inequality (23) with

\[\delta:=\max\{\gamma,\alpha+\rho\mathcal{B},\alpha+\gamma L_{\mathcal{B}},2\rho\mathcal{B}+2\rho\mathcal{B}^{2},\alpha+\sqrt{N}\tau\mathcal{B}^{N-1}\}\]

(or, for the prox-linear case, \(\delta:=\max\{\gamma,L_{R}+\alpha+\rho\mathcal{B},\alpha+\gamma L_{\mathcal{B}},2\rho\mathcal{B}+2\rho\mathcal{B}^{2},\alpha+\sqrt{N}\tau\mathcal{B}^{N-1}\}\)).

**Definition 4** (Critical point [38, 39]).: _A necessary condition for \(\mathbf{x}\) to be a minimizer of a proper and lower semicontinuous (PLSC) function \(f\) is that_

\[\mathbf{0}\in\partial f(\mathbf{x}). \tag{31}\]

_A point that satisfies (31) is called limiting-critical or simply critical._

**Definition 5** (Global convergence [44, 45]).: _An iterative algorithm for solving an optimization problem over a set \(X\) is said to be **globally convergent** if, for any starting point \(\mathbf{x}_{0}\in X\), the sequence generated by the algorithm always has an accumulation point that is a critical point._

To build the global convergence of the iterative sequence \(\{\mathcal{P}^{k}\}_{k\in\mathbb{N}}\) from Algorithm 1, the function \(\mathcal{L}(\mathcal{W},\mathcal{V},\mathcal{U},\mathcal{G})\) needs to have the Kurdyka-Łojasiewicz (KL) property, defined as follows.

**Definition 6** (KL property [35, 36]).: _A real function \(f:\mathbb{R}^{p}\to(-\infty,+\infty]\) has the Kurdyka-Łojasiewicz (KL) property if, for any point \(\bar{\mathbf{u}}\in\mathbb{R}^{p}\), in a neighborhood \(N(\bar{\mathbf{u}},\sigma)\), there exists a desingularizing function \(\phi(s)=cs^{1-\theta}\) for some \(c>0\) and \(\theta\in[0,1)\) such that_

\[\phi^{\prime}(|f(\mathbf{u})-f(\bar{\mathbf{u}})|)\operatorname{dist}(\mathbf{0},\partial f(\mathbf{u}))\geq 1 \tag{32}\]

_for any \(\mathbf{u}\in N(\bar{\mathbf{u}},\sigma)\) with \(f(\mathbf{u})\neq f(\bar{\mathbf{u}})\)._

Real analytic and semi-algebraic functions, which are related to the KL property, are introduced below.

**Definition 7** (Real analytic [46]).: _A function \(h\) with domain an open set \(U\subset\mathbb{R}\) and range the set of either all real or complex numbers is said to be real analytic at \(u\) if the function \(h\) may be represented by a convergent power series on some interval of positive radius centered at \(u\), i.e., \(h(x)=\sum_{j=0}^{\infty}\alpha_{j}(x-u)^{j}\), for some \(\{\alpha_{j}\}\subset\mathbb{R}\). The function is said to be real analytic on \(V\subset U\) if it is real analytic at each \(u\in V\) [46, Definition 1.1.5]. A real analytic function \(f\) over \(\mathbb{R}^{p}\) for some positive integer \(p>1\) can be defined similarly._

**Definition 8** (Semi-algebraic [36]).: _A subset \(S\) of \(\mathbb{R}^{p}\) is a real **semi-algebraic set** if there exists a finite number of real polynomial functions \(g_{ij},h_{ij}:\mathbb{R}^{p}\to\mathbb{R}\) such that \(S=\cup_{j=1}^{q}\cap_{i=1}^{m}\{\mathbf{u}\in\mathbb{R}^{p}:g_{ij}(\mathbf{u})=0\text{ and }h_{ij}(\mathbf{u})<0\}\). In addition, a function \(h:\mathbb{R}^{p+1}\to\mathbb{R}\cup\{+\infty\}\) is called **semi-algebraic** if its graph \(\{(\mathbf{u},t)\in\mathbb{R}^{p+1}:h(\mathbf{u})=t\}\) is a real semi-algebraic set._

Based on the above definitions, the following lemma can be obtained.

**Lemma 9**.: _Most of the commonly used NN training models (5) can be verified to satisfy the following:_

1. _the loss function \(\ell\) is a proper lower semicontinuous and nonnegative function;_
_for example, the squared, logistic, hinge, or cross-entropy losses;_

2. _the activation functions \(\sigma_{i}\) (\(i=1,\ldots,N-1\)) are Lipschitz continuous on any bounded set; for example, ReLU, leaky ReLU, sigmoid, hyperbolic tangent, linear, polynomial, or softplus activations;_

3. _the regularizers \(\tau_{i}\) and \(s_{i}\) (\(i=1,\ldots,N\)) are nonnegative lower semicontinuous convex functions; \(\tau_{i}\) and \(s_{i}\) can be the squared \(\ell_{2}\) norm, the \(\ell_{1}\) norm, the elastic net, the indicator function of some nonempty closed convex set (such as the nonnegative closed half-space, a box set, or a closed interval \([0,1]\)), or \(0\) if there is no regularization;_

4. _all these functions \(\ell,\sigma_{i},\tau_{i}\), and \(s_{i}\) (\(i=1,\ldots,N\)) are either real analytic or semialgebraic, and continuous on their domains._

_Accordingly, the objective function \(\mathcal{L}(\mathcal{W},\mathcal{V},\mathcal{U},\mathcal{G})\) in (5) has the **Kurdyka-Łojasiewicz (KL)** property._

Proof.: **On the loss function \(\ell\):** Since these losses are all nonnegative and continuous on their domains, they are proper lower semicontinuous and lower bounded by 0. In the following, we only verify that they are either real analytic or semialgebraic.

1. If \(\ell(t)\) is the squared (\(t^{2}\)) or exponential (\(e^{t}\)) loss, then according to [46], it is real analytic.
2. If \(\ell(t)\) is the logistic loss (\(\log(1+\mathrm{e}^{-t})\)), since it is a composition of logarithm and exponential functions, which are both real analytic, then according to [46], the logistic loss is real analytic.
3. If \(\ell(\boldsymbol{u};\boldsymbol{y})\) is the cross-entropy loss, i.e., given \(\boldsymbol{y}\in\mathbb{R}^{n_{N}}\), \(\ell(\boldsymbol{u};\boldsymbol{y})=-\frac{1}{n_{N}}[\langle\boldsymbol{y},\log\widehat{\boldsymbol{y}}(\boldsymbol{u})\rangle+\langle\boldsymbol{1}-\boldsymbol{y},\log(\boldsymbol{1}-\widehat{\boldsymbol{y}}(\boldsymbol{u}))\rangle]\), where \(\log\) is performed elementwise and \((\widehat{\boldsymbol{y}}(\boldsymbol{u})_{i})_{1\leq i\leq n_{N}}:=((1+\mathrm{e}^{-u_{i}})^{-1})_{1\leq i\leq n_{N}}\) for any \(\boldsymbol{u}\in\mathbb{R}^{n_{N}}\), it can be viewed as a linear combination of logistic functions; then by (a2) and [46], it is also real analytic.
4. If \(\ell\) is the hinge loss, i.e., given \(\boldsymbol{y}\in\mathbb{R}^{n_{N}}\), \(\ell(\boldsymbol{u};\boldsymbol{y}):=\max\{0,1-\langle\boldsymbol{u},\boldsymbol{y}\rangle\}\) for any \(\boldsymbol{u}\in\mathbb{R}^{n_{N}}\), then by [47], it is semialgebraic, because its graph is \(\mathrm{cl}(\mathcal{D})\), the closure of the set \(\mathcal{D}\), where \(\mathcal{D}=\{(\boldsymbol{u},z):1-\langle\boldsymbol{u},\boldsymbol{y}\rangle-z=0,1-\langle\boldsymbol{u},\boldsymbol{y}\rangle>0\}\cup\{(\boldsymbol{u},z):z=0,\langle\boldsymbol{u},\boldsymbol{y}\rangle-1>0\}\).

**On the activation function \(\sigma_{i}\):** Since all the considered activations are continuous on their domains, they are Lipschitz continuous on any bounded set. In the following, we only need to check that they are either real analytic or semialgebraic.

1. If \(\sigma_{i}\) is a linear or polynomial function, then according to [46], it is real analytic.
2. If \(\sigma_{i}(t)\) is the sigmoid, \((1+\mathrm{e}^{-t})^{-1}\), or the hyperbolic tangent, \(\tanh(t):=\frac{\mathrm{e}^{t}-\mathrm{e}^{-t}}{\mathrm{e}^{t}+\mathrm{e}^{-t}}\), then it is a composition \(g\circ h\) of two functions, where \(g(u)=\frac{1}{1+u},u>0\) and \(h(t)=\mathrm{e}^{-t}\) (resp. \(g(u)=1-\frac{2}{u+1},u>0\) and \(h(t)=\mathrm{e}^{2t}\) in the hyperbolic tangent case). According to [46], \(g\) and \(h\) in both cases are real analytic. Thus, the sigmoid and hyperbolic tangent functions are real analytic.
3. If \(\sigma_{i}\) is ReLU, i.e., \(\sigma_{i}(u):=\max\{0,u\}\), then ReLU is semialgebraic since its graph is \(\mathrm{cl}(\mathcal{D})\), the closure of the set \(\mathcal{D}\), where \(\mathcal{D}=\{(u,z):u-z=0,u>0\}\cup\{(u,z):z=0,-u>0\}\).
4. Similar to the ReLU case, if \(\sigma_{i}\) is leaky ReLU, i.e., \(\sigma_{i}(u)=u\) if \(u>0\) and \(\sigma_{i}(u)=au\) otherwise for some \(a>0\), then leaky ReLU is semialgebraic since its graph is \(\mathrm{cl}(\mathcal{D})\), the closure of the set \(\mathcal{D}\), where \(\mathcal{D}=\{(u,z):u-z=0,u>0\}\cup\{(u,z):au-z=0,-u>0\}\).
5. If \(\sigma_{i}\) is softplus, i.e., \(\sigma_{i}(u)=\frac{1}{t}\log(1+\mathrm{e}^{tu})\) for some \(t>0\), since it is a composition of the two real analytic functions \(\frac{1}{t}\log(1+u)\) and \(\mathrm{e}^{tu}\), then according to [46], it is real analytic.

**On \(\tau_{i}(\boldsymbol{W}_{i})\) and \(s_{i}(\boldsymbol{V}_{i})\):** By the specific forms of these regularizers, they are nonnegative, lower semicontinuous, and continuous on their domains. In the following, we only need to verify that they are either real analytic or semialgebraic.

* the squared \(\ell_{2}\) norm \(\|\cdot\|_{2}^{2}\): According to [47], the \(\ell_{2}\) norm is semialgebraic, and so is its square, viewed as the composition \(g\circ h\) where \(g(t)=t^{2}\) and \(h(\boldsymbol{W})=\|\boldsymbol{W}\|_{2}\).
* the squared Frobenius norm \(\|\cdot\|_{F}^{2}\): The squared Frobenius norm is semialgebraic since it is a finite sum of univariate squared functions.
* the elementwise \(1\)-norm \(\|\cdot\|_{1,1}\): Note that \(\|\boldsymbol{W}\|_{1,1}=\sum_{i,j}|\boldsymbol{W}_{ij}|\) is a finite sum of absolute value functions \(h(t)=|t|\). According to [47], the absolute value function is semialgebraic since its graph is the closure of the semialgebraic set \(\mathcal{D}=\{(t,s):t+s=0,-t>0\}\cup\{(t,s):t-s=0,t>0\}\). Thus, the elementwise \(1\)-norm is semialgebraic.
* the elastic net: Note that the elastic net is the sum of the elementwise \(1\)-norm and the squared Frobenius norm. Thus, by (c2), (c3), and [47], the elastic net is semialgebraic.
* If \(\tau_{i}\) or \(s_{i}\) is the indicator function of the nonnegative closed half-space or a closed interval (box constraints), then by [47], any polyhedral set is semialgebraic, such as the nonnegative orthant \(\mathbb{R}_{+}^{p\times q}=\{\boldsymbol{W}\in\mathbb{R}^{p\times q}:\boldsymbol{W}_{ij}\geq 0,\forall i,j\}\) and the closed interval. Thus, \(\tau_{i}\) or \(s_{i}\) is semialgebraic in this case.

We now verify the KL property of \(\mathcal{L}\).
From (5), we have

\[\begin{split}\mathcal{L}(\mathcal{W},\mathcal{V},\mathcal{U},\mathcal{G}):=&\mathcal{R}_{n}\left(\boldsymbol{V}_{N};\boldsymbol{Y}\right)+\sum_{i=1}^{N}\tau_{i}\left(\boldsymbol{W}_{i}\right)+\sum_{i=1}^{N}s_{i}\left(\boldsymbol{V}_{i}\right)\\ &+\frac{\gamma}{2}\sum_{i=1}^{N}\|\boldsymbol{V}_{i}-\sigma_{i}(\boldsymbol{U}_{i})\|_{F}^{2}+\frac{\rho}{2}\sum_{i=1}^{N}\|\boldsymbol{U}_{i}-\boldsymbol{W}_{i}\boldsymbol{V}_{i-1}\|_{F}^{2}+\frac{\tau}{2}\sum_{i=1}^{N}\|\boldsymbol{\mathcal{W}}_{i}-\mathrm{TTD}(\boldsymbol{r}_{i})\|_{F}^{2},\end{split}\]

which mainly includes the following types of functions:

\[\mathcal{R}_{n}\left(\boldsymbol{V}_{N};\boldsymbol{Y}\right),\ \tau_{i}\left(\boldsymbol{W}_{i}\right),\ s_{i}\left(\boldsymbol{V}_{i}\right),\ \left\|\boldsymbol{V}_{i}-\sigma_{i}\left(\boldsymbol{U}_{i}\right)\right\|_{F}^{2},\ \left\|\boldsymbol{U}_{i}-\boldsymbol{W}_{i}\boldsymbol{V}_{i-1}\right\|_{F}^{2},\ \left\|\boldsymbol{\mathcal{W}}_{i}-\mathrm{TTD}(\boldsymbol{r}_{i})\right\|_{F}^{2}.\]

To verify the KL property of the function \(\mathcal{L}\), we consider the above functions one by one.

On \(\mathcal{R}_{n}(\boldsymbol{V}_{N};\boldsymbol{Y})\): Note that given the output data \(\boldsymbol{Y}\), \(\mathcal{R}_{n}(\boldsymbol{V}_{N};\boldsymbol{Y}):=\frac{1}{n}\sum_{j=1}^{n}\ell((\boldsymbol{V}_{N})_{:j},\boldsymbol{y}_{j})\), where \(\ell:\mathbb{R}^{n_{N}}\times\mathbb{R}^{n_{N}}\rightarrow\mathbb{R}_{+}\cup\{0\}\) is some loss function. If \(\ell\) is real analytic (resp. semialgebraic), then \(\mathcal{R}_{n}(\boldsymbol{V}_{N};\boldsymbol{Y})\) is real analytic (resp. semialgebraic).

On \(\|\boldsymbol{V}_{i}-\sigma_{i}(\boldsymbol{U}_{i})\|_{F}^{2}\): Note that \(\|\boldsymbol{V}_{i}-\sigma_{i}(\boldsymbol{U}_{i})\|_{F}^{2}\) is a finite sum of simple functions of the form \(|v-\sigma_{i}(u)|^{2}\) for \(u,v\in\mathbb{R}\). If \(\sigma_{i}\) is real analytic (resp. semialgebraic), then \(v-\sigma_{i}(u)\) is real analytic (resp. semialgebraic), and further \(|v-\sigma_{i}(u)|^{2}\) is also real analytic (resp. semialgebraic), since \(|v-\sigma_{i}(u)|^{2}\) can be viewed as the composition \(g\circ h\) of the two functions \(g(t)=t^{2}\) and \(h(u,v)=v-\sigma_{i}(u)\).

On \(\|\boldsymbol{U}_{i}-\boldsymbol{W}_{i}\boldsymbol{V}_{i-1}\|_{F}^{2}\): Note that the function \(\|\boldsymbol{U}_{i}-\boldsymbol{W}_{i}\boldsymbol{V}_{i-1}\|_{F}^{2}\) is a polynomial function of the variables \(\boldsymbol{U}_{i},\boldsymbol{W}_{i}\), and \(\boldsymbol{V}_{i-1}\), and thus according to [46] and [47], it is both real analytic and semialgebraic.

On \(\tau_{i}(\boldsymbol{W}_{i}),s_{i}(\boldsymbol{V}_{i})\): All \(\tau_{i}\)'s and \(s_{i}\)'s are real analytic or semialgebraic, as verified above.

On \(\|\boldsymbol{\mathcal{W}}_{i}-\mathrm{TTD}(\boldsymbol{r}_{i})\|_{F}^{2}\): Note that the function \(\|\boldsymbol{\mathcal{W}}_{i}-\mathrm{TTD}(\boldsymbol{r}_{i})\|_{F}^{2}\) is a polynomial function of the variables \(\boldsymbol{W}_{i}\) and \(\mathcal{G}_{i}\), and thus it is both real analytic and semialgebraic.

Since each part of the function \(\mathcal{L}\) is either real analytic or semialgebraic, \(\mathcal{L}\) is a subanalytic function [48, p.43]. Furthermore, by continuity, \(\mathcal{L}\) is continuous on its domain. Therefore, \(\mathcal{L}\) is a KL function according to [49, Theorem 3.1].3

Footnote 3: Let \(h:\mathbb{R}^{p}\to\mathbb{R}\cup\{+\infty\}\) be a subanalytic function with closed domain, and assume that \(h\) is continuous on its domain; then \(h\) is a KL function.
Based on Lemmas 2, 3, and 9 and the conclusions in [36, Section 3.2], the following main theorem can be obtained.

**Theorem 10** (Global Convergence).: _Let \(\{\mathcal{P}^{k}:=(\{\mathbf{W}_{i}^{k}\}_{i=1}^{N},\{\mathbf{V}_{i}^{k}\}_{i=1}^{N},\{\mathbf{U}_{i}^{k}\}_{i=1}^{N},\{\mathcal{G}_{i}^{k}\}_{i=1}^{N})\}_{k\in\mathbb{N}}\) be the sequence generated by Algorithm 1. Suppose that \(\tau_{i}\) and \(\mathcal{L}\) are coercive for any \(i=1,\ldots,N\). Then for any \(\alpha,\gamma,\rho,\tau>0\) and any finite initialization \(\mathcal{P}^{0}\), the following hold:_
1. _\(\{\mathcal{L}(\mathcal{P}^{k})\}_{k\in\mathbb{N}}\) converges to \(\mathcal{L}^{*}\)._
2. _\(\{\mathcal{P}^{k}\}_{k\in\mathbb{N}}\) converges to a critical point of \(\mathcal{L}\)._
3. _If further the initialization \(\mathcal{P}^{0}\) is sufficiently close to some global minimum \(\mathcal{P}^{*}\) of \(\mathcal{L}\), then \(\mathcal{P}^{k}\) converges to \(\mathcal{P}^{*}\)._
4. _Let \(\theta\) be the KL exponent of \(\mathcal{L}\) at \(\mathcal{P}^{*}\). There hold: (a) if \(\theta=0\), then \(\{\mathcal{P}^{k}\}_{k\in\mathbb{N}}\) converges in a finite number of steps; (b) if \(\theta\in(0,\frac{1}{2}]\), then \(\|\mathcal{P}^{k}-\mathcal{P}^{*}\|_{F}\leq C\eta^{k}\) for all \(k\geq k_{0}\), for certain \(k_{0}>0,C>0,\eta\in(0,1)\); and (c) if \(\theta\in(\frac{1}{2},1)\), then \(\|\mathcal{P}^{k}-\mathcal{P}^{*}\|_{F}\leq Ck^{-\frac{1-\theta}{2\theta-1}}\) for \(k\geq k_{0}\), for certain \(k_{0}>0,C>0\)._
5. _\(\frac{1}{K}\sum_{k=1}^{K}\|\mathbf{g}^{k}\|_{F}^{2}\to 0\) at the rate \(\mathcal{O}(1/K)\), where \(\mathbf{g}^{k}\in\partial\mathcal{L}(\mathcal{P}^{k})\)._

The Lipschitz differentiability property is typically required to establish convergence for nonconvex optimization with multi-block variables in the existing literature [50]. However, the NN training problem (5) in this paper generally does not satisfy such a condition, for example, when the ReLU activation is used. Theorem 10 establishes global convergence under a very mild condition that most NN models satisfy.

**Extension to ResNets [34]:** the theoretical results in Theorem 10 can be extended to ResNets by considering the following optimization problem \[\min_{\mathcal{W},\mathcal{V}}\mathcal{L}_{0}(\mathcal{W},\mathcal{V})\text{ subject to }\mathbf{U}_{i}=\mathbf{W}_{i}\mathbf{V}_{i-1},\ \mathbf{V}_{i}-\mathbf{V}_{i-1}=\sigma_{i}(\mathbf{U}_{i}),\ \boldsymbol{\mathcal{W}}_{i}=\text{TTD}(\mathbf{r}_{i}),\quad i=1,\ldots,N,\] where the residual term \(\mathbf{V}_{i}-\mathbf{V}_{i-1}\) is considered instead of \(\mathbf{V}_{i}\). The corresponding algorithm can be easily modified from Algorithm 1.

## 4 Case Study

In this experiment, to evaluate the effectiveness and efficiency of our proposed method, NN model (5) training with different compression ratios (determined by the TT-rank \(\mathbf{r}_{i}\)) is conducted on the image classification task. In terms of the NN model, the ReLU activation, the squared loss, and an MLP with ten hidden layers as the network architecture are considered here. The number of hidden units in each layer is \(2^{9}=512\). The neural network is trained on the MNIST dataset, a handwritten digits dataset. The size of each input image is \(d_{0}=28\times 28=784\) and the output dimension is \(d_{11}=10\). The numbers of training and test samples are 60,000 and 10,000, respectively. For comparison, SGD is also considered as a benchmark method, where the learning rate is 0.001.
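To make the experimental setup concrete, the following is a minimal PyTorch sketch of the uncompressed ten-hidden-layer MLP and the squared loss used in this case study; the function names, the flattening of the \(28\times 28\) images, and the random batch are our own illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

def build_mlp(d_in=784, d_hidden=512, d_out=10, n_hidden=10):
    """MLP with `n_hidden` ReLU hidden layers, matching the case-study architecture."""
    layers, d_prev = [], d_in
    for _ in range(n_hidden):
        layers += [nn.Linear(d_prev, d_hidden), nn.ReLU()]
        d_prev = d_hidden
    layers.append(nn.Linear(d_prev, d_out))
    return nn.Sequential(*layers)

model = build_mlp()
x = torch.randn(512, 784)          # one mini-batch of flattened 28x28 images
y = torch.randn(512, 10)           # one-hot targets in practice
loss = nn.MSELoss()(model(x), y)   # squared loss, as in the experiment
```

With CR<1, each weight matrix \(\mathbf{W}_{i}\) would additionally be tied to a TT decomposition through the penalty \(\frac{\tau}{2}\|\boldsymbol{\mathcal{W}}_{i}-\mathrm{TTD}(\boldsymbol{r}_{i})\|_{F}^{2}\) of problem (5).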
For each experiment, the same mini-batch size (512) and the same initializations are used for all algorithms. All the experiments are repeated ten times to obtain the average performance. Specifically, all the weights \(\{\mathbf{W}_{i}\}_{i=1}^{N}\) are initialized from a Gaussian distribution with a standard deviation of 0.01. The auxiliary variables \(\{\mathbf{U}_{i}\}_{i=1}^{N}\), state variables \(\{\mathbf{V}_{i}\}_{i=1}^{N}\), and TT-cores \(\{\mathcal{G}_{i}\}_{i=1}^{N}\) are initialized by a single forward pass [50]. Under these settings, the training loss, training accuracy, and test accuracy are shown in Table 1. With a smaller CR (\(\frac{\#\text{parameters after compression}}{\#\text{parameters before compression}}\)), a higher training loss is observed. Our proposed method with CR<1 can outperform the uncompressed method and SGD. In addition, the curves of the training loss and test accuracy are plotted in Figure 2. Figure 2(a) shows that the proposed method converges with different compression ratios. The training loss of our proposed method also shows a monotone decreasing trend, which verifies the statements in Theorem 10. Figure 2(b) shows that, for different CR (<1), the test accuracy of our proposed method keeps increasing as the number of iterations increases. When CR=1 (the model without compression), the test accuracy increases first and then decreases. This result demonstrates that model compression can prevent overfitting. In addition, our proposed method with CR<1 can outperform SGD significantly in terms of test accuracy. \begin{table} \begin{tabular}{l r r r r r r} \hline \hline & \multicolumn{2}{c}{Total Loss} & \multicolumn{2}{c}{Training Accuracy} & \multicolumn{2}{c}{Test Accuracy} \\ \hline CR & Mean & Std & Mean & Std & Mean & Std \\ \hline 1.00 & 0.6726 & 0.0170 & 0.8921 & 0.0017 & 0.8794 & 0.0034 \\ 0.77 & 0.9804 & 0.0331 & 0.9768 & 0.0003 & 0.9673 & 0.0014 \\ 0.34 & 1.3564 & 0.0440 & 0.9677 & 0.0006 & 0.9623 & 0.0009 \\ 0.09 & 3.4612 & 0.0264 & 0.9439 & 0.0009 & 0.9443 & 0.0014 \\ 0.02 & 4.5591 & 0.0331 & 0.9046 & 0.0027 & 0.9090 & 0.0021 \\ SGD & NA & NA & 0.8940 & 0.0295 & 0.9002 & 0.0274 \\ \hline \hline \end{tabular} \end{table} Table 1: Results of the tenBCD algorithm with different compression ratios. Figure 2: The convergence analysis of the tenBCD algorithm with different compression ratios: (a) training loss; (b) test accuracy.

## 5 Conclusion

In this paper, a holistic framework is proposed for tensor decomposition-based NN model compression by formulating TT decomposition-based NN training as a nonconvex optimization problem. The framework can be extended to other formats of tensor decomposition, such as Tucker decomposition and CP decomposition. For the first time in the literature on tensor decomposition-based NN model compression, global convergence is guaranteed for the proposed tensor BCD (tenBCD) algorithm. Specifically, tenBCD converges to a critical point at a rate of \(\mathcal{O}(1/k)\), where \(k\) is the number of iterations. The empirical experiments show that the proposed method converges and runs efficiently in practice. Compared with SGD, the proposed method can maintain a high compression rate and high accuracy simultaneously.
2307.05633
Transaction Fraud Detection via an Adaptive Graph Neural Network
Many machine learning methods have been proposed to achieve accurate transaction fraud detection, which is essential to the financial security of individuals and banks. However, most existing methods leverage original features only or require manual feature engineering. They lack the ability to learn discriminative representations from transaction data. Moreover, criminals often commit fraud by imitating cardholders' behaviors, which causes the poor performance of existing detection models. In this paper, we propose an Adaptive Sampling and Aggregation-based Graph Neural Network (ASA-GNN) that learns discriminative representations to improve the performance of transaction fraud detection. A neighbor sampling strategy is performed to filter noisy nodes and supplement information for fraudulent nodes. Specifically, we leverage cosine similarity and edge weights to adaptively select neighbors with similar behavior patterns for target nodes and then find multi-hop neighbors for fraudulent nodes. A neighbor diversity metric is designed by calculating the entropy among neighbors to tackle the camouflage issue of fraudsters and explicitly alleviate the over-smoothing phenomena. Extensive experiments on three real financial datasets demonstrate that the proposed method ASA-GNN outperforms state-of-the-art ones.
Yue Tian, Guanjun Liu, Jiacun Wang, Mengchu Zhou
2023-07-11T07:48:39Z
http://arxiv.org/abs/2307.05633v1
# Transaction Fraud Detection via an Adaptive Graph Neural Network

###### Abstract

Many machine learning methods have been proposed to achieve accurate transaction fraud detection, which is essential to the financial security of individuals and banks. However, most existing methods leverage original features only or require manual feature engineering. They lack the ability to learn discriminative representations from transaction data. Moreover, criminals often commit fraud by imitating cardholders' behaviors, which causes the poor performance of existing detection models. In this paper, we propose an Adaptive Sampling and Aggregation-based Graph Neural Network (ASA-GNN) that learns discriminative representations to improve the performance of transaction fraud detection. A neighbor sampling strategy is performed to filter noisy nodes and supplement information for fraudulent nodes. Specifically, we leverage cosine similarity and edge weights to adaptively select neighbors with similar behavior patterns for target nodes and then find multi-hop neighbors for fraudulent nodes. A neighbor diversity metric is designed by calculating the entropy among neighbors to tackle the camouflage issue of fraudsters and explicitly alleviate the over-smoothing phenomena. Extensive experiments on three real financial datasets demonstrate that the proposed method ASA-GNN outperforms state-of-the-art ones.

Graph neural network, transaction fraud, weighted multigraph, attention mechanism, entropy.

## I Introduction

Online transactions are a popular and convenient form of electronic payment. They also increase the incidence of financial fraud and cause massive monetary losses to individuals and banks. The global losses reached 25 billion dollars in 2018 and have kept increasing [1]. According to statistics from the Nilson Report, the losses jumped to 28.65 billion in 2020 [2]. Financial institutions have taken measures to prevent fraud. In traditional methods, online transactions are checked against some expert rules, and then suspicious transactions are fed to a detection model. The task of the detection model is to mine fraud patterns (represented as some rules) from sizeable historical transaction data so that the model can find transactions that match these rules. However, the ability of these rules is limited and hardly adapts to the fast changes in fraud patterns. Quickly mining and representing as many fraud patterns as possible, and adapting to their changes, is complicated, since fraudsters and fraud detectors have long been locked in a dynamic game [3]. Transaction records often contain transaction-related elements, such as location, date, time, and relations. Although there are machine learning-based methods to detect fraudulent transactions, most require manual feature engineering based on the above elements and the construction of supervised classifiers [4, 5]. These methods fail to automatically detect fraud patterns and express important behavior information [6]. On the one hand, there are many interactions among transactions [7]. On the other hand, the transaction behaviors of users are dynamic [8]. Hence, designing a more discriminative representation framework for transaction fraud detection remains a big challenge. The camouflage of fraudsters is another challenge that causes performance degradation and poor generalization of many detection approaches [9].
Recently, graph neural networks (GNNs) have been used to learn representations automatically for some prediction tasks [10, 11, 12]. In contrast to traditional machine learning methods, a GNN utilizes a neighborhood aggregation strategy to learn representations and then uses a neural network for node classification and link prediction [13]. These methods can capture rich interactions among samples and avoid feature engineering [9]. However, learning discriminative representations by directly applying these graph techniques to our transaction fraud detection problem is challenging. They ignore the relationship among features and the dynamic changes in cardholders' behaviors. We visualize the representations of the general GNN models, including GraphSAGE and GCN. As shown in Figs. 1 and 2, they fail to distance the fraudulent transactions from the legitimate ones. Moreover, GNNs face an over-smoothing problem (indistinguishable representations of nodes in different classes), which results from the over-mixing of information and noise [14, 15, 16]. In the applications of transaction fraud detection, the fact that fraudulent nodes are connected to legitimate ones by disguising the cardholders' behaviors [9], as shown in Fig. 3, exacerbates the effects of the over-smoothing issue.

To tackle the above problems, we propose an Adaptive Sampling and Aggregation-based GNN for transaction fraud detection, named ASA-GNN. It integrates our newly proposed adaptive sampling and aggregation methods. First, we use raw transaction records to construct the transaction graph, which considers the relationship of features and the dynamic changes in cardholders' behaviors. Based on it, we design a sampler to filter as many noisy neighbors as possible while retaining structural information. Cosine similarity and edge weight are used to select similar neighbor nodes. Then, we over-sample such similar neighbors for fraudulent nodes to compensate for their lack of links. To deal with the camouflage issue of fraudsters, a neighbor diversity metric is defined and calculated based on the entropy among neighbor nodes to distinguish whether neighborhood aggregation is harmful. Each node has its own neighborhood aggregation degree. As a result, intraclass compactness and interclass separation can be guaranteed.

This work aims to make the following new contributions:

1. We propose a graph neural network that learns discriminative representations to improve the performance of transaction fraud detection.
2. We propose a new sampling strategy to filter noisy nodes and capture neighbors with the same behavior pattern for fraudulent nodes, based on the distance between two nodes measured by cosine similarity and edge weight.
3. We define a neighbor diversity metric to make each node adaptive in its aggregation process, which handles the camouflage issue of fraudsters and alleviates the over-smoothing phenomena.
4. Extensive experiments conducted on three financial datasets show that the proposed ASA-GNN achieves significant performance improvements over traditional and state-of-the-art methods.

The rest of this paper is organized as follows. Section II presents the related work. Section III describes the proposed ASA-GNN. Section IV presents three real datasets and discusses the experimental results of performance comparison, ablation studies, and parameter sensitivity analysis. Section V concludes the paper.
## II Background and Related Work

### _Transaction Fraud Detection Model_

Researchers have proposed many methods based on expert rules and machine learning in many fields, including transaction fraud detection, and these have achieved much success [17, 18, 19, 20]. Their core is to learn some information from historical data to detect fraudulent transactions automatically. They have been proven effective for known fraud patterns but cannot deal with unknown fraud types [21]. Experts have therefore started to use deep learning methods to solve this problem. Deep learning methods can automatically capture cross-feature relationships according to the correlations among the transaction features, so that transaction records can be accurately portrayed, which helps detect fraudulent behaviors [22]. The Convolutional Neural Network (CNN) method is one of the commonly used methods [22]. In addition to the relationship among transaction features, existing feature engineering methods extract the association of transaction records to improve performance [23]. The aggregation strategy is a classical feature engineering method for transaction fraud detection. It groups the transaction records according to a specified time window and then extracts the amount-related features and numbers of these records as the aggregation features [24]. Location and merchant code are also considered and used to generate aggregation features, adding the user's periodic behavior information [6]. Moreover, some methods, such as a Recurrent Neural Network (RNN) method [25], have started to explore the dynamic information of transactions versus time [8, 26]. A Long Short-Term Memory (LSTM) network relies on the evolution of data distribution versus time to capture the dynamic information [26]. By considering the small and various changes in data distribution, an ensemble method is proposed to achieve a better performance [8].

Fig. 1: GraphSAGE's visualization results. The red nodes represent fraudulent transactions, while the others represent legitimate ones. (a) Before training on our financial dataset. (b) After its training on our financial dataset.

Fig. 2: GCN's visualization results. The red nodes represent fraudulent transactions, while the others represent legitimate ones. (a) Before training on our financial dataset. (b) After its training on our financial dataset.

Fig. 3: The camouflage issue of fraudsters. Fraudsters attenuate their suspicions by disguising the cardholders' behaviors such that the detection system thinks they are legitimate transactions.

To comprehensively focus on the various mentioned relationships, researchers have utilized transaction records to construct a graph [27]. For example, a GNN can capture the relationships among transactions and achieve better performance in fraud detection. However, since this method uses only one feature to construct a sparse graph, it fails to mine many useful fraud features [27]. Most of the mentioned approaches fail to comprehensively consider the relationship among transactions, the relationship among the transaction features, and the dynamic change information of cardholders' behaviors. In our previous work [28], we constructed a weighted multigraph to tackle this challenge, since it can use multiple features as long as logic propositions can represent them. Based on this weighted multigraph, we use a GNN to extract the above relationships and the dynamic changes. However, the method has some shortcomings, as stated in Section I.
This work is motivated by the need to overcome such shortcomings.

### _GNN_

A GNN is an effective framework that can learn graph representations by modeling the relationships of non-Euclidean graph data [13]. The concept was initially outlined in [29], where a convolution operation from image processing is applied to graph data processing. After that, several graph-based methods were proposed and applied. Most earlier algorithms obtain the embedding representations in two steps: 1) obtain a sequence of neighbor nodes for each node by using a random walk strategy; 2) use machine learning models to gain the topological structure information of a graph and obtain the representation for each node. Although the topological structure information is investigated, these algorithms ignore the attributes of nodes. Some graph methods utilize the attributes of nodes based on text and statistical information [30, 31, 32]. For example, a Graph Convolutional Network (GCN) method leverages spectral graph convolutions to extract feature information [30]. A Graph Attention Network (GAT) method specifies the contribution of different neighbors by an attention mechanism [31]. GraphSAGE can sample and aggregate neighbor nodes to update the embedding of nodes flexibly [32]. A Relational Graph Convolutional Network (RGCN) method can model relational data [33]. Recent studies have handled large-scale graphs and overcome the problem of computational complexity [34, 35]. Considering that a heterogeneous graph contains many types of nodes and edge information, as well as their changes over time, researchers have proposed heterogeneous GNNs [36] and dynamic GNNs [37]. In addition, the interpretability and structural optimization of GNNs are also studied [38]. However, applying the above GNNs to our transaction fraud detection problem fails to utilize all possible features to construct a graph. A graph built in this way may lack much vital information.

A competitive graph neural network (CGNN) method [39] utilizes a heterogeneous graph to model normal and fraudulent behaviors in eCommerce. By a 3D convolutional mechanism, a spatial-temporal attention-based graph network (STAGN) method [40] can detect fraudulent transactions from a location-based transaction graph. MAFI utilizes aggregator-level and relation-level attention to learn neighborhood information and different relations [41]. LGM-GNN uses local and global information to learn more discriminative representations for prediction tasks [42]. Although these methods have tried to learn more discriminative representations, they ignore that excessive aggregation makes the representations of nodes in different classes indistinguishable. This results in the over-smoothing phenomenon in GNNs [14]. In the applications of transaction fraud detection, fraudsters often disguise cardholders' behaviors by various means. As a result, there are edges between fraudulent nodes and legitimate ones [9, 43]. This exacerbates the influence of the over-smoothing phenomenon. Facing the camouflage issue of fraudsters, the CAmouflage-REsistant GNN (CARE-GNN) method [9] defines a label-aware similarity measure using the \(l_{1}\)-distance to select neighbors. However, it only focuses on the similarity of labels; the \(l_{1}\)-distance loses its effectiveness in high-dimensional space and fails to describe the similarity among complex behaviors. In our previous work [28], we measure the distance using cosine similarity, which makes up for the shortcoming of the \(l_{1}\)-distance.
The TG constructed according to transaction data in [28] can focus on the relationship among dynamic transaction behaviors, static transaction attributes, and transactions themselves. However, it ignores that fraudsters avoid trades with others to cover up their behaviors, which results in a lack of links among fraudulent nodes. Meanwhile, it fails to tackle the issue of over-smoothing. In this work, we utilize cosine similarity and edge weight to remedy the mentioned flaws, and we focus on estimating whether neighborhood aggregation is harmful so as to solve the over-smoothing issue.

## III Proposed Approach

This section first describes the preliminaries in the field of transaction fraud detection. After that, ASA-GNN is described in detail. Important notations are listed in Table 1.

### _Preliminary_

**Definition 1**: _Transaction Record. A transaction record \(r\) consists of \(l\) attributes and a label \(y\in\{0,1\}\)._

**Definition 2**: _Transaction Graph (TG). A transaction graph is a weighted multigraph \(\mathcal{G}=(\mathcal{V},\mathcal{R},\mathcal{P},Weight,\mathcal{E})\), where_

1. \(\mathcal{V}\) is the set of \(|\mathcal{R}|\) nodes and each node \(v\) denotes a record \(r\in\mathcal{R}\);
2. \(\mathcal{P}\) is the set of \(m\) logic propositions that are assertions with respect to the attributes of transaction records, and \(m\geq 1\);
3. \(Weight:\mathcal{P}\rightarrow\mathbb{N}\) is a weight function; and
4. \(\mathcal{E}=\bigcup_{i=1}^{m}\{(a,b)_{w}^{p_{i}}\mid a\in\mathcal{V}\wedge b\in\mathcal{V}\wedge a\neq b\wedge p_{i}(a,b)=True\wedge w=Weight(p_{i})\}\).

The logic propositions are based on expert rules such that the TG ensures the effectiveness of features and reflects the dynamic changes in cardholders' behaviors. In [28], we have defined the TG. In comparison with [28], the biggest contributions of this paper lie in the adaptive sampling and aggregation methods, which allow us to utilize a GNN to learn discriminative representations.

Assume that the underlying graph of a GNN is \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\). GNNs aim to learn the embedding representations for all nodes and a mapping function such that the predicted results can be obtained. In a general GNN framework [13, 32], the update function for a single layer is as follows: \[h_{v}^{k}=\sigma(\mathcal{W}^{k}(h_{v}^{k-1}\oplus\mathbb{A}^{k}(\{h_{v^{\prime}}^{k-1}:v^{\prime}\in\mathcal{N}_{v}\}))), \tag{1}\] where \(h_{v}^{k}\), \(\sigma\), and \(\mathcal{W}^{k}\) represent the embedding representation of node \(v\), the activation function, and the shared parameter matrix at the \(k\)-th layer, respectively. Given a node \(v\), \(\mathcal{N}_{v}\) represents the set of its neighbor nodes, \(\mathbb{A}^{k}\) denotes an aggregator function at the \(k\)-th layer which can aggregate the rich information from \(\mathcal{N}_{v}\), and \(\oplus\) is the operator that combines the embedding of \(v\) and its neighbor nodes. General aggregator functions include a mean aggregator and a pooling one. The mean aggregator is defined as: \[\mathbb{A}^{k}=\frac{1}{|\mathcal{N}_{v}|}\sum_{v^{\prime}\in\mathcal{N}_{v}}h_{v^{\prime}}^{k-1}. \tag{2}\] After the \(K\) layers' learning, we utilize a classification layer to predict the samples' labels: \[\hat{y}_{v}=\mathrm{Softmax}(\mathcal{W}^{K}h_{v}^{K}). \tag{3}\]
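As a concrete illustration of the generic update in Eqs. (1)-(3), here is a minimal PyTorch sketch of one layer with a mean aggregator and a final softmax classification layer; the tensor shapes, class name, and toy neighbor lists are our own assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class MeanAggLayer(nn.Module):
    """One GNN layer: h_v^k = sigma(W^k [h_v^{k-1} ; mean of neighbor embeddings])."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(2 * d_in, d_out)

    def forward(self, h, neighbors):
        # h: (n, d_in) node embeddings; neighbors: one index tensor per node
        agg = torch.stack([h[idx].mean(dim=0) for idx in neighbors])  # Eq. (2)
        return torch.relu(self.lin(torch.cat([h, agg], dim=1)))       # Eq. (1)

head = nn.Linear(16, 2)   # classification layer, Eq. (3)
h = torch.randn(5, 16)
neighbors = [torch.tensor([1, 2]), torch.tensor([0]), torch.tensor([0, 3]),
             torch.tensor([2, 4]), torch.tensor([3])]
probs = torch.softmax(head(MeanAggLayer(16, 16)(h, neighbors)), dim=1)
```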
In the field of fraud detection, we first construct a TG from transaction records \(\mathcal{R}=\{r_{1},r_{2},...,r_{|\mathcal{R}|}\}\), then train the GNN to get the nodes' embedding representations at the last layer, and apply a classification layer to predict whether a transaction is fraudulent.

### _ASA-GNN_

The framework of ASA-GNN is illustrated in Fig. 4. Its main components are neighbor sampling and neighborhood aggregation. In the neighbor sampling stage, to filter noisy neighbors and retain structural information, we define a novel neighbor sampling policy based on cosine similarity and edge weight. In the neighborhood aggregation process, the contributions of different neighbors are specified by an attention mechanism. After that, we apply a diversity metric to determine the aggregation degree. Finally, a softmax function calculates the probability that a transaction is fraudulent. All details are described as follows.

#### III-B1 Neighbor Sampling Strategy Based on Cosine Similarity and Edge Weight

Simply put, GNNs leverage the information of neighbors to learn more discriminative representations. Existing studies, such as GraphSAGE [32], adopt a random sampling policy under a homophily assumption. However, they ignore the quality of the information from neighbor nodes. Some useless neighbor nodes around a target node result in indistinguishable representations of nodes in different classes. In addition, similar neighbor nodes may provide rich information. Therefore, selecting valid neighbors is necessary before aggregation. The distance between two nodes and the weight of the edge connecting them are considered to form a novel neighbor sampling policy to deal with this problem.

Given a node \(v\in\mathcal{V}\) and its neighbor \(v^{\prime}\in\mathcal{V}\), we utilize cosine similarity, which is commonly used to analyze user behaviors in practical applications, to compute their distance: \[\overleftrightarrow{v,v^{\prime}}=\exp(r_{v^{\prime}}\cdot r_{v}), \tag{4}\] where \(r_{v}\) and \(r_{v^{\prime}}\) are the normalized attribute vectors of nodes \(v\) and \(v^{\prime}\). We utilize the exponential function to ensure non-negative similarity. Note that there may be multiple edges between two nodes in a TG, and the weight of each edge is assigned. The most significant weight of the edges between \(v\) and \(v^{\prime}\) is computed as follows: \[w_{v,v^{\prime}}=\max\{\mu_{i}\cdot Weight(p_{i})\}_{i\in\{1,\cdots,m\}}, \tag{5}\] where \[\mu_{i}=\left\{\begin{array}{ll}1&p_{i}(v,v^{\prime})=True\\ 0&\text{otherwise.}\end{array}\right. \tag{6}\] Finally, given a node \(v\in\mathcal{V}\), the probability of its neighbor \(v^{\prime}\) being selected is defined as \[\mathbb{P}_{v,v^{\prime}}=\frac{w_{v,v^{\prime}}\cdot\overleftrightarrow{v,v^{\prime}}}{\sum_{v^{\prime}\in\mathcal{N}_{v},v\neq v^{\prime}}w_{v,v^{\prime}}\cdot\overleftrightarrow{v,v^{\prime}}}, \tag{7}\] where \(\mathcal{N}_{v}\) denotes the set of neighbor nodes of \(v\). We perform Top-\(\hat{z}\) neighbor sampling to filter noise information from useless neighbor nodes. After the above neighbor sampling, \(\mathcal{N}^{\prime}_{v}\) contains the selected neighbor nodes of \(v\).
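The sampling probability in Eqs. (4)-(7) can be sketched directly; below is a small NumPy illustration in which the normalized attribute vectors, the symmetric toy weight matrix, and the function names are our own choices.

```python
import numpy as np

def selection_probs(r, weights):
    """r: (n, d) normalized attribute vectors; weights: (n, n) max edge weights w_{v,v'}.
    Returns P[v, v']: probability of picking v' when sampling neighbors of v (Eq. 7)."""
    sim = np.exp(r @ r.T)                       # pairwise similarity, Eq. (4)
    score = weights * sim                       # weight times similarity
    np.fill_diagonal(score, 0.0)                # a node never samples itself
    return score / score.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
r = rng.normal(size=(4, 8))
r /= np.linalg.norm(r, axis=1, keepdims=True)
w = rng.integers(1, 4, size=(4, 4)).astype(float)
P = selection_probs(r, np.maximum(w, w.T))
top_z = np.argsort(-P[0])[:2]                   # Top-z sampling for node 0
```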
However, fraudulent nodes still need neighbors to enrich their information, and we should find nodes with the same behavior pattern for them. For this purpose, we over-sample neighbors for a fraudulent node \(v\) as follows: \[\mathcal{N}^{f}_{v}=\{v^{\prime}\in\mathcal{V}\mid v^{\prime}\not\in\mathcal{N}^{\prime}_{v}\wedge y_{v^{\prime}}=1\wedge\overleftrightarrow{v,v^{\prime}}<d_{f}\}, \tag{8}\] where \(\overleftrightarrow{v,v^{\prime}}\) is the distance between nodes \(v\) and \(v^{\prime}\) calculated by Eq. (4) and \(d_{f}\) is a distance threshold. Therefore, if \(v\) is fraudulent, the set of its neighbors can be defined as follows: \[\mathcal{N}_{v}=\mathcal{N}^{\prime}_{v}\cup\mathcal{N}^{f}_{v}. \tag{9}\] If \(v\) is legitimate, the set of its neighbors \(\mathcal{N}_{v}\) is simply \(\mathcal{N}^{\prime}_{v}\).

#### III-B2 Attention Mechanism

After the neighbor sampling process, \(\mathcal{N}_{v}\) contains the selected neighbor nodes of \(v\). Then an aggregator function can generate the embedding representations of \(v\) at each layer. Given a node \(v\), \(h^{k}_{v}\) denotes its representation at the \(k\)-th layer, where \(v\in\mathcal{V}\) and \(k=1,2,...,K\). The aggregator collects the information from \(\mathcal{N}_{v}\), the set of selected neighbor nodes, i.e., \[h^{k}_{\mathcal{N}_{v}}=\alpha^{k}_{v,v^{\prime}}\cdot\mathbb{A}^{k}(h^{k-1}_{v^{\prime}},\forall v^{\prime}\in\mathcal{N}_{v}), \tag{10}\] \[\alpha^{k}_{v,v^{\prime}}=\frac{\exp(LeakyReLU(e^{v^{\prime}}_{v}))}{\sum_{i\in\mathcal{N}_{v}}\exp(LeakyReLU(e^{i}_{v}))}, \tag{11}\] \[e^{v^{\prime}}_{v}=f(\mathcal{W}^{k}h^{k}_{v}\,\|\,\mathcal{W}^{k}h^{k}_{v^{\prime}}), \tag{12}\] where \(\alpha^{k}_{v,v^{\prime}}\) denotes the attention score of \(v\) and \(v^{\prime}\) at the \(k\)-th layer, \(\mathbb{A}^{k}\) is an aggregator function at the \(k\)-th layer, \(LeakyReLU\) is an activation function, \(f\) is a function mapping the high-dimensional feature to a real number, and \(\mathcal{W}^{k}\) is a shared parameter matrix.

Fig. 4: The overview of ASA-GNN: 1) Neighbor sampling at the node level, in which Top-\(\hat{z}\) neighbors are sampled to filter noise information for each node, followed by over-sampling neighbors for fraudulent nodes. 2) Calculating the attention score and aggregation degree to learn representations. 3) Estimating the probability of a transaction being predicted as fraudulent at the detection layer.

Generally, the interaction between two transaction records within a short interval is more important. Therefore, given a node \(v\) and its neighbor \(v^{\prime}\), the attention score between them at the \(k\)-th layer is adjusted by the normalized time interval \(\{\widetilde{\delta}t_{v,v^{\prime}},\forall v^{\prime}\in\mathcal{N}_{v}\}\), i.e., \[\alpha_{v,v^{\prime}}^{k}=\widetilde{\delta}t_{v,v^{\prime}}\cdot\frac{\exp(LeakyReLU(e_{v}^{v^{\prime}}))}{\sum_{i\in\mathcal{N}_{v}}\exp(LeakyReLU(e_{v}^{i}))}. \tag{13}\]
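A minimal PyTorch sketch of the time-adjusted attention in Eqs. (10)-(13) follows; implementing the mapping \(f\) as a learned linear layer over concatenated embeddings and the module name are our own assumptions.

```python
import torch
import torch.nn as nn

class TimeAwareAttention(nn.Module):
    """Attention over a node's sampled neighbors, rescaled by normalized time gaps."""
    def __init__(self, d):
        super().__init__()
        self.W = nn.Linear(d, d, bias=False)
        self.a = nn.Linear(2 * d, 1, bias=False)    # f: concat -> scalar, Eq. (12)

    def forward(self, h_v, h_nbrs, dt):
        # h_v: (d,); h_nbrs: (m, d); dt: (m,) normalized time intervals
        hv = self.W(h_v).expand(h_nbrs.size(0), -1)
        e = self.a(torch.cat([hv, self.W(h_nbrs)], dim=1)).squeeze(-1)
        alpha = dt * torch.softmax(nn.functional.leaky_relu(e), dim=0)  # Eqs. (11), (13)
        return alpha @ h_nbrs                        # weighted aggregation, Eq. (10)

attn = TimeAwareAttention(16)
out = attn(torch.randn(16), torch.randn(3, 16), torch.tensor([0.9, 0.5, 0.1]))
```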
#### III-B3 Adaptive Neighborhood Aggregation

Over-smoothing is a common problem in GNN methods. Existing methods assume that the introduction of noise in the aggregation process causes the problem. Specifically, the information from neighbor nodes in the same class makes the representations maintain compactness within the class, which reflects the advantage of GNNs. Interactions between a target node and its neighbors in different classes, however, may result in indistinguishable representations. Although neighbor sampling can help us filter some noisy nodes, the camouflage issue of fraudsters brings another challenge. In applications of transaction fraud detection, fraudsters often disguise the cardholders' behaviors by various means, so that there exist edges connecting fraudulent nodes and legitimate ones in a TG. This exacerbates the effect of the over-smoothing issue. Therefore, when a node has a neighbor in a different class, we should consider that the neighbor may be noisy. We introduce a neighbor diversity metric \(\mathcal{D}\) by computing the entropy among the neighbors of a target node, i.e., \[\mathcal{D}(v)=-\sum_{c\in C}P_{c}(v)\log(P_{c}(v)), \tag{14}\] \[P_{c}(v)=\frac{|\{v^{\prime}\in\mathcal{N}_{v}\mid y_{v^{\prime}}=c\}|}{|\mathcal{N}_{v}|}, \tag{15}\] where \(C\) represents the set of label classes, including the legitimate and fraudulent ones, and \(y_{v^{\prime}}\) is the label of \(v^{\prime}\). The greater the value of \(\mathcal{D}(v)\) is, the more diverse the neighbors of \(v\) are. Considering that each node has its own \(\mathcal{D}\), we use a gating function to control the aggregation degree, i.e., \[g_{v}^{k}=\sigma(-Norm(\mathcal{D}(v))),\forall v\in\mathcal{V}, \tag{16}\] where \(Norm\) is the batch normalization over all nodes in a TG. The range of \(g_{v}^{k}\) is (0, 1). When plenty of noisy neighbors are connected to the target node \(v\), it is very small and close to 0. Using the gating function, we allow each node to have its own neighborhood aggregation degree. To better understand our adaptive neighborhood aggregation process, the interaction operations of a target node and its neighbors are described in Fig. 5.

Fig. 5: The process of adaptive neighborhood aggregation.

The update function for a single layer is as follows: \[h_{v}^{k}=\sigma(\mathcal{W}^{k}\cdot concat(h_{v}^{k-1},g_{v}^{k}h_{\mathcal{N}_{v}}^{k})). \tag{17}\]

#### III-B4 Detection Layer

For the target node \(v\), \(h_{v}^{K}\) is the final representation outputted by the \(K\)-th layer. After that, a softmax function can be applied to estimate the probability of a transaction being fraudulent. The loss function is computed as follows: \[\mathcal{L}=\sum_{i}^{|\mathcal{R}|}-[y_{i}\cdot\log(\hat{y}_{i})+(1-y_{i})\cdot\log(1-\hat{y}_{i})], \tag{18}\] where \(y_{i}\) and \(\hat{y}_{i}\) are the label of the \(i\)-th transaction record and the probability that the sample is predicted to be fraudulent, respectively, and \(|\mathcal{R}|\) represents the number of transactions.

```
Input: TG \(\mathcal{G}=(\mathcal{V},\mathcal{R},\mathcal{P},Weight,\mathcal{E})\), number of layers \(K\), neighborhood sample size \(\hat{z}\), edge weights \(\{Weight_{1},...,Weight_{n}\}\), non-linear activation function \(\sigma\).
Output: embedding representation \(h_{v}^{K}\) for each node
 1: \(h_{v}^{0}\leftarrow r_{v},\forall v\in\mathcal{V}\)   // initialization
 2: for each layer \(k=1,2,...,K\) do
 3:   for each node \(v\in\mathcal{V}\) do
 4:     // Neighbor sampling
 5:     \(\mathcal{N}^{\prime}_{v}\leftarrow\) select \(\hat{z}_{k}\) neighbors from \(\mathcal{N}_{v}\) according to Eq. (7)
 6:     if \(y_{v}=1\) then
 7:       over-sample neighbors according to Eq. (8)
 8:     end if
 9:   end for
10:   for each node \(v\in\mathcal{V}\) do
11:     // Aggregation
12:     \(\alpha_{v,v^{\prime}}^{k}\leftarrow\) Eq. (13)
13:     \(h_{\mathcal{N}_{v}}^{k}\leftarrow\) Eq. (10)
14:     \(g_{v}^{k}\leftarrow\) Eq. (16)
15:     \(h_{v}^{k}\leftarrow\) Eq. (17)
16:   end for
17:   \(h_{v}^{k}\leftarrow h_{v}^{k}/\left\|h_{v}^{k}\right\|_{2},\forall v\in\mathcal{V}\)
18: end for
```
**Algorithm 1** ASA-GNN Approach

The training process of ASA-GNN is illustrated in Algorithm 1.
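The neighbor diversity metric and gate of Eqs. (14)-(16) reduce to a few lines; the following PyTorch sketch, with a two-class label set and our own normalization constants, is an illustration rather than the authors' code.

```python
import torch

def diversity(neighbor_labels):
    """Entropy of the neighbor label distribution, Eq. (14); labels in {0, 1}."""
    p1 = neighbor_labels.float().mean()
    probs = torch.stack([1 - p1, p1]).clamp_min(1e-12)
    return -(probs * probs.log()).sum()

def gates(all_neighbor_labels):
    """Per-node aggregation degree g_v, Eqs. (15)-(16)."""
    d = torch.stack([diversity(lbls) for lbls in all_neighbor_labels])
    d = (d - d.mean()) / (d.std() + 1e-6)   # normalization over the batch of nodes
    return torch.sigmoid(-d)                # high diversity -> small gate

g = gates([torch.tensor([0, 0, 1]), torch.tensor([1, 1, 1]), torch.tensor([0, 1])])
```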
Given a multigraph \(\mathcal{G}\), we first compute the selection probability and then sample \(\hat{z}\) neighbors for each node. Then we can compute the attention score \(\alpha_{v,v^{\prime}}^{k}\) and aggregation degree \(g_{v}^{k}\). Finally, the representation for each node at the \(k\)-th layer can be obtained by utilizing an aggregator.

## IV Experiments

Based on three real-world financial datasets, we conduct the following experiments to show the advantages of ASA-GNN.

### _Datasets and Graph Construction_

#### IV-A1 Datasets

We conduct experiments on one private dataset and two public datasets to demonstrate that ASA-GNN achieves significant improvements over both classic and state-of-the-art models for transaction fraud detection tasks.

The private dataset, PR01, consists of about 5.13 million transactions from a financial company in China that took place during the second quarter of 2017. Transactions are labeled by professional investigators of a Chinese bank, with 1 representing fraudulent transactions and 0 representing legitimate ones. In data preprocessing, we first down-sample legitimate transactions to solve the class imbalance problem. Then, we apply one-hot coding and min-max normalization to handle the discrete and continuous values, respectively. Since CARE-GNN requires a lot of computing resources, we take the latest 10,000 transaction records as a small dataset (PR02) to facilitate the test.

The TC dataset1 contains 160,764 transaction records collected by Orange Finance Company, including 44,982 fraudulent transactions and 115,782 legitimate transactions. According to the trade time of these transaction records, the training and test sets are divided: transaction records of one week form the training set and transaction records of the next week form the test set. In this way, the TC dataset is split into TC12, TC23, and TC34. We perform the same data processing as for the PR01 and PR02 datasets.

Footnote 1: [https://challenge.datacasstle.cn/v3/](https://challenge.datacasstle.cn/v3/)

The XF dataset is a subset extracted from iFLYTEK2, which has 20,000 records. It contains five types of information, including basic data, media information, time, IP information, and device information. The XF dataset is balanced. Therefore, we only perform the same data processing as for PR01 and PR02 to handle the discrete and continuous values.

Footnote 2: [http://challenge.xfyun.cn/2019/gamedetail?ype=detail/mobileAD](http://challenge.xfyun.cn/2019/gamedetail?ype=detail/mobileAD)

#### IV-A2 Graph Construction

To construct the TG, the transactions are regarded as nodes. Then, we utilize some logic propositions to design the edges. Generally, fraudsters often exhibit two characteristics: device aggregation and temporal aggregation. Device aggregation means that fraudsters are often limited by financial and regulatory constraints and commit fraud on a small number of devices. This differs from legitimate transactions, where cardholders trade on different devices. Temporal aggregation means that fraudsters must complete the fraud activities as quickly as possible, since the banks and cardholders may otherwise discover their activities. Therefore, we construct a TG for the private dataset using two logic propositions as follows: \[p_{1}(a,b)=\left\{\begin{array}{ll}True&a.Trade\_ip=b.Trade\_ip\ \wedge\\ &|a.Trade\_time-b.Trade\_time|\\ &\leq 0.5h\\ False&\text{otherwise.}\end{array}\right. \tag{19}\]
\[p_{2}(a,b)=\left\{\begin{array}{ll}True&a.Trade\_mac=\\ &b.Trade\_mac\ \wedge\\ &|a.Trade\_time-b.Trade\_time|\\ &\leq 0.5h\\ False&\text{otherwise},\end{array}\right. \tag{20}\] where \(Trade\_ip\), \(Trade\_time\), and \(Trade\_mac\) are the Internet Protocol address, time, and Media Access Control address of the transactions, respectively.

### _Baselines_

To verify the effectiveness of ASA-GNN, general GNN models and state-of-the-art GNN-based fraud detectors are selected for comparison. The general GNN models include GCN [30], GraphSAGE [32], GAT [31], RGCN [33], and HAN [44]. The state-of-the-art GNN-based fraud detectors include CARE-GNN [9] and SSA [28].

* **GCN**[30]: The GCN method leverages spectral graph convolutions to extract feature information.
* **GraphSAGE**[32]: GraphSAGE obtains a representation for each node using an update function which includes a random sampling policy and a neighborhood aggregation process.
* **GAT**[31]: The GAT method uses graph attention layers to specify the importance of different neighbors.
* **CARE-GNN**[9]: It is a GNN method applied to fraud detection, which improves its aggregation with reinforcement learning to identify the behavior of fraudsters.
* **Similar-sample + attention SAGE (SSA)**[28]: The SSA method improves the performance of a model using a sampling strategy and an attention mechanism.
* **RGCN**[33]: RGCN models a relational GNN for link prediction and classification tasks.
* **HAN**[44]: HAN utilizes a hierarchical attention mechanism so that the contributions of different neighbors and meta-paths can be learned.

### _Parameter Settings_

In ASA-GNN, we set \(K=3\) as the number of layers, \((20,20,20)\) as the neighborhood sample sizes, \(32\) as the hidden size, \(0.001\) as the learning rate, \(Adam\) as the optimizer, and \(256\) as the batch size for our PR01 and PR02 datasets. We set \(K=3\) as the number of layers, \((30,50,50)\) as the neighborhood sample sizes, \(16\) as the hidden size, \(0.01\) as the learning rate, \(Adam\) as the optimizer, and \(128\) as the batch size for the XF, TC12, TC23, and TC34 datasets. For all baseline algorithms, their parameters are the same as those in the corresponding papers [9, 28, 30, 31, 32, 33, 44].

### _Evaluation Criteria_

To measure the performance, we choose \(Recall\), \(F_{1}\), and the Area Under the Curve of ROC (\(AUC\)) as criteria. \(Recall\) represents the ratio of the identified fraudulent transaction records to all fraudulent ones. \(F_{1}\) is a common evaluation criterion in binary classification problems [42]. \(AUC\) is usually computed to evaluate a model on an imbalanced dataset. \(Recall\) and \(F_{1}\) are calculated as follows: \[Recall=\frac{T_{P}}{T_{P}+F_{N}}, \tag{21}\] \[Precision=\frac{T_{P}}{T_{P}+F_{P}}, \tag{22}\] \[F_{1}=\frac{2\times Recall\times Precision}{Recall+Precision}, \tag{23}\] where \(T_{P}\), \(F_{N}\), and \(F_{P}\) are the numbers of true positive, false negative, and false positive transaction records, respectively. \(AUC\) is calculated as follows: \[AUC=\frac{\sum_{r\in\mathcal{R}^{+}}rank_{r}-\frac{|\mathcal{R}^{+}|\times(|\mathcal{R}^{+}|+1)}{2}}{|\mathcal{R}^{+}|\times|\mathcal{R}^{-}|}, \tag{24}\] where \(\mathcal{R}^{+}\) and \(\mathcal{R}^{-}\) are the fraudulent and legitimate class sets, and \(rank_{r}\) is the rank of \(r\) by the predicted score.
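For reference, these criteria correspond directly to standard library routines; a brief scikit-learn sketch with toy labels and scores of our own follows.

```python
from sklearn.metrics import recall_score, f1_score, roc_auc_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]              # 1 = fraudulent, 0 = legitimate
y_score = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3]
y_pred  = [int(s >= 0.5) for s in y_score]

recall = recall_score(y_true, y_pred)            # Eq. (21)
f1     = f1_score(y_true, y_pred)                # Eq. (23)
auc    = roc_auc_score(y_true, y_score)          # Eq. (24)
```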
### _Performance Comparison_

The performance of ASA-GNN and all baselines is presented in Table 3. The ROC curves of ASA-GNN and all baselines are shown in Fig. 8. We have the following observations and analysis results:

* The proposed ASA-GNN achieves significant improvements over all baselines on the PR01, PR02, XF, TC12, and TC34 datasets. In particular, ASA-GNN improves significantly, by 6.8% and 6.7% in terms of \(F_{1}\) and \(AUC\), on the TC12 dataset. The overall performance therefore demonstrates the superiority of the proposed ASA-GNN.
* GCN, GraphSAGE, GAT, RGCN, and HAN are traditional GNNs, none of which can identify the camouflage behavior of fraudsters. Thus, their performance is worse than that of ASA-GNN.
* GraphSAGE, CARE-GNN, and SSA are all graph algorithms based on node sampling. None of them performs better than ASA-GNN. The reason is that the proposed ASA-GNN filters nodes effectively and supplements the information of minority nodes, i.e., fraud information. In addition, ASA-GNN considers the camouflage behaviors of fraudsters. The performance of GraphSAGE is worse than that of SSA because noise information may be absorbed in the former's sampling process, which does not consider the importance of different nodes.
* CARE-GNN calculates the \(l_{1}\)-distance between nodes. However, it only focuses on the similarity of labels, and the \(l_{1}\)-distance loses its effectiveness in high-dimensional space. Although it tries its best to solve the camouflage issue of fraudsters, it still performs poorly.

## V Conclusion and Future Work

In this paper, a novel graph neural network named ASA-GNN is proposed to identify fraudulent transactions. ASA-GNN employs the neighbor sampling strategy to filter noisy nodes and make up for the lack of neighbors of fraudulent nodes. Consequently, it can make full use of attribute and topology information in TGs. Besides, ASA-GNN can address the camouflage issue of fraudsters and alleviate the over-smoothing phenomena, benefiting from our neighbor diversity metric. Extensive experiments on three financial datasets show that the proposed ASA-GNN achieves significant performance improvements over traditional and state-of-the-art methods. Therefore, ASA-GNN can better help banks and financial institutions detect fraudulent transactions and establish trust relationships with customers. Our plan includes designing an explainer for the detection model produced by ASA-GNN, since the lack of explanations may make customers distrust financial institutions [45]. Our TG is built based on expert rules, which can provide a feasible way to develop such an explainer. Studying the imbalance issues in transaction graphs and adding temporal modules (TCN/Transformer) to improve the ability to capture temporal features are also interesting directions.
2305.10447
The Effectiveness of a Dynamic Loss Function in Neural Network Based Automated Essay Scoring
Neural networks and in particular the attention mechanism have brought significant advances to the field of Automated Essay Scoring. Many of these systems use a regression-based model which may be prone to underfitting when the model only predicts the mean of the training data. In this paper, we present a dynamic loss function that creates an incentive for the model to predict with the correct distribution, as well as predicting the correct values. Our loss function achieves this goal without sacrificing any performance achieving a Quadratic Weighted Kappa score of 0.752 on the Automated Student Assessment Prize Automated Essay Scoring dataset.
Oscar Morris
2023-05-15T16:39:35Z
http://arxiv.org/abs/2305.10447v1
# The Effectiveness of a Dynamic Loss Function in Neural Network Based Automated Essay Scoring

###### Abstract

Neural networks and in particular the attention mechanism have brought significant advances to the field of Automated Essay Scoring. Many of these systems use a regression-based model which may be prone to underfitting when the model only predicts the mean of the training data. In this paper, we present a dynamic loss function that creates an incentive for the model to predict with the correct distribution, as well as predicting the correct values. Our loss function achieves this goal without sacrificing any performance, achieving a Quadratic Weighted Kappa score of 0.752 on the Automated Student Assessment Prize Automated Essay Scoring dataset.

## 1 Introduction

Automated Essay Scoring (AES) is the task of assigning a score to free-form text (throughout this paper, 'essay' is defined loosely to include short answers) using a computational system. The goal of AES is to mimic human scoring as closely as possible. The development of the Transformer in [1] has significantly improved the performance of Natural Language Processing (NLP) models, to a point where it is achievable to use a purely neural approach to AES [2, 3]. This has created the possibility of many task-agnostic architectures and pre-training approaches, which then allow for greater flexibility in the implementation of these models. It also makes the cutting-edge performance of these NLP models available for simple implementation in real-world situations.

Transformer models such as those in [4, 5, 6] and many more have the significant disadvantage that they require a very large training dataset and a long training time to achieve decent performance. This makes training these models from scratch almost unachievable for most tasks (which don't have very large datasets available). A good solution to this is pre-training, where the model is first trained by its creator on a task-agnostic dataset; if necessary, the model can then be fine-tuned to the downstream task that it will be applied to. After fine-tuning, these models can achieve the performance increase of a transformer model with minimal training time and a small training dataset.

All sequence-to-sequence transformer models use an encoder-decoder architecture, where the input sequence is fed into an encoder. The encoder then outputs a vector or matrix representation of the input sequence known as the 'hidden vector' or 'hidden matrix'. This can then be inputted into a decoder which converts this hidden data back into an output sequence. Since the encoder and decoder are independent networks, the decoder can be replaced with a classification or regression head, taking the hidden data as its input and outputting one or more neuron values. This significantly expands the tasks transformers can be applied to. The encoders and decoders do not have to use a transformer architecture; they can be replaced by older Recurrent Neural Networks (RNNs). Even though RNNs often perform worse than transformer models, on small datasets they often perform similarly to or better than transformer models (even if the transformer is pre-trained).

For the task of AES, there has been discussion on whether a classification approach is better than a regression approach [7].
It was found that a regression approach performs better on their dataset. However, if a pre-training approach is used for an AES system, classification has the significant disadvantage that the number of available marks cannot change between the pre-training step and the fine-tuning step, whereas with a regression approach the number of marks is irrelevant. This may significantly increase the size of the dataset that can be used, or even determine whether a pre-training approach is feasible at all.

The distribution of scores given to a set of essays is naturally very unbalanced, as the aim of human markers is often to obtain a normal distribution of the scores. This means that there are few answers that achieve the very worst scores and very few that achieve the highest scores. Unbalanced datasets can significantly reduce the performance of a classification model. They can also pose an issue for a regression model, in that it can be easier for the model to only predict the mean of the training data. This is a significant problem for AES, as it is very important that each sample is given as accurate a score as possible. The loss function proposed in this paper aims to solve this problem.

## 2 Dataset

The dataset used to train this model was introduced in the Kaggle Automated Student Assessment Prize (ASAP) in 2012.1 For the ASAP competition two datasets were introduced, one containing essays (approx. 150-650 words) and the other containing short answers (\(<\) 100 words). This model is trained only on the essay dataset. Table 1 shows the details of the dataset.

Footnote 1: [https://kaggle.com/c/asap-aes](https://kaggle.com/c/asap-aes)

The model was trained on each prompt individually. First, the dataset is shuffled, with 90% of the data used for training and the other 10% used for evaluation.

## 3 Model

### Architecture

The model used in this paper is a Long Short-Term Memory (LSTM) [8] encoder using an attention mechanism [1]. The regression head used for this model is a single fully connected layer, where the hidden data is inputted and a single value is outputted. To tokenise the input sequence, the same tokeniser used in the BERT model [5] is used. This choice was made because the BERT tokeniser tokenises on word-parts rather than whole words. This allows the model to be more confident in the meaning of words that are not in its original training data, which may occur when using technical language in an essay.

\begin{table} \begin{tabular}{|l|l|l|} \hline Prompt & No. of Samples & Score Range \\ \hline 1 & 1,783 & 2-12 \\ 2 & 1,800 & 1-6 \\ 3 & 1,726 & 0-3 \\ 4 & 1,772 & 0-3 \\ 5 & 1,805 & 0-4 \\ 6 & 1,800 & 0-4 \\ 7 & 1,569 & 0-30 \\ 8 & 723 & 0-60 \\ \hline \end{tabular} \end{table} Table 1: Details of ASAP AES dataset

Fig. 1 shows the flow of data through the model and Fig. 2 takes a closer look at the architecture of the encoder.2

Footnote 2: Images created with code2flow

### Metrics

In the ASAP competition, the metric used was the Quadratic Weighted Kappa (QWK), defined in [2]. The model was also evaluated on Mean Squared Error (MSE), defined in Eq. 1, Mean Absolute Error (MAE), defined in Eq. 2, and the coefficient of determination \(r^{2}\), defined in Eq. 3.
\[\text{MSE}(\mathbf{x},\mathbf{y})=\frac{1}{N}\sum_{i=1}^{N}(y_{i}-x_{i})^{2} \tag{1}\]

Figure 1: Model architecture

Figure 2: Encoder architecture

\[\text{MAE}(\mathbf{x},\mathbf{y})=\frac{1}{N}\sum_{i=1}^{N}|y_{i}-x_{i}| \tag{2}\]

\[r^{2}=1-\frac{\sum_{i=1}^{N}(y_{i}-x_{i})^{2}}{\sum_{i=1}^{N}(y_{i}-\bar{y})^{2}} \tag{3}\]

where \(\mathbf{x}\) are the true values, \(\mathbf{y}\) are the predicted values, \(N\) is the number of samples, and \(\bar{y}\) is the mean predicted value.

## 4 The Dynamic Loss Function

A loss function is the function that is minimized during the training of the network. The aim of a loss function is to determine the difference between what the model's outputs are and what they should be. A dynamic loss function is one that is changed throughout the training process. The main benefit of a dynamic loss function is the ability to adjust the goals of the model at different times during the training process.

A common problem with regression models is that the model tends towards predicting the mean of the training dataset with very little variation. This can occur when the dataset is unbalanced (as is often the case with real-world datasets), and in prior testing it did occur on the ASAP AES dataset. A solution to this problem is to adjust the loss function to provide an incentive to predict batches with the correct sample standard deviation. To do this, a loss function can be defined as the error in the standard deviation of a certain batch: \[\text{STDE}(\mathbf{x},\mathbf{y})=|\sigma(\mathbf{y})-\sigma(\mathbf{x})| \tag{4}\] where \(\sigma\) is the function calculating the sample standard deviation. Multiple loss functions can be combined using a weighted sum and some constant \(p\): \[L_{T}=pL_{1}+(1-p)L_{2} \tag{5}\] where \(L_{T}\) is the total loss and \(L_{1}\) and \(L_{2}\) are two loss functions. Therefore, using STDE as \(L_{1}\) and MSE as \(L_{2}\), a loss function is defined that provides an incentive to predict with the correct standard deviation and with minimal error.

Using a constant value of \(p\) did help to reduce this form of underfitting; however, the model still showed signs of it, and because of the reduced importance of the error metric, the model's performance was significantly reduced. In an attempt to solve both of these problems, the value of \(p\) can be decayed over time using an exponential decay function (shown in Eq. 6). When the decay was started as soon as training began, it was found that the model would still show signs of underfitting. This could easily be solved by holding \(p\) constant for the first portion of training and then decaying its value throughout the rest of the training process. The full definition of \(p\) as a function of the training step or epoch is shown in Eq. 7. \[p(t)=a\cdot\exp\left(-\frac{t}{T}\right) \tag{6}\] \[p(t)=\min\left(a,a\cdot\exp\left(-c\left(\frac{t}{T}-b\right)\right)\right) \tag{7}\] where \(t\) is the current training step or epoch, \(T\) is the total number of training steps or epochs, and \(a\), \(b\) and \(c\) are constants. Fig. 3 shows an example plot of \(p\) against the fraction of training complete. This method achieved the highest performance without sacrificing either metric.
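A minimal PyTorch sketch of the resulting dynamic loss, combining Eqs. (4)-(7), is given below; the constants mirror the example values in Fig. 3 (\(a=1\), \(b=0.15\), \(c=1.1\)), while the function names and toy batch are our own.

```python
import math
import torch

def p_schedule(t, T, a=1.0, b=0.15, c=1.1):
    """Weighting constant p(t), Eq. (7): held at `a`, then decayed exponentially."""
    return min(a, a * math.exp(-c * (t / T - b)))

def dynamic_loss(pred, target, t, T):
    """L_T = p * STDE + (1 - p) * MSE, Eqs. (4)-(5), over one batch."""
    p = p_schedule(t, T)
    stde = (pred.std() - target.std()).abs()   # Eq. (4), sample standard deviations
    mse = torch.mean((pred - target) ** 2)     # Eq. (1)
    return p * stde + (1 - p) * mse

loss = dynamic_loss(torch.randn(32), torch.randn(32), t=10, T=100)
```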
## 5 Results

The winner of the ASAP competition was the Enhanced AI Scoring Engine (EASE) [9]. EASE is a feature-extraction based system where the extracted features are used to build a regression model using either Support Vector Regression (SVR) or Bayesian Linear Ridge Regression (BLRR). We compare our model to both EASE systems and to two fully neural LSTM models, one using Mean over Time (MoT) and the other using an attention mechanism, as described in [2]. Unfortunately, Taghipour and Ng did not release detailed data for their attention model, only the mean QWK score it achieved across all prompts.

As can be seen in Table 2, our system outperforms both EASE models by a significant margin on most prompts and performs approximately equivalently to the models proposed by Taghipour and Ng. Our model also slightly improves on the average QWK achieved by both models proposed by Taghipour and Ng. This may seem surprising, as it may be assumed that since the error-based loss is 'less important' to the model, the model's performance would decrease. However, these results show this is not the case. The QWK scores of these models are all close to the agreement between human graders, with our approach being effectively equivalent.

The main goal of these experiments was to create a loss function that prevents underfitting on regression tasks. To show that this has been achieved, the model was trained twice on prompt 1, and the QWK and standard deviation of its predictions on the evaluation split were measured after every epoch. The results are shown in Figures 4 and 5.

\begin{table} \begin{tabular}{|l|c c c c c c c c|c|} \hline \multirow{2}{*}{System} & \multicolumn{8}{c|}{Prompt} & \\ \cline{2-10} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & Avg QWK \\ \hline Ours & **0.841** & 0.663 & 0.672 & 0.773 & 0.807 & 0.743 & **0.839** & **0.679** & **0.752** \\ EASE (SVR) & 0.781 & 0.621 & 0.630 & 0.749 & 0.782 & 0.771 & 0.727 & 0.534 & 0.699 \\ EASE (BLRR) & 0.761 & 0.606 & 0.621 & 0.742 & 0.784 & 0.775 & 0.730 & 0.617 & 0.705 \\ Taghipour and Ng MoT & 0.775 & **0.687** & **0.683** & **0.795** & **0.818** & **0.813** & 0.805 & 0.594 & 0.746 \\ Taghipour and Ng Attn. & & & & & & & & & 0.731 \\ \hline Human graders & 0.721 & 0.812 & 0.769 & 0.851 & 0.753 & 0.776 & 0.720 & 0.627 & 0.754 \\ \hline \end{tabular} \end{table} Table 2: Our model compared to EASE and Taghipour and Ng’s LSTM model with Mean over Time and Attention

Figure 3: Value of \(p\) against fraction of training complete using \(a=1\), \(b=0.15\) and \(c=1.1\)

These figures show that the performance of our loss function is a significant improvement over only using MSE, which causes severe underfitting, shown by the extremely low standard deviation compared to the actual standard deviation of the dataset. The QWK score of the model trained only using MSE loss is 0, which implies that any agreement between the model and the human grader is by chance.

## 6 Conclusion

The dynamic loss function proposed in this paper significantly improves upon other loss functions in reducing underfitting in regression tasks. Our loss function has also shown that it does not sacrifice the performance of the model; it even improves on the performance achieved by other approaches. Our model makes use of an attention mechanism that allows the model to weight the importance of different tokens in the input sequence. This eliminates the need for handcrafted features, which can be inaccurate and time-consuming to create. The large disadvantage of using neural networks for AES is the increased compute power required.
However, using an LSTM-based model instead of a transformer-based one significantly reduces this cost.
2306.12700
Accelerated Training via Incrementally Growing Neural Networks using Variance Transfer and Learning Rate Adaptation
We develop an approach to efficiently grow neural networks, within which parameterization and optimization strategies are designed by considering their effects on the training dynamics. Unlike existing growing methods, which follow simple replication heuristics or utilize auxiliary gradient-based local optimization, we craft a parameterization scheme which dynamically stabilizes weight, activation, and gradient scaling as the architecture evolves, and maintains the inference functionality of the network. To address the optimization difficulty resulting from imbalanced training effort distributed to subnetworks fading in at different growth phases, we propose a learning rate adaptation mechanism that rebalances the gradient contribution of these separate subcomponents. Experimental results show that our method achieves comparable or better accuracy than training large fixed-size models, while saving a substantial portion of the original computation budget for training. We demonstrate that these gains translate into real wall-clock training speedups.
Xin Yuan, Pedro Savarese, Michael Maire
2023-06-22T07:06:45Z
http://arxiv.org/abs/2306.12700v1
# Accelerated Training via Incrementally Growing Neural Networks using Variance Transfer and Learning Rate Adaptation

###### Abstract

We develop an approach to efficiently grow neural networks, within which parameterization and optimization strategies are designed by considering their effects on the training dynamics. Unlike existing growing methods, which follow simple replication heuristics or utilize auxiliary gradient-based local optimization, we craft a parameterization scheme which dynamically stabilizes weight, activation, and gradient scaling as the architecture evolves, and maintains the inference functionality of the network. To address the optimization difficulty resulting from imbalanced training effort distributed to subnetworks fading in at different growth phases, we propose a learning rate adaptation mechanism that rebalances the gradient contribution of these separate subcomponents. Experimental results show that our method achieves comparable or better accuracy than training large fixed-size models, while saving a substantial portion of the original computation budget for training. We demonstrate that these gains translate into real wall-clock training speedups.

## 1 Introduction

Modern neural network design typically follows a "larger is better" rule of thumb, with models consisting of millions of parameters achieving impressive generalization performance across many tasks, including image classification [23; 33; 31; 47], object detection [13; 27; 11], semantic segmentation [28; 3; 25] and machine translation [35; 7]. Within a class of network architecture, deeper or wider variants of a base model typically yield further improvements to accuracy. Residual networks (ResNets) [15] and wide residual networks [46] illustrate this trend in convolutional neural network (CNN) architectures. Dramatically scaling up network size into the billion-parameter regime has recently revolutionized transformer-based language modeling [35; 7; 1].

The size of these models imposes prohibitive training costs and motivates techniques that offer cheaper alternatives for selecting and deploying networks. For example, hyperparameter tuning is notoriously expensive, as it commonly relies on training the network multiple times; recent techniques aim to circumvent this by making hyperparameters transferable between models of different sizes, allowing them to be tuned on a small network prior to training the original model once [42]. Our approach incorporates these ideas, but extends the scope of transferability to include the parameters of the model itself.

Rather than view training small and large models as separate events, we grow a small model into a large one through many intermediate steps, each of which introduces additional parameters to the network. Our contribution is to do so in a manner that preserves the function computed by the model at each growth step (functional continuity) and offers stable training dynamics, while also saving compute by leveraging intermediate solutions. More specifically, we use partially trained subnetworks as scaffolding that accelerates training of newly added parameters, yielding greater overall efficiency than training a large static model from scratch.

Competing recent efforts to grow deep models from simple architectures [4; 24; 5; 26; 40; 38; 39; 45; 10] draw inspiration from other sources, such as the progressive development processes of biological brains. In particular, Net2Net [4] grows the network by randomly splitting learned neurons from previous phases.
This replication scheme, shown in Figure 1(a), is a common paradigm for most existing methods. Gradient-based methods [39; 40] determine which neurons to split and how to split them by solving a combinatorial optimization problem with auxiliary variables. At each growth step, naive random initialization of new weights destroys network functionality and may overwhelm any training progress. Weight rescaling with a static constant from a previous step is not guaranteed to remain appropriate as the network architecture evolves. Gradient-based methods outperform these simple heuristics but require additional training effort in their parameterization schemes. Furthermore, all existing methods use a global LR scheduler to govern weight updates, ignoring the discrepancy among subnetworks introduced in different growth phases. The gradient itself and other parameterization choices may influence the correct design for scaling weight updates.

We develop a growing framework around the principles of enforcing transferability of parameter settings from smaller to larger models (extending [42]), offering functional continuity, smoothing optimization dynamics, and rebalancing learning rates between older and newer subnetworks. Figure 1(b) illustrates key differences with prior work. Our core contributions are:

* **Parameterization using Variance Transfer:** We propose a parameterization scheme accounting for the variance transition among networks of smaller and larger width in a single training process. Initialization of new weights is gradient-free and requires neither additional memory nor training.
* **Improved Optimization with Learning Rate Adaptation:** Subnetworks trained for different lengths have distinct learning rate schedules, with dynamic relative scaling driven by weight norm statistics.
* **Better Performance and Broad Applicability:** Our method not only trains networks fast, but also yields excellent generalization accuracy, even outperforming the original fixed-size models. Flexibility in designing a network growth curve allows choosing different trade-offs between training resources and accuracy. Furthermore, adopting an adaptive batch size schedule provides acceleration in terms of wall-clock training time. We demonstrate results on image classification and machine translation tasks, across various network architectures.

Figure 1: Dynamic network growth strategies. Different from (a), which relies on either splitting [4; 26; 40] or adding neurons with auxiliary local optimization [39; 10], our initialization (b) of new neurons is random but function-preserving. Additionally, our separate learning rate scheduler governs weight updating to address the discrepancy in total accumulated training between different growth stages.

## 2 Related Work

**Network Growing.** A diverse range of techniques train models by progressively expanding the network architecture [37; 9; 5; 38; 45]. Within this space, the methods of [4; 26; 40; 39; 10] are most relevant to our focus: incrementally growing network width across multiple training stages. Net2Net [4] proposes a gradient-free neuron splitting scheme via replication, enabling knowledge transfer from previous training phases; initialization of new weights follows simple heuristics. [26]'s Splitting approach derives a gradient-based scheme for duplicating neurons by formulating a combinatorial optimization problem. FireFly [39] gains flexibility by also incorporating brand new neurons. Both methods improve Net2Net's initialization scheme by solving an optimization problem
with auxiliary variables, at the cost of extra training effort. GradMax [10], in consideration of training dynamics, performs initialization by solving a singular value decomposition (SVD) problem.

**Neural Architecture Search (NAS) and Pruning.** Another subset of methods mix growth with dynamic reconfiguration aimed at discovering or pruning task-optimized architectures. Network Morphism [37] searches for efficient networks by extending layers while preserving the parameters. Autogrow [38] takes an AutoML approach governed by heuristic growing and stopping policies. [45] combine learned pruning with a sampling strategy that dynamically increases or decreases network size. Unlike these methods, we focus on the mechanics of growth when the target architecture is known, addressing the question of how to best transition weight and optimizer state to continue training an incrementally larger model. NAS and pruning are orthogonal to, though potentially compatible with, the technical approach we develop.

**Hyperparameter Transfer.** [43; 30; 17] explore transferable hyperparameter (HP) tuning. The recent Tensor Program (TP) work of [41] and [42] focuses on zero-shot HP transfer across model scale and establishes a principled network parameterization scheme to facilitate HP transfer. This serves as an anchor for our strategy, though, as Section 3 details, modifications are required to account for dynamic growth.

**Learning Rate Adaptation.** Surprisingly, the existing spectrum of network growing techniques utilizes relatively standard learning rate schedules and does not address the potential discrepancy among subcomponents added at different phases. While general-purpose adaptive optimizers, _e.g._, AdaGrad [8], RMSProp [34], Adam [21], or AvaGrad [32], might ameliorate this issue, we choose to explicitly account for the discrepancy. As layer-adaptive learning rates (LARS) [12; 44] benefit in some contexts, we explore further learning rate adaptation specific to both layer and growth stage.

## 3 Method

### Parameterization and Optimization with Growing Dynamics

**Functionality Preservation.** We grow a network's capacity by expanding the width of computational units (_e.g.,_ hidden dimensions in linear layers, filters in convolutional layers). To illustrate our scheme, consider a 3-layer fully-connected network with ReLU activations \(\phi\) at a growing stage \(t\):

\[\mathbf{u}_{t}=\phi(\mathbf{W}_{t}^{x}\mathbf{x})\quad\quad\mathbf{h}_{t}=\phi(\mathbf{W}_{t}^{u}\mathbf{u}_{t})\quad\quad\mathbf{y}_{t}=\mathbf{W}_{t}^{h}\mathbf{h}_{t}\,, \tag{1}\]

where \(\mathbf{x}\in\mathbb{R}^{C^{x}}\) is the network input, \(\mathbf{y}_{t}\in\mathbb{R}^{C^{y}}\) is the output, and \(\mathbf{u}_{t}\in\mathbb{R}^{C_{t}^{u}},\mathbf{h}_{t}\in\mathbb{R}^{C_{t}^{h}}\) are the hidden activations. In this case, \(\mathbf{W}_{t}^{x}\) is a \(C_{t}^{u}\times C^{x}\) matrix, while \(\mathbf{W}_{t}^{u}\) is \(C_{t}^{h}\times C_{t}^{u}\) and \(\mathbf{W}_{t}^{h}\) is \(C^{y}\times C_{t}^{h}\). Our growing process operates by increasing the dimensionality of each hidden state, _i.e._, from \(C_{t}^{u}\) and \(C_{t}^{h}\) to \(C_{t+1}^{u}\) and \(C_{t+1}^{h}\), effectively expanding the size of the parameter tensors for the next growing stage \(t+1\). The layer parameter matrices \(\mathbf{W}_{t}\) have their shapes changed accordingly and become \(\mathbf{W}_{t+1}\).
Figure 2 illustrates the process for initializing \(\mathbf{W}_{t+1}\) from \(\mathbf{W}_{t}\) at a growing step.1

Footnote 1: We defer the transformation between \(\mathbf{W}_{t}\) and \(\mathbf{W}_{t}^{{}^{\prime}}\) to the next subsection. It involves rescaling by constant factors, does not affect network functionality, and is omitted in Eq. 1–4 for simplicity.

Following Figure 2(a), we first expand \(\mathbf{W}_{t}^{x}\) along the output dimension by adding two copies of new weights \(\mathbf{V}_{t}^{x}\) of shape \(\frac{C_{t+1}^{u}-C_{t}^{u}}{2}\times C^{x}\), generating new features \(\phi(\mathbf{V}_{t}^{x}\mathbf{x})\). The first set of activations becomes

\[\mathbf{u}_{t+1}=\operatorname{concat}\left(\mathbf{u}_{t},\phi(\mathbf{V}_{t}^{x}\mathbf{x}),\phi(\mathbf{V}_{t}^{x}\mathbf{x})\right)\,, \tag{2}\]

where \(\operatorname{concat}\) denotes the concatenation operation. Next, we expand \(\mathbf{W}_{t}^{u}\) across both input and output dimensions, as shown in Figure 2(b). We initialize new weights \(\mathbf{Z}_{t}^{u}\) of shape \(C_{t}^{h}\times\frac{C_{t+1}^{u}-C_{t}^{u}}{2}\) and add to \(\mathbf{W}_{t}^{u}\) two copies of it with different signs: \(+\mathbf{Z}_{t}^{u}\) and \(-\mathbf{Z}_{t}^{u}\). This preserves the output of the layer since \(\phi(\mathbf{W}_{t}^{u}\mathbf{u}_{t}+\mathbf{Z}_{t}^{u}\phi(\mathbf{V}_{t}^{x}\mathbf{x})+(-\mathbf{Z}_{t}^{u})\phi(\mathbf{V}_{t}^{x}\mathbf{x}))=\phi(\mathbf{W}_{t}^{u}\mathbf{u}_{t})=\mathbf{h}_{t}\). We then add two copies of new weights \(\mathbf{V}_{t}^{u}\), which has shape \(\frac{C_{t+1}^{h}-C_{t}^{h}}{2}\times C_{t+1}^{u}\), yielding activations

\[\mathbf{h}_{t+1}=\operatorname{concat}(\mathbf{h}_{t},\phi(\mathbf{V}_{t}^{u}\mathbf{u}_{t+1}),\phi(\mathbf{V}_{t}^{u}\mathbf{u}_{t+1}))\,. \tag{3}\]

We similarly expand \(\mathbf{W}_{t}^{h}\) with new weights \(\mathbf{Z}_{t}^{h}\) to match the dimension of \(\mathbf{h}_{t+1}\), as shown in Figure 2(c). The network's output after the growing step is:

\[\mathbf{y}_{t+1}=\mathbf{W}_{t}^{h}\mathbf{h}_{t}+\mathbf{Z}_{t}^{h}\phi(\mathbf{V}_{t}^{u}\mathbf{u}_{t+1})+(-\mathbf{Z}_{t}^{h})\phi(\mathbf{V}_{t}^{u}\mathbf{u}_{t+1})=\mathbf{W}_{t}^{h}\mathbf{h}_{t}=\mathbf{y}_{t}\,, \tag{4}\]

which preserves the original output features in Eq. 1. The Appendix provides illustrations for more layers.

**Weight Initialization with Variance Transfer (VT).** [42] investigate weight scaling with width at initialization, allowing hyperparameter transfer by calibrating variance across model size. They modify the variance of output layer weights from the commonly used \(\frac{1}{\text{fan}_{in}}\) to \(\frac{1}{\text{fan}_{in}^{2}}\). We adopt this same correction for the weights added at the new width: \(\mathbf{W}^{h}\) and \(\mathbf{Z}^{h}\) are initialized with variances of \(\frac{1}{(C_{t}^{h})^{2}}\) and \(\frac{1}{(C_{t+1}^{h})^{2}}\). However, this correction considers training differently-sized models separately, which fails to accommodate the training dynamics in which width grows incrementally. Assuming that the weights of the old subnetwork follow \(\mathbf{W}_{t}^{h}\sim\mathcal{N}(0,\frac{1}{(C_{t}^{h})^{2}})\) (which holds at initialization), we make them compatible with the new weight tensor parameterization by rescaling them with the \(\text{fan}_{in}\) ratio as \(\mathbf{W}_{t}^{h^{\prime}}=\mathbf{W}_{t}^{h}\cdot\frac{C_{t}^{h}}{C_{t+1}^{h}}\). See Table 1 (top). The Appendix provides detailed analysis.
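A minimal NumPy sketch of one growth step for the three-layer example (Eqs. 1–4) is shown below; the widths are illustrative assumptions, and the small symmetry-breaking noise noted in Figure 2 is omitted so that the functional check is exact.

```
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

Cx, Cu, Ch, Cy = 8, 4, 4, 3          # illustrative widths at stage t
Cu2, Ch2 = 6, 6                      # widths at stage t+1

Wx = rng.normal(size=(Cu, Cx)); Wu = rng.normal(size=(Ch, Cu))
Wh = rng.normal(size=(Cy, Ch)); x = rng.normal(size=Cx)
y_t = Wh @ relu(Wu @ relu(Wx @ x))   # output before growing (Eq. 1)

# Grow W^x: append two copies of the new rows V^x (Eq. 2).
Vx = rng.normal(size=((Cu2 - Cu) // 2, Cx))
Wx2 = np.vstack([Wx, Vx, Vx])

# Grow W^u: new input columns come in +/- pairs (Z^u, -Z^u), which cancel;
# new output rows V^u produce the extra hidden units (Eq. 3).
Zu = rng.normal(size=(Ch, (Cu2 - Cu) // 2))
Vu = rng.normal(size=((Ch2 - Ch) // 2, Cu2))
Wu2 = np.vstack([np.hstack([Wu, Zu, -Zu]), Vu, Vu])

# Grow W^h: the +/- columns Z^h cancel the new hidden units (Eq. 4).
Zh = rng.normal(size=(Cy, (Ch2 - Ch) // 2))
Wh2 = np.hstack([Wh, Zh, -Zh])

y_t1 = Wh2 @ relu(Wu2 @ relu(Wx2 @ x))
assert np.allclose(y_t, y_t1)        # the function is preserved exactly
```

The \(\pm\mathbf{Z}\) pairs are what make the step exact; the noise added in practice perturbs this equality only slightly.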
The variance transfer rule above also carries over to modern CNNs with batch normalization (BN). Given a weight scaling ratio of \(c\), the running mean \(\mu\) and variance \(\sigma\) of BN layers are modified as \(c\mu\) and \(c^{2}\sigma\), respectively.

**Stage-wise Learning Rate Adaptation (LRA).** Following [42], we employ a learning rate scaling factor of \(\propto\frac{1}{\text{fan}_{in}}\) on the output layer when using SGD, compensating for the initialization scheme. However, subnetworks from different growth stages still share a global learning rate, though they have been trained for different lengths. This may cause divergent behavior among the corresponding weights, making the training iterations after growing sensitive to the scale of the newly-initialized weights. Instead of adjusting newly added parameters via local optimization [39; 10], we govern the update of each subnetwork in a stage-wise manner. Let \(\mathcal{W}_{t}\) denote the parameter variables of a layer at a growth stage \(t\), where we let \(\mathbf{W}_{t}\) and \(\mathbf{W}_{t}^{\prime}\) correspond to the same set of variables such that \(\mathcal{W}_{t+1}\setminus\mathcal{W}_{t}\) denotes the new parameter variables whose values are initialized with \(\mathbf{Z}_{t}\) and \(\mathbf{V}_{t}\). Moreover, let \(\mathbf{W}_{\Delta k}\) and \(\mathbf{G}_{\Delta k}\) denote the values and gradients of \(\mathcal{W}_{k}\setminus\mathcal{W}_{k-1}\). We adapt the learning rate used to update each sub-weight \(\mathbf{W}_{\Delta k}\), for \(0\leq k\leq t\), as follows:

\[\eta_{k}=\eta_{0}\cdot\frac{f(\mathbf{W}_{\Delta k})}{f(\mathbf{W}_{\Delta 0})}\quad\quad\mathbf{W}_{\Delta k}\leftarrow\mathbf{W}_{\Delta k}-\eta_{k}\mathbf{G}_{\Delta k}\,, \tag{5}\]

where \(\eta_{0}\) is the base learning rate, \(f\) is a function that maps subnetworks of different stages to corresponding train-time statistics, and \(\mathbf{W}_{\Delta 0}\) are the layer's parameter variables at the first growth stage. Table 1 (bottom) summarizes our LR adaptation rule for SGD when \(f\) is instantiated as the weight norm, providing a stage-wise extension to the layer-wise adaptation method LARS [12], i.e., \(LR\propto\|\mathbf{W}\|\). Alternative heuristics are possible; see the Appendix.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
 & & Input Layer & Hidden Layer & Output Layer \\
\hline
\multirow{2}{*}{Init.} & Old Re-scaling & 1 & \(\sqrt{C_{t}^{u}/C_{t+1}^{u}}\) & \(C_{t}^{h}/C_{t+1}^{h}\) \\
 & New Init. & \(1/C_{t}^{x}\) & \(1/C_{t+1}^{u}\) & \(1/(C_{t+1}^{h})^{2}\) \\
\hline
\multirow{2}{*}{Adapt.} & 0-th Stage & 1 & 1 & \(1/C_{0}\) \\
 & \(t\)-th Stage & \(\frac{\|\mathbf{W}_{\Delta k}\|}{\|\mathbf{W}_{\Delta 0}\|}\) & \(\frac{\|\mathbf{W}_{\Delta k}\|}{\|\mathbf{W}_{\Delta 0}\|}\) & \(\frac{\|\mathbf{W}_{\Delta k}\|}{\|\mathbf{W}_{\Delta 0}\|}\) \\
\hline \hline
\end{tabular}
\end{table}

Table 1: Parameterization and optimization transition for different layers during growing. \(C_{t}\) and \(C_{t+1}\) denote the input dimension before and after a growth step.

Figure 2: Initialization scheme. In practice, we also add noise to the expanded parameter sets for symmetry breaking.
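A minimal sketch of the stage-wise rule in Eq. 5 follows, with \(f\) instantiated as the weight norm as in Table 1 (bottom). Representing each stage's parameters as row slices of a layer's weight matrix is an illustrative assumption about one possible bookkeeping scheme.

```
import numpy as np

def stage_lrs(W, stage_slices, eta0):
    # Eq. 5: eta_k = eta0 * f(W_dk) / f(W_d0), with f the Frobenius norm.
    norm0 = np.linalg.norm(W[stage_slices[0]])
    return [eta0 * np.linalg.norm(W[s]) / norm0 for s in stage_slices]

def sgd_step(W, G, stage_slices, eta0):
    # Update the weights introduced at each growth stage with their own LR.
    for s, eta_k in zip(stage_slices, stage_lrs(W, stage_slices, eta0)):
        W[s] -= eta_k * G[s]

# Example: a width-32 layer grown in three stages (rows 0-15, 16-23, 24-31).
W = np.random.randn(32, 16)
G = np.random.randn(32, 16)
sgd_step(W, G, [slice(0, 16), slice(16, 24), slice(24, 32)], eta0=0.1)
```

With \(f\) as the weight norm, newly added slices whose norms differ from the stage-0 weights automatically receive proportionally rescaled updates, without any auxiliary local optimization.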
### Flexible and Efficient Growth Scheduler

We train the model for \(T_{total}\) epochs by expanding the channel number of each layer to \(C_{final}\) across \(N\) growth phases. Existing methods [26, 39] fail to derive a systematic way of distributing training resources across a growth trajectory. Toward maximizing efficiency, we experiment with a coupling between model size and training epoch allocation.

**Architectural Scheduler.** We denote the initial channel width as \(C_{0}\) and expand exponentially:

\[C_{t}=\begin{cases}C_{t-1}+\lfloor p_{c}C_{t-1}\rceil_{2}&\text{if}\quad t<N-1\\ C_{final}&\text{if}\quad t=N-1\end{cases} \tag{6}\]

where \(\lfloor\cdot\rceil_{2}\) rounds to the nearest even number and \(p_{c}\) is the growth rate between stages.

**Epoch Scheduler.** We denote the number of epochs assigned to the \(t\)-th training stage as \(T_{t}\), with \(\sum_{t=0}^{N-1}T_{t}=T_{total}\). We similarly adapt \(T_{t}\) via an exponential growing scheduler:

\[T_{t}=\begin{cases}T_{t-1}+\lfloor p_{t}T_{t-1}\rceil&\text{if}\quad t<N-1\\ T_{total}-\sum_{i=0}^{N-2}T_{i}&\text{if}\quad t=N-1\end{cases} \tag{7}\]

**Wall-clock Speedup via Batch Size Adaptation.** Though the smaller architectures in early growth stages require fewer FLOPs, hardware capabilities may still restrict practical gains. When growing width, in order to ensure that small models fully utilize the benefits of GPU parallelism, we adapt the batch size along with the exponentially-growing architecture, in reverse order:

\[B_{t-1}=\begin{cases}B_{base}&\text{if}\quad t=N\\ B_{t}+\lfloor p_{b}B_{t}\rceil&\text{if}\quad t<N\end{cases} \tag{8}\]

where \(B_{base}\) is the batch size of the large baseline model. Algorithm 1 summarizes our full method.

```
Input: Data X, labels Y, task loss L
Output: Grown model W
Initialize: W_0 with C_0, T_0, B_0, eta_0
for t = 0 to N-1 do
    if t > 0 then
        Init. W_t from W_{t-1} using VT in Table 1.
        Update C_t and T_t using Eq. 6 and Eq. 7.
        Update B_t using Eq. 8 (optional).
    endif
    Iter_total = T_t * len(X) // B_t
    for iter = 1 to Iter_total do
        Forward and calculate l = L(W_t(x), y).
        Back-propagate with l.
        Update each sub-component using Eq. 5.
    endfor
endfor
return W_{N-1}
```

**Algorithm 1**: Growing using Var. Transfer and Learning Rate Adapt. with Flexible Scheduler
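The three schedules (Eqs. 6–8) can be sketched as below. The helper names are illustrative, the nearest-even rounding \(\lfloor\cdot\rceil_{2}\) is approximated with Python's `round`, and the value of \(p_{b}\) is an arbitrary placeholder; the sketch follows the stated recurrences rather than reproducing the exact epoch lists used in the experiments.

```
def width_schedule(C0, C_final, N, pc=0.2):
    # Eq. 6: exponential width growth, rounded to the nearest even number,
    # with the last stage snapped to the target width C_final.
    C = [C0]
    for _ in range(N - 2):
        C.append(C[-1] + 2 * round(pc * C[-1] / 2))
    return C + [C_final]

def epoch_schedule(T0, T_total, N, pt=0.2):
    # Eq. 7: exponentially longer stages; the final stage absorbs the remainder.
    T = [T0]
    for _ in range(N - 2):
        T.append(T[-1] + round(pt * T[-1]))
    return T + [T_total - sum(T)]

def batch_schedule(B_base, N, pb=0.5):
    # Eq. 8: defined in reverse, so early (narrow) stages get larger batches.
    B = [B_base]
    for _ in range(N - 1):
        B.append(B[-1] + round(pb * B[-1]))
    return B[::-1]
```

With \(C_{0}\) set to a quarter of the target width and \(p_{c}=0.2\), this yields 9-stage trajectories in the style of those used in the experiments below.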
## 4 Experiments

We evaluate on image classification and machine translation tasks. For image classification, we use CIFAR-10 [22], CIFAR-100 [22] and ImageNet [6]. For neural machine translation, we use the IWSLT'14 dataset [2] and report the BLEU [29] score on German-to-English (De-En) translation.

**Large Baselines via Fixed-size Training.** We use VGG-11 [33] with BatchNorm [20], ResNet-20 [16], and MobileNetV1 [18] for CIFAR-10, and VGG-19 with BatchNorm, ResNet-18, and MobileNetV1 for CIFAR-100. We follow [19] for data augmentation and processing, adopting random shifts/mirroring and channel-wise normalization. CIFAR-10 and CIFAR-100 models are trained for 160 and 200 epochs respectively, with a batch size of 128 and an initial learning rate (LR) of 0.1 using SGD. We adopt a cosine LR schedule and set the weight decay and momentum to \(5\)e-\(4\) and \(0.9\). For ImageNet, we train the baseline ResNet-50 and MobileNetV1 [18] using SGD with batch sizes of 256 and 512, respectively. We adopt the same data augmentation scheme as [14], the cosine LR scheduler with initial LR of 0.1, weight decay of 1e-\(4\) and momentum of \(0.9\). For IWSLT'14, we train an Encoder-Decoder Transformer (6 attention blocks each) [35]. We set the width as \(d_{model}=\frac{1}{4}d_{ffn}=512\), the number of heads \(n_{head}=8\) and \(d_{k}=d_{q}=d_{v}=d_{model}/n_{head}=64\). We train the model using Adam for 20 epochs with learning rate 1e-\(3\) and \((\beta_{1},\beta_{2})=(0.9,0.98)\). The batch size is 1500 and we use 4000 warm-up iterations.

**Implementation Details.** We compare with the growing methods Net2Net [4], Splitting [26], FireFly-split, FireFly [39] and GradMax [10]. In our method, the noise for symmetry breaking is 0.001 relative to the norm of the initialization. We re-initialize the momentum buffer at each growing step when using SGD, while preserving it for adaptive optimizers (e.g., Adam, AvaGrad). For image classification, we run the comparison methods except GradMax alongside our algorithm for all architectures under the same growing scheduler. For the architecture scheduler, we set \(p_{c}\) as 0.2 and \(C_{0}\) as 1/4 of the large baselines in Eq. 6 for all layers and grow the seed architecture within \(N=9\) stages towards the large ones. For the epoch scheduler, we set \(p_{t}\) as \(0.2\), and \(T_{0}\) as \(8\), \(10\), and \(4\) in Eq. 7 on CIFAR-10, CIFAR-100, and ImageNet respectively. The total training epochs \(T_{total}\) are the same as for the respective large fixed-size models. We train the models and report results averaged over 3 random seeds. For machine translation, we grow the encoder and decoder layers' widths while fixing the embedding layer dimension for a consistent positional encoding table. The total number of growing stages is 4, each trained for 5 epochs. The initial width is 1/8 of the large baseline (i.e., \(d_{model}=64\) and \(d_{k,q,v}=8\)). We set the growing ratio \(p_{c}\) as 1.0 so that \(d_{model}\) evolves as 64, 128, 256 and 512. We train all models on an NVIDIA 2080Ti 12GB GPU for CIFAR-10, CIFAR-100, and IWSLT'14, and on two NVIDIA A40 48GB GPUs for ImageNet.

### CIFAR Results

All models grow from a small seed architecture to the full-sized one in 9 stages, each trained for \(\{8,9,11,13,16,19,23,28,33\}\) epochs (\(160\) total) on CIFAR-10, and \(\{10,12,14,17,20,24,29,35,39\}\) (\(200\) total) on CIFAR-100. Net2Net follows the design of growing by splitting via simple neuron replication, hence achieving the same training efficiency as our gradient-free method under the same growing schedule. Splitting and FireFly require additional training effort for their neuron selection schemes and allocate extra GPU memory for auxiliary variables during the local optimization stage. This is computationally expensive, especially when growing a large model.

**ResNet-20, VGG-11, and MobileNetV1 on CIFAR-10.** Table 2 shows results in terms of test accuracy and training cost calculated based on overall FLOPs. For ResNet-20, Splitting and FireFly achieve better test accuracy than Net2Net, which suggests the additional optimization benefits neuron selection at the cost of training efficiency. Our method requires only \(54.9\%\) of the baseline training cost and outperforms all competing methods, while yielding only a \(0.09\) p.p. (percentage points) performance degradation compared to the static baseline. Moreover, our method even outperforms the large fixed-size VGG-11 by \(0.20\) p.p. test accuracy, while taking only \(52.91\%\) of its training cost.
\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
\multirow{2}{*}{Method} & \multicolumn{2}{c}{ResNet-20} & \multicolumn{2}{c}{VGG-11} & \multicolumn{2}{c}{MobileNetV1} \\
\cline{2-7}
 & Train Cost\((\%)\) & Test Accuracy\((\%)\) & Train Cost\((\%)\) & Test Accuracy\((\%)\) & Train Cost\((\%)\) & Test Accuracy\((\%)\) \\
\hline
Large Baseline & \(100\) & \(92.62\pm 0.15\) & \(100\) & \(92.14\pm 0.22\) & \(100\) & \(92.27\pm 0.11\) \\
Net2Net & \(\mathbf{54.90}\) & \(91.60\pm 0.21\) & \(\mathbf{52.91}\) & \(91.78\pm 0.27\) & \(\mathbf{53.80}\) & \(90.34\pm 0.20\) \\
Splitting & \(70.69\) & \(91.80\pm 0.10\) & \(63.76\) & \(91.88\pm 0.15\) & \(65.92\) & \(91.50\pm 0.06\) \\
FireFly-split & \(58.47\) & \(91.78\pm 0.11\) & \(56.18\) & \(91.91\pm 0.15\) & \(56.37\) & \(91.56\pm 0.06\) \\
FireFly & \(68.96\) & \(92.10\pm 0.13\) & \(60.24\) & \(92.08\pm 0.16\) & \(62.12\) & \(91.69\pm 0.07\) \\
Ours & \(\mathbf{54.90}\) & \(\mathbf{92.53\pm 0.11}\) & \(\mathbf{52.91}\) & \(\mathbf{92.34\pm 0.15}\) & \(\mathbf{53.80}\) & \(\mathbf{92.01\pm 0.10}\) \\
\hline \hline
\end{tabular}
\end{table}

Table 2: Growing ResNet-20, VGG-11, and MobileNetV1 on CIFAR-10.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
\multirow{2}{*}{Method} & \multicolumn{2}{c}{ResNet-18} & \multicolumn{2}{c}{VGG-19} & \multicolumn{2}{c}{MobileNetV1} \\
\cline{2-7}
 & Train Cost\((\%)\) & Test Accuracy\((\%)\) & Train Cost\((\%)\) & Test Accuracy\((\%)\) & Train Cost\((\%)\) & Test Accuracy\((\%)\) \\
\hline
Large Baseline & \(100\) & \(78.36\pm 0.12\) & \(100\) & \(72.59\pm 0.23\) & \(100\) & \(72.13\pm 0.13\) \\
Net2Net & \(\mathbf{52.63}\) & \(76.48\pm 0.20\) & \(\mathbf{52.08}\) & \(71.88\pm 0.24\) & \(\mathbf{52.90}\) & \(70.01\pm 0.20\) \\
Splitting & \(68.01\) & \(77.01\pm 0.12\) & \(60.12\) & \(71.96\pm 0.12\) & \(58.39\) & \(70.45\pm 0.10\) \\
FireFly-split & \(56.11\) & \(77.22\pm 0.11\) & \(54.64\) & \(72.19\pm 0.14\) & \(54.36\) & \(70.69\pm 0.11\) \\
FireFly & \(65.77\) & \(77.25\pm 0.12\) & \(57.48\) & \(72.79\pm 0.13\) & \(56.49\) & \(70.99\pm 0.10\) \\
Ours & \(\mathbf{52.63}\) & \(\mathbf{78.12\pm 0.15}\) & \(\mathbf{52.08}\) & \(\mathbf{73.26\pm 0.14}\) & \(\mathbf{52.90}\) & \(\mathbf{71.53\pm 0.13}\) \\
\hline \hline
\end{tabular}
\end{table}

Table 3: Growing ResNet-18, VGG-19, and MobileNetV1 on CIFAR-100.
For MobileNetV1, our method also achieves the best trade-off between training efficiency and test accuracy among all competitors.

**ResNet-18, VGG-19, and MobileNetV1 on CIFAR-100.** We also evaluate all methods on CIFAR-100 using different network architectures. Results in Table 3 show that FireFly consistently achieves better test accuracy than FireFly-split, suggesting that adding new neurons provides more flexibility for exploration than merely splitting. Both FireFly and our method achieve better performance than the original VGG-19, suggesting that network growing might have an additional regularizing effect. Our method yields the best accuracy and the largest training cost reduction.

### ImageNet Results

We first grow ResNet-50 on ImageNet and compare the performance of our method to Net2Net and FireFly under the same epoch schedule: \(\{4,4,5,6,8,9,11,14,29\}\) (\(90\) total) with 9 growing phases. We also grow MobileNetV1 from a small seed architecture, which is more challenging than ResNet-50. We train Net2Net and our method using the same scheduler as for ResNet-50. We also compare with FireFly-Opt (a variant of FireFly) and GradMax, and report their best results from [10]. Note that both methods not only adopt additional local optimization but also train with a less efficient growing scheduler: the final full-sized architecture needs to be trained for a \(75\%\) fraction of the schedule, while ours only requires \(32.2\%\). Table 4 shows that our method dominates all competing approaches.

### IWSLT14 De-En Results

We grow a Transformer from \(d_{model}=64\) to \(d_{model}=512\) within 4 stages, each trained with 5 epochs using Adam. Applying gradient-based growing methods to the Transformer architecture is non-trivial due to their domain-specific design of local optimization. As such, we only compare with Net2Net. We also design variants of our method for self-comparison, based on the adaptation rules for Adam in the Appendix. As shown in Table 5, our method generalizes well to the Transformer architecture.

### Analysis

**Ablation Study.** We show the effects of turning on/off each of our modifications to the baseline optimization process of Net2Net: (1) Growing: adding neurons with functionality preservation. (2) Growing+VT: applies only variance transfer. (3) Growing+RA: applies only LR adaptation. (4) Full method. We conduct experiments using both ResNet-20 on CIFAR-10 and ResNet-18 on CIFAR-100. As shown in Table 6, the different variants of our growing method not only outperform Net2Net consistently but also reduce the randomness (std. over 3 runs) caused by random replication. We also see that both RA and VT boost the baseline growing method; the two components are designed and woven together to accomplish the empirical leap. Our full method achieves the best test accuracy.

**Justification for Variance Transfer.** We train a simple neural network with 4 convolutional layers on CIFAR-10. The network consists of 4 resolution-preserving convolutional layers; each convolution has 64, 128, 256 and 512 channels, a \(3\times 3\) kernel, and is followed by BatchNorm and ReLU activations. Max-pooling is applied after each layer for resolution downsampling by 4, 2, 2, and 2. These CNN layers are then followed by a linear layer for classification. We first train four variants of this network, given by combinations of training epochs \(\in\{13(1\times),30(2.3\times)\}\) and initialization methods \(\in\) {standard, \(\mu\)transfer [42]}.
We also grow from a thin architecture within 3 stages, where the channel number of each layer starts at only 1/4 of the original one, _i.e.,_ \(\{16,32,64,128\}\rightarrow\{32,64,128,256\}\rightarrow\{64,128,256,512\}\); each stage is trained for 10 epochs. For network growing, we compare the baselines with standard initialization and variance transfer. We train all baselines using SGD, with weight decay set to 0 and learning rates sweeping over \(\{0.01,0.02,0.05,0.1,0.2,0.5,0.8,1.0,1.2,1.5,2.0\}\). In Figure 3(b), growing with variance transfer (blue) achieves overall better test accuracy than standard initialization (orange). Large baselines with merely \(\mu\)transfer at initialization consistently underperform standard ones, which validates that the compensation from the LR re-scaling in [42] is necessary. We also observe, in both Figures 3(a) and 3(b), that all growing baselines outperform fixed-size ones under the same training cost, demonstrating positive regularization effects. We also show the effect of our initialization scheme by comparing test performance of a standard ResNet-20 on CIFAR-10. As shown in Figure 4(a), compared with standard initialization, our variance transfer not only achieves better final test accuracy but also appears more stable. See the Appendix for a fully-connected network example.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
 & \multicolumn{2}{c}{ResNet-50} & \multicolumn{2}{c}{MobileNetV1} \\
\cline{2-5}
Method & Train Cost\((\%)\) & Test Acc.\((\%)\) & Train Cost\((\%)\) & Test Acc.\((\%)\) \\
\hline
Large & \(100\) & \(76.72\) & \(100\) & \(70.80\) \\
Net2Net & \(60.12\) & \(74.89\) & \(63.72\) & \(66.19\) \\
FireFly & – & \(74.0\) & \(66.60\) & \(66.40\) \\
GradMax & – & – & – & – \\
Ours & \(\mathbf{60.12}\) & \(\mathbf{75.90\pm 0.14}\) & \(\mathbf{63.72}\) & \(\mathbf{69.91\pm 0.16}\) \\
\hline \hline
\end{tabular}
\end{table}

Table 4: ResNet-50 and MobileNetV1 on ImageNet.

\begin{table}
\begin{tabular}{l c c}
\hline \hline
 & \multicolumn{2}{c}{Transformer} \\
\cline{2-3}
Method & Train Cost\((\%)\) & BLEU\(\uparrow\) \\
\hline
Large & \(100\) & \(32.82\pm 0.21\) \\
Net2Net & \(64.64\) & \(30.97\pm 0.35\) \\
Ours-w/o buffer & \(64.64\) & \(31.44\pm 0.18\) \\
Ours & \(64.64\) & \(32.01\pm 0.16\) \\
\hline \hline
\end{tabular}
\end{table}

Table 5: Transformer on IWSLT'14.

**Justification for Learning Rate Adaptation.** We investigate the value of our proposed stage-wise learning rate adaptation as an optimizer for growing networks. As shown by the red curve in Figure 3, rate adaptation not only achieves the best train loss and test accuracy among all baselines, but also appears to be more robust over different learning rates.
In Figure 4(a), rate adaptation further improves the final test accuracy of ResNet-20 on CIFAR-10 under the same initialization scheme. Figures 4(b) and 4(c) visualize the gradients of different sub-components for the 17th convolutional layer of ResNet-20 during the last two growing phases, under standard SGD and under rate adaptation, respectively. Our rate adaptation mechanism rebalances the subcomponents' gradient contributions so that they exhibit lower divergence than under a global LR, even though the components are added at different stages and trained for different durations. In Figure 5, we observe that the LR for the newly added Subnet-8 (red) in the last stage starts at around \(1.8\times\) the base LR, then quickly adapts to a smoother level. This demonstrates that our method is able to automatically adapt the updates applied to new weights, without any additional local optimization costs [40; 10]. All of the above show that our method has a positive effect on stabilizing training dynamics, which is lost if one attempts to train different subcomponents using a global LR scheduler. The Appendix provides more analysis.

**Flexible Growing Scheduler.** Our growing scheduler gains the flexibility to explore the best trade-offs between training budget and test performance in a unified configuration scheme (Eq. 6 and Eq. 7). We compare the exponential epoch scheduler (\(p_{t}\in\{0.2,0.25,0.3,0.35\}\)) to a linear one (\(p_{t}=0\)) for ResNet-20 growing on CIFAR-10, denoted as the 'Exp.' and 'Linear' baselines in Figure 6. Both baselines use architectural schedulers with \(p_{c}\in\{0.2,0.25,0.3,0.35\}\), each generating trade-offs between train cost and test accuracy by varying \(T_{0}\). The exponential scheduler yields better overall trade-offs than the linear one with the same \(p_{c}\). In addition to different growing schedulers, we also plot a baseline for fixed-size training with different model sizes. Growing methods with both schedulers consistently outperform the fixed-size baselines, demonstrating that the regularization effect of network growth benefits generalization performance.

**Wall-clock Training Speedup.** We benchmark GPU memory consumption and wall-clock training time for each stage of training on CIFAR-100, on a single NVIDIA 2080Ti GPU. The large baseline ResNet-18 trains for 140 minutes to achieve \(78.36\%\) accuracy. As shown by the green bar in Figure 7(b), the growing method achieves only marginal wall-clock acceleration under the same fixed batch size: the growing ResNet-18 takes 120 minutes to achieve \(78.12\%\) accuracy. The low GPU utilization shown by the green bar in Figure 7(a) prevents the FLOPs savings from translating into real-world training acceleration. In contrast, the red bars of Figure 7 show that our batch size adaptation yields a large wall-clock acceleration by filling the GPU memory, and the corresponding parallel execution resources, while maintaining test accuracy: ResNet-18 trains in \(84\) minutes (a \(1.7\times\) speedup) and achieves \(78.01\%\) accuracy.

## 5 Conclusion

We tackle a set of optimization challenges in network growing and introduce a corresponding set of techniques, including initialization with functionality preservation, variance transfer, and learning rate adaptation, to address them. Each of these techniques can be viewed as 'upgrading' an original part of the pipeline for training static networks into a corresponding one that accounts for dynamic growing. There is a one-to-one mapping of these replacements and a guiding principle governing the formulation of each replacement.
Together, they accelerate training without impairing model accuracy, a result that uniquely separates our approach from competitors. Applications to widely-used architectures on image classification and machine translation tasks demonstrate that our method bests the accuracy of competitors while saving considerable training cost.
2304.10020
A Survey on Deep Neural Network Partition over Cloud, Edge and End Devices
Deep neural network (DNN) partition is a research problem that involves splitting a DNN into multiple parts and offloading them to specific locations. Because of the recent advancement in multi-access edge computing and edge intelligence, DNN partition has been considered as a powerful tool for improving DNN inference performance when the computing resources of edge and end devices are limited and the remote transmission of data from these devices to clouds is costly. This paper provides a comprehensive survey on the recent advances and challenges in DNN partition approaches over the cloud, edge, and end devices based on a detailed literature collection. We review how DNN partition works in various application scenarios, and provide a unified mathematical model of the DNN partition problem. We developed a five-dimensional classification framework for DNN partition approaches, consisting of deployment locations, partition granularity, partition constraints, optimization objectives, and optimization algorithms. Each existing DNN partition approach can be perfectly defined in this framework by instantiating each dimension with specific values. In addition, we suggest a set of metrics for comparing and evaluating the DNN partition approaches. Based on this, we identify and discuss research challenges that have not yet been investigated or fully addressed. We hope that this work helps DNN partition researchers by highlighting significant future research directions in this domain.
Di Xu, Xiang He, Tonghua Su, Zhongjie Wang
2023-04-20T00:17:27Z
http://arxiv.org/abs/2304.10020v1
# A Survey on Deep Neural Network Partition over Cloud, Edge and End Devices

###### Abstract

"Deep neural network (DNN) partition" is a research problem that involves splitting a DNN into multiple parts and offloading them to specific locations. Because of the recent advancement in multi-access edge computing and edge intelligence, DNN partition has been considered as a powerful tool for improving DNN inference performance when the computing resources of edge and end devices are limited and the remote transmission of data from these devices to clouds is costly. This paper provides a comprehensive survey on the recent advances and challenges in DNN partition approaches over the cloud, edge, and end devices based on a detailed literature collection. We review how DNN partition works in various application scenarios, and provide a unified mathematical model of the DNN partition problem. We developed a five-dimensional classification framework for DNN partition approaches, consisting of deployment locations, partition granularity, partition constraints, optimization objectives, and optimization algorithms. Each existing DNN partition approach can be perfectly defined in this framework by instantiating each dimension with specific values. In addition, we suggest a set of metrics for comparing and evaluating the DNN partition approaches. Based on this, we identify and discuss research challenges that have not yet been investigated or fully addressed. We hope that this work helps DNN partition researchers by highlighting significant future research directions in this domain.

Keywords: Survey, Deep Neural Network, DNN Partition, Classification Framework, Edge Computing, Cloud Computing

## 1 Introduction

### Background

Deep neural networks (DNNs) have achieved considerable success in various machine-learning applications in recent years. The DNN model is a specific type of artificial neural network with multiple layers of feature extraction [1]. It has consistently achieved state-of-the-art performance on various tasks, such as computer vision, natural language processing, intelligent personal assistance services, augmented reality, smart homes, and smart cities [14, 25]. Because of the rapid spread of Internet of Things (IoT) devices (e.g., wearable sensors) that are integrated into all aspects of people's lives, researchers are aiming to study more complex DNNs with high accuracy [17]. However, as DNNs become deeper or more complex [21], they require higher processing capabilities to achieve acceptable latency for training and inference, including requirements on energy, memory, processors, and network.

In recent years, meeting the requirements of DNN inference with limited hardware resources has become a challenge. Because of the limitation of hardware resources and the demands of application capabilities, the common assumption has been that end devices cannot perform a large amount of computation with reasonable latency and energy consumption. Thus, cloud computing has emerged as a solution to this problem, because it provides practically unlimited computing and storage resources to end-users based on their demands, anywhere and at any time (Kumar et al., 2019). However, cloud computing causes high latency and requires high transmission bandwidth. In addition, the cloud is usually unreliable (Lin et al., 2017). Edge computing has been proposed to compensate for these drawbacks.
In the emerging edge computing paradigm (Shi et al., 2016), edge nodes are usually closer to the sensors than the remote cloud, resulting in the advantage of low transmission delay and the disadvantage of limited resources. The advantages of cloud computing and edge computing have led to their use in DNN-driven applications and have piqued the interest of researchers. DNNs run on the cloud because of the lack of processing capacity of end devices; this requires data transmission to the cloud through a wireless network, imposing significant computational pressure on the data center. In addition, running DNNs on the cloud may result in high latency and requires high transmission bandwidth. In edge computing, the energy efficiency and accuracy of DNN-driven applications are limited because of resource constraints. Therefore, DNN partition was proposed in recent years to split the DNN into several parts and offload them to specified deployment locations.

DNN partition has made progress in various cognitive services (Ding et al., 2020a). For example, DNN partition has been widely applied in wearable cameras used for recognizing objects and understanding the surrounding environment, because it can overcome the limitations of mobile devices and the unsatisfactory response times of these cameras. In smart healthcare and disease detection, minimizing response latency and ensuring user experience are extremely important (Zeng et al., 2020). DNN partition has shown unprecedented ability in processing human-centric content, such as learning abstract representations and extracting high-level features; moreover, it addresses the limited resources of edge devices while protecting patients' privacy when offloading data to the cloud. Therefore, studying DNN partition is useful.

DNN partition approaches over cloud, edge, and end devices have been investigated in many studies. However, a survey on the overall framework of DNN partition approaches is lacking. In this paper, we systematically review the typical partition approaches to facilitate researchers' understanding.

### Contributions and Paper Organization

In this study, we first acquire a comprehensive collection of literature on DNN partition approaches using major search engines and digital libraries. Then, we systematically review the DNN partition approaches over cloud, edge, and end devices. Subsequently, we introduce the application scenarios and a general definition of DNN partition. The main contributions of this study are listed as follows:

* We summarize the technical contributions of related studies and describe a five-dimensional classification framework for DNN partition approaches.
* We propose metrics for evaluating and comparing different DNN partition approaches.
* We highlight and discuss some challenges and present potential future research directions.

This paper is organized as follows. Section 3 introduces application scenarios for DNN partition and the general mathematical definition of the DNN partition problem. The implementation technologies for offloading DNN partition models are also introduced. Section 4 presents the classification framework, which consists of five factors for describing DNN partition approaches. Section 5 describes the refinement of the metrics of DNN partition models and the analysis and comparison of the typical partition approaches based on these metrics. Section 6 provides a discussion on future challenges and opportunities. Section 7 concludes this paper.
## 2 Paper Collection Methodology

As a general framework, we followed the guidelines described by Kitchenham and Charters (2007) to plan and conduct our survey. We divided the collection of papers into four phases.

* **Phase 1**: We collected papers by using typical search engines and digital libraries, including Google Scholar, IEEE Xplore, ACM Digital Library, SpringerLink, DBLP, and arXiv.
* **Phase 2**: We conducted exact keyword searches on these search engines and digital libraries to collect papers related to DNN partition approaches over cloud, edge, and end devices. This resulted in more than 1800 papers. We used the following search terms: deep learning, DNN(s), deep neural network, partition, splitting, split, offloading, deployment, uploading, edge computing, MEC, cloud, device, collaborative, joint(ly), resource-efficient, energy-efficient and delay.
* **Phase 3**: We created the search string based on the aforementioned search terms by splitting the keywords into three buckets. Each bucket was represented as an "OR" relation of keywords, whereas the complete search string was an "AND" relation between the three buckets. We considered 133 of the papers found by applying the following search string: ("deep learning" \(\vee\) "DNN" \(\vee\) "deep neural network") \(\wedge\) ("edge computing" \(\vee\) "MEC" \(\vee\) "cloud") \(\wedge\) ("partition" \(\vee\) "split" \(\vee\) "offloading" \(\vee\) "joint" \(\vee\) "uploading" \(\vee\) "deployment" \(\vee\) "collaborative")
* **Phase 4**: After careful analysis of the three buckets, we filtered for quality by using exclusion criteria. This required a manual analysis of large parts of each publication. We performed one level of snowballing and analyzed the references and research cited in each included paper. We applied the inclusion and exclusion criteria, ensuring that essential papers missed by our selection of search engines and terms were found. Finally, we considered 60 out of the 133 collected papers for this survey. Only papers published before December 2021 were considered.

## 3 Problem Definition

DNN partition involves splitting a DNN at the granularity of layers or at a finer granularity. Parts of the DNN are then offloaded to cloud, edge, and end devices to improve DNN inference performance. This section introduces the scenarios of DNN-driven applications to highlight the necessity of DNN partitioning. Then, implementation technologies are introduced based on the application scenarios. Finally, the mathematical definition of DNN partition is provided.

### Application Scenarios

Owing to the rapid advancement of wireless communication technology (e.g., 5G, edge computing, and cloud), the number of IoT devices has increased dramatically, resulting in a massive amount of data. To fully utilize this data, deep learning has been widely adopted in many scenarios, such as smart cities, smart homes, and virtual/augmented reality (VR/AR), as shown in Fig. 1. We introduce some typical DNN-driven applications in these scenarios below.

Smart cities have become a part of people's daily lives. For example, as shown in Fig. 1(a), a smart home camera runs convolutional neural network (CNN)-based face recognition (Gunes and Piccardi, 2007) to provide real-time inspection and warnings to protect the home. Furthermore, a fall detection system (Hsu et al., 2017) generates an alert message when an object falls in the smart home. DNNs are also widely used in smart traffic.
For example, in Fig. 1(b), an edge video convergence node is connected to the local surveillance cameras, providing AI capabilities to various stock cameras with different capabilities. Industrial parks deploy AI and digital analysis capabilities on edge and local devices to achieve real-time intelligent industrial control. In Fig. 1(c), a DNN achieves real-time processing and analysis of data and objects with characteristic values by deploying target recognition and mining surveillance capabilities, meeting real-time monitoring requirements. VR and AR are new technologies applied in various fields (Schmoll et al., 2018). For example, typical multiplayer games are designed to run in the cloud with all the gaming clients connected to it, as shown in Fig. 1(d).

Figure 1: Application scenarios

These IoT application scenarios impose stringent performance requirements on DNNs. For example, the response delay may last for a few seconds when running a DNN locally because of the limited computing capacity. This delay may result in a poor user experience or a completely unusable service. Therefore, improving DNN performance using DNN partition technologies is critical.

### Implementation Technologies

Recent research has introduced prototypes of DNN partition approaches. Most DNN partition approaches are evaluated in simulation environments; therefore, it is essential to illustrate how to deploy and run DNN partition over cloud, edge, and end devices. Here, we introduce how DNN partition technologies are implemented. Each part of the DNN model is regarded as a microservice, and container technology is adopted (Kum et al., 2019). The DNN partition algorithm runs on a master edge server at the container level (Zhou et al., 2019), and microservices holding parts of the model are generated and packaged as containers. Multiple microservices run an entire DNN model in containers across the end devices, edge devices, and cloud servers. This enables continuous delivery and deployment in large and complex services through API calls between microservices (Balalaie et al., 2016; Satyanarayanan, 2017). In addition, tools such as Kubernetes, an open-source infrastructure for automated deployment and management of containerized applications (Sayfan, 2017; Bernstein, 2014), are employed to manage the containers at runtime. The overall framework is shown in Fig. 2.

Figure 2: Implementation technologies and runtime environment

### Mathematical Models

This subsection details the definition of DNN partition over cloud, edge, and end devices and presents the mathematical model of this problem. The output of a DNN partition model depends on the characteristics of the DNN, the size of the input data, the memory footprint, battery, energy consumption, deployment locations, network bandwidth, and the number of deployed devices. Therefore, we formulate an ordinary DNN partition model based on these factors.

**Definition 1** (Communication).: The communication among the cloud, edge, and end devices can be modeled by a graph \(G_{1}\cup G_{2}\cup G_{3}=(D,E_{d,e})\cup(D,E_{d,c})\cup(M,E_{e,c})\), where

* \(D=\{1,2,\ldots,|D|\}\) denotes the set of end devices, and \(M=\{1,2,\ldots,|M|\}\) denotes the set of edges.
* \(E_{d,e}=\{1,2,\ldots,|E_{d,e}|\}\) denotes the set of physical links connecting edges to end devices, \(E_{d,c}=\{1,2,\ldots,|E_{d,c}|\}\) denotes the set of physical links connecting the cloud to end devices, and \(E_{e,c}=\{1,2,\ldots,|E_{e,c}|\}\) denotes the set of physical links connecting the cloud to edges; \(B_{n,n}^{w}\) denotes the bandwidth of the wireless link between devices.
* \(S=\{S_{1},S_{2},\ldots,S_{N}\}\) defines the set of input data sizes of the DNNs, where the number of DNNs is denoted by \(N\).

**Definition 2** (Partition strategy).: A DNN partition strategy can be defined as the set of several DNN partition results, denoted by \(O=\{O_{1},O_{2},\ldots,O_{l}\}\). The type-\(n\) DNN partition result is denoted by \(O_{n}=\{O_{n,1},O_{n,2},\ldots,O_{n,l}\}\), where \(O_{n,l}\) denotes a partition point.

**Definition 3** (Performance).: The type-\(n\) indicators are defined as \(P^{n}=\{P^{n}_{1},P^{n}_{2},\ldots,P^{n}_{\alpha}\}\), which denote the accuracy of DNN inference, the delay of DNN inference, energy consumption, and others. Furthermore, \(p^{r,a}_{n}=\{p^{r,a}_{n,0},p^{r,a}_{n,1},\ldots,p^{r,a}_{n,k}\}\), \(p^{l,a}_{n}=\{p^{l,a}_{n,0},p^{l,a}_{n,1},\ldots,p^{l,a}_{n,k}\}\) and \(p^{f,a}_{n}=\{p^{f,a}_{n,0},p^{f,a}_{n,1},\ldots,p^{f,a}_{n,k}\}\) denote the sets of type-\(a\) per-layer performance for type-\(n\) DNN inference on the cloud, end device, and edge device, respectively, and \(p^{t,a}_{n}=\{p^{t,a}_{n,0},p^{t,a}_{n,1},\ldots,p^{t,a}_{n,k}\}\) denotes the set of type-\(a\) performance for DNN intermediate data transmission. Then, \(\forall a\in\{1,\ldots,\alpha\}\), the type-\(a\) performance of a type-\(n\) DNN is denoted as follows:

\[P^{n}_{a}=\bigcup_{i=1}^{k}[p^{r,a}_{n,i}\lor p^{f,a}_{n,i}\lor p^{l,a}_{n,i}\lor p^{t,a}_{n,i}] \tag{1}\]

In addition, we define an indicator function as follows:

\[\sigma(i,u)=\begin{cases}1,&i\in u\\ 0,&i\notin u\end{cases} \tag{2}\]

where \(\sigma(i,u)\) indicates whether or not layer \(i\) of the DNN is deployed on device \(u\).

**Modeling**. The objective of partitioning and offloading over cloud, edge, and end devices is to improve DNN inference performance. The partition model of the total performance to be optimized (also the optimization target) on a distributed system can be formulated as follows:

\[\begin{split}P^{a}_{total}=\max/\min&\sum_{n=1}^{N}\sum_{i=0}^{k_{n}}\Big\{p^{r,a}_{n,i}\sigma(i,r)+p^{f,a}_{n,i}\sigma(i,f)+p^{l,a}_{n,i}\sigma(i,l)\\ &+p^{t,a}_{n,i}\Big[1-\sum_{j\in\{r,f,l\}}\sigma(i,j)\,\sigma(i+1,j)\Big]\Big\}.\end{split} \tag{3}\]

s.t.

\[\sigma(i,r)+\sigma(i,f)+\sigma(i,l)=1,\quad\forall n,\,\forall i\leq k_{n} \tag{4}\]

\[\forall b\neq a,\,b\in\{1,\ldots,\alpha\}:\;\sum_{n=1}^{N}\sum_{i=0}^{k_{n}}\Big\{p^{r,b}_{n,i}\sigma(i,r)+p^{f,b}_{n,i}\sigma(i,f)+p^{l,b}_{n,i}\sigma(i,l)+p^{t,b}_{n,i}\Big[1-\sum_{j\in\{r,f,l\}}\sigma(i,j)\,\sigma(i+1,j)\Big]\Big\}\leq\hat{C}_{b} \tag{5}\]

where \(\hat{C}_{b}\) indicates the constraint boundary of performance \(b\); the transmission term contributes whenever consecutive layers are placed on different locations. Note that there can be multiple optimization objectives, because Eq. 3 comprises multiple performance indicators.
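To make the model concrete, the following sketch evaluates the objective of Eq. 3 for a single chain DNN and one latency-type indicator, given a placement of each layer on the cloud ('r'), an edge ('f'), or the end device ('l'); all per-layer cost values are illustrative assumptions.

```
def total_cost(place, p_r, p_f, p_l, p_t):
    """Eq. 3 for one DNN: per-layer cost at the assigned location, plus a
    transmission cost p_t[i] whenever layers i and i+1 sit on different
    locations (Eq. 4 holds because each layer gets exactly one place)."""
    cost = {'r': p_r, 'f': p_f, 'l': p_l}
    total = 0.0
    for i, loc in enumerate(place):
        total += cost[loc][i]
        if i + 1 < len(place) and place[i + 1] != loc:
            total += p_t[i]  # intermediate data crosses a physical link
    return total

# Example: run the first two layers on the end device, the rest on an edge.
p_l = [5.0, 4.0, 9.0, 9.0]   # per-layer latency on the end device
p_f = [2.0, 1.5, 3.0, 3.0]   # per-layer latency on the edge
p_r = [1.0, 0.8, 1.5, 1.5]   # per-layer latency on the cloud
p_t = [0.5, 2.0, 1.0]        # transmission latency between adjacent layers
print(total_cost(['l', 'l', 'f', 'f'], p_r, p_f, p_l, p_t))  # 17.0
```

Searching over placements subject to Eqs. 4–5 then amounts to optimizing this quantity, which is what the optimization algorithms surveyed in Section 4 automate.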
Then we seek to understand how the DNN model is divided into partitions; this depends on the partition granularity: layers, sub-layers, or input data. Therefore, granularity is one of the main factors influencing the DNN partition strategy. Furthermore, once the DNN partition problem is formulated, we decide the optimal partitions, which are obtained by optimization algorithms comprising machine learning, dynamic programming, and others. Finally, the parts of the DNN are offloaded to the deployment locations, which are usually provided in advance. Based on these five factors, the systematic classification framework of DNN partition is shown in Fig. 3. In addition, the DNN partition approaches employed in recent studies are classified in this framework, as shown in Table 1.

### Dimension 1: Deployment Locations of DNN Partition

We first describe DNN partition's deployment locations over cloud, edge, and end devices. The structure of deployment locations after partitioning has six categories: distributed computing across the cloud + end devices, edges + end devices, cloud + edges, cloud + edges + end devices, multiple end devices, and multiple edges. In this study, edge nodes include edge servers, base stations, and others. The devices that obtain the data are regarded as end devices.

Figure 3: Classification framework

#### 4.2.1 Cloud + End Devices

The DNN partition approach is widely used for collaboration between end devices and the cloud. In such an approach, some parts of the DNN are inferred locally, and others are offloaded to the cloud. Today, research has achieved collaborative inference technology between the cloud and single, multiple, or mobile end devices. Primarily, Neurosurgeon [Kang et al., 2017] can orchestrate the distribution of computation between mobile end devices and the cloud. Approximate model scheduling (MCDNN) [Han et al., 2016] and the joint optimization of DNN partition and scheduling [Duan and Wu, 2021] are presented for multiple DNN inferences on a single cloud and multiple user devices. JointDNN [Eshratifar et al., 2019] and AppealNet [Li et al., 2021b] are also methods to deploy the parts of a DNN on the cloud and end devices.
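To make the cloud + end devices case concrete, the following minimal Python sketch picks a split point for a chain DNN in the spirit of Neurosurgeon-style layer partition: it scans every candidate cut and keeps the one with the lowest estimated end-to-end latency, which is a two-device instantiation of the placement model in Section 3.3. All per-layer latencies, output sizes, and the bandwidth value are hypothetical profiling inputs, not figures from any cited system.

```python
def best_split_point(local_lat, cloud_lat, out_size, bandwidth):
    """Pick the split point k for a chain DNN: layers [0, k) run on the end
    device, layers [k, n) run on the cloud. out_size[k] is the amount of data
    that must be uplinked when cutting before layer k (out_size[0] is the raw
    input; out_size[n] = 0 for a fully local run)."""
    n = len(local_lat)
    best_k, best_t = 0, float("inf")
    for k in range(n + 1):  # k = 0: all cloud; k = n: all local
        t = (sum(local_lat[:k])            # on-device compute
             + out_size[k] / bandwidth     # one transmission at the cut
             + sum(cloud_lat[k:]))         # cloud compute
        if t < best_t:
            best_k, best_t = k, t
    return best_k, best_t

# Hypothetical 5-layer profile: latencies in ms, sizes in KB, bandwidth in KB/ms.
local = [12.0, 30.0, 28.0, 9.0, 4.0]
cloud = [1.0, 2.5, 2.3, 0.8, 0.4]
sizes = [600.0, 150.0, 40.0, 40.0, 8.0, 0.0]
print(best_split_point(local, cloud, sizes, bandwidth=25.0))  # -> (1, 24.0)
```

Re-running this scan whenever the measured bandwidth changes is the simplest way to keep the split point current at runtime.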
Table 1: Classification of recent DNN partition approaches within the five-dimensional framework (deployment locations, partition granularity, partition constraints, optimization objectives, and optimization algorithms).

#### 4.2.2 Edges + End Devices

Similar to cloud and end devices, researchers have focused on the collaboration between the edge and end devices. Initially, the DNN was only partitioned between one end device and one edge server [Ali et al., 2019, Li et al., 2018a]. However, computation offloading must be jointly handled with resource allocation among users in the more general multi-user case. Thus, joint multi-user DNN partitioning based on multilevel offloading has been proposed [Tang et al., 2020, Li et al., 2021a]. Furthermore, IONN [Jeong et al., 2018] and EPDNN [Shin et al., 2019] consider the mobility of clients or the edge server for a single DNN. In addition, DNN partition approaches operate between several edge servers and mobile end devices [Tian et al., 2021, Ren et al., 2020, Mohammed et al., 2020, Jeong et al., 2020].

#### 4.2.3 Cloud + Edges

To meet the users' demands for fast response, long duration, and high accuracy, cloud-edge collaborative computing has been proposed. The end device does not require the capability to compute because it only sends task requests and data to edge servers [Ding et al., 2020, Kum et al., 2019, Ko et al., 2018, Fang et al., 2019, Hu et al., 2019, Gao et al., 2021]. Furthermore, some researchers have considered the structure of the DNN, such as chain-DNN and DAG-DNN, when they split the DNN into parts on edge devices or the cloud [Fang et al., 2019, Hu et al., 2019].
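Whether the DNN is a chain or a general DAG matters for partitioning because every edge whose endpoints land on different machines adds one network transfer, and the transmission terms of Eq. (3) are charged exactly on those edges. A minimal sketch of that bookkeeping, with a hypothetical placement map:

```python
def cut_edges(dnn_edges, placement):
    """Return the DNN edges (i -> j) whose output must cross the network,
    given a layer -> device placement; these edges contribute the
    transmission terms in Eq. (3)."""
    return [(i, j) for i, j in dnn_edges if placement[i] != placement[j]]

# Hypothetical DAG-DNN with a branch: 0 -> 1 -> 3 and 0 -> 2 -> 3.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
placement = {0: "edge", 1: "edge", 2: "cloud", 3: "cloud"}
print(cut_edges(edges, placement))  # [(0, 2), (1, 3)]: two transfers, not one
```

A chain-DNN cut always produces exactly one crossing edge, whereas cutting a DAG-DNN can produce several, which is why the DAG structure has to be considered explicitly.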
#### 4.2.4 Cloud + Edges + End Devices

The DNN partition over cloud, edge, and end devices has been an important research topic in recent years because it considers not only the characteristics of the end device and edge, but also the properties of the cloud [Ashouri et al., 2020, Lockhart et al., 2020, Huang et al., 2019, Teerapittayanon et al., 2017]. In DDNN [Teerapittayanon et al., 2017], one part of the DNN runs on a single device, and the intermediate DNN output is sent to the cloud. Similarly, a collaborative framework has been presented that connects the mobile web to edge and remote cloud servers [Huang et al., 2020]. Furthermore, one DNN partition [Hu et al., 2021] has presented a pipeline execution model for the mobility of devices and multiple DNNs.

#### 4.2.5 Multiple End Devices

The synergy of multiple end devices has also been considered. MeDNN [Mao et al., 2017] is a local distributed mobile computing system with enhanced partitioning and deployment, which is tailored for large DNNs. The parallel processing of DNN inference across multiple heterogeneous devices is still being explored [Zhou et al., 2019].

#### 4.2.6 Multiple Edges

In addition, the deployment locations include multiple edges. CoEdge [Zeng et al., 2020] orchestrates a single DNN inference over heterogeneous edge devices. Furthermore, multiple-DNN partition inference in MEC, with DNN training on the cloud, has been analyzed to minimize the End-to-End (E2E) delay [He et al., 2020].

### Dimension 2: Partition Granularity

This subsection describes how to partition DNN models. One typical method is to partition the DNN into layers or sub-layers; another is to tune the DNN model.

#### 4.3.1 DNN Partition

The DNN is split into parts at the granularity of layers, input data, or sub-layers.

**Layer Partition**. In general, the DNN is split into two parts. Neurosurgeon [Kang et al., 2017], Edgent [Li et al., 2018a], JALAD [Li et al., 2018b], and PriPro [Gao et al., 2021] are typical partition methods that split a DNN into two parts. Some DNN partition approaches [Ko et al., 2018, Hu et al., 2019, Li et al., 2021a] are similar in that the output of the partition is only two parts. In addition, the DNN can be subdivided into more than two parts at the granularity of layers. JointDNN [Eshratifar et al., 2019] is a method that allows computation on either platform for each layer independently of the other layers; this may allow more than one partition point across the mobile device and cloud. Compared with the general solutions, the approach in [Ren et al., 2021] is more flexible with regard to the fine-grained DNN computation partitioning mechanism. The number of DNNs also influences DNN partition. The joint optimization of multiple DNN partition and scheduling for the mobile cloud [Duan and Wu, 2021] splits each DNN into two parts at the granularity of each DNN layer and with different partition points. Moreover, to address the multiple DNN partition problem, a DNN partition strategy with layer partition operations is also considered efficient [Chen et al., 2021, Hu et al., 2021].

**Data Partition**. DNN partition can also be employed at the granularity of the input data. The input data are split and processed on multiple devices simultaneously at runtime. For instance, CoEdge [Zeng et al., 2020] divides the input data and reserves the model parameters for the given DNN model.
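As a concrete illustration of data partition, the sketch below splits the input of a one-dimensional convolution into overlapping chunks so that several workers can each compute a slice of the output independently, and concatenating the slices reproduces the full result. The (k - 1)-sample halo attached to each chunk is the essential idea; the sizes and worker count are hypothetical, and CoEdge-style systems additionally balance chunk sizes against device capabilities.

```python
import numpy as np

def conv1d_valid(x, w):
    """Valid cross-correlation with stride 1; output length len(x) - len(w) + 1."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def partitioned_conv(x, w, n_workers):
    """Give each worker a contiguous slice of the output plus the (k - 1)-sample
    input halo it needs; concatenating the partial outputs equals the full conv."""
    k = len(w)
    out_len = len(x) - k + 1
    bounds = np.linspace(0, out_len, n_workers + 1, dtype=int)  # output ranges
    parts = [conv1d_valid(x[lo:hi + k - 1], w)  # each call runs on one worker
             for lo, hi in zip(bounds[:-1], bounds[1:])]
    return np.concatenate(parts)

x, w = np.random.rand(1000), np.random.rand(5)
assert np.allclose(partitioned_conv(x, w, 4), conv1d_valid(x, w))
```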
AppealNet [Li et al., 2021b] is a unique method that jointly uses edge devices and the cloud according to the complexity of the input data; a light DNN is deployed on the devices, and a heavy DNN is uploaded to the cloud.

**Sub-layer Partition**. In addition to the aforementioned DNN partitions, DNN partition at a finer granularity has also been proposed. An adaptive DNN partition algorithm has been presented at the granularity of branches in each layer [Miao et al., 2020]. Mohammed et al. [Mohammed et al., 2020] proposed a fine-grained adaptive partitioning method to split a DNN into pieces that can be smaller than a single layer. All parallel paths in the DNN are considered, depending on whether the layer type is convolutional or fully connected. Similarly, there is a DNN partition that slices the original CNN layer stacks into independently distributed execution units, each occupying a small memory footprint [Zhao et al., 2018]. Multi-granularity DNN partition is also considered because a partition model that combines two or more granularities can improve DNN performance. For example, Yang et al. [Yang et al., 2021] leveraged data and layer partition by dividing a DNN model into several blocks and processing each block differently. Furthermore, this method splits the input data of each layer to divide the computation in a block into independent tasks performed by different edge devices.

### Dimension 3: Partition Constraints

The DNN partition is an optimization problem that must respect limited resources and user requirements. Therefore, the partition constraints are among the most indispensable factors influencing the DNN partition model. Generally, the DNN partition relates to deployment resources, such as the deployment location, the limitations of devices and edge servers, network bandwidth, and the model's properties. Naturally, the constraints also contain user requirements; for example, the inference accuracy cannot be lower than the user's requirement. We introduce the constraints from two aspects in the following section.

#### 4.4.1 DNN Tuning

One method that deserves mention is DNN fine-tuning, which is usually used together with DNN partition. The purpose of tuning a DNN is to overcome two challenges. One is decreasing the redundancy of DNN parameters while staying within the required accuracy, for example by pruning; a lightweight DNN can optimize resource utilization. The other is decreasing the dependence between DNN layers, for example by layer fusion: when the output of one layer is the input of other layers, large quantities of data result in transmission delay. Therefore, tuning the DNN is the most common method to improve the performance of DNN-driven applications. Fine-tuning a DNN is achieved by tuning its internal parameters or structure to meet the requirements of applications. For example, fine-tuning methods include parameter binarization, matrix factorization, pruning, compression, and others. The main tuning methods in recent papers are described in Table 2.

#### 4.4.2 Resource Constraints

The aim in most studies is to enable DNN computation on resource-constrained mobile devices by partitioning DNNs horizontally or vertically into different sub-networks.
The limited memory of end devices and communication costs are the primary constraints.

\begin{table}
\begin{tabular}{l|l}
\hline \hline
**Reference** & **Tuning Methods** \\
\hline
BranchyNet [Teerapittayanon et al., 2017] & Parameters Binarization \\
\hline
MCDNN [Han et al., 2016] & Matrix Factorization, Pruning, Architectural Change \\
\hline
DeepAdapter [Huang et al., 2020] & Pruning \\
\hline
MeDNN [Mao et al., 2017] & Pruning, Quantization Compression \\
\hline
AppealNet [Li et al., 2021b] & Architectural Change \\
\hline
CSMEC [Ding et al., 2020a] & Parameter Sharing \\
\hline
Edgent [Li et al., 2018a] & DNN Right-sizing \\
\hline
JALAD [Li et al., 2018b] & Quantization Compression \\
\hline
PADCS [Hu et al., 2021] & Quantization Compression \\
\hline
Edge-host Partitioning [Ko et al., 2018] & Encoding Compression \\
\hline
EdgeLD [Xue et al., 2020] & Layer Fusion \\
\hline
ShadowTutor [Chung et al., 2020] & Knowledge Distillation \\
\hline \hline
\end{tabular}
\end{table} Table 2: Studies in which different tuning methods were adopted.

BranchyNet [Teerapittayanon et al., 2017] is a typical algorithm that balances accuracy and resource constraints. In addition, energy consumption is a common DNN partition constraint [Huang et al., 2019, Gao et al., 2021]. DeepAdapter [Huang et al., 2020] is a DNN partition approach involving a network bandwidth constraint, and the constraints in [Hu et al., 2019] dynamically account for changes in the network. Furthermore, a DNN partition strategy can be constrained because the bandwidth allocated to all mobile devices covered by base stations cannot exceed the total bandwidth [He et al., 2020]. Some DNN partition approaches consider more comprehensive constraints. For example, the collaborative constraints of memory size, device energy budget, and cloud-cost budget are considered in [Han et al., 2016, Lockhart et al., 2020, Li et al., 2021a]. Moreover, the parameters of the real-time adaptive model [Eshratifar et al., 2019] depend on the following factors: mobile and cloud hardware and software resources, battery capacity, network specifications, and the inference delay requirement. A new DNN partition method [Xu et al., 2020a] considers the energy efficiency of both user devices and base stations in a 5G-enabled MEC, along with the devices' and base stations' capabilities, the number of DNN tasks, and a latency constraint.

#### 4.4.3 Self-Model Constraints

When partitioning the DNN, some performance must often be compromised to meet other requirements; however, it is unrealistic to disregard the performance loss entirely. Therefore, constraints on the DNN's own performance have been considered in recent studies. Constrained by the performance of DNNs, Chen et al. [Chen et al., 2021] and Zeng et al. [Zeng et al., 2020] considered the latency requirements. JALAD [Li et al., 2018b] considered the minimum accuracy requirement and the number of DNN partitions. Encoding the feature space of the intermediate layers [Ko et al., 2018] constrains accuracy, as do data compression and early exiting algorithms [Hu et al., 2021].

#### 4.4.4 Privacy

Privacy protection is essential in cloud-edge collaboration. Privacy protection is often compromised when considering the load on an edge device [Osia et al., 2018]. DNN partition must also be alert to privacy issues. Sending intermediate DNN data from edge devices to the cloud risks interception at various stages; therefore, PriPro [Gao et al., 2021] was introduced to protect privacy.
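The constraint handling described in this subsection can be summarized in a few lines: predict each candidate partition's performance indicators, discard candidates that violate any bound (the role of constraint (5)), and then optimize the objective over the survivors. The candidate dictionaries, indicator names, and bounds below are hypothetical.

```python
def pick_partition(candidates, bounds, objective="latency_ms"):
    """candidates: list of dicts mapping indicator -> predicted value.
    bounds: dict of indicator -> upper bound (cf. constraint Eq. (5)).
    Returns the feasible candidate minimizing the chosen objective."""
    feasible = [c for c in candidates
                if all(c[k] <= v for k, v in bounds.items())]
    if not feasible:
        raise ValueError("no partition satisfies all constraints")
    return min(feasible, key=lambda c: c[objective])

candidates = [
    {"split": 1, "latency_ms": 24.0, "energy_mj": 80.0, "memory_mb": 45.0},
    {"split": 2, "latency_ms": 19.0, "energy_mj": 150.0, "memory_mb": 70.0},
    {"split": 3, "latency_ms": 31.0, "energy_mj": 60.0, "memory_mb": 30.0},
]
# Split 2 is fastest but violates both bounds; split 1 wins among the feasible.
print(pick_partition(candidates, {"energy_mj": 100.0, "memory_mb": 64.0}))
```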
### Dimension 4: Optimization Objectives

DNN-driven applications have different optimization requirements, such as the lowest latency to obtain inference results or the client's minimum energy consumption. We now summarize the optimization targets into six categories based on relevant research in recent years. First, latency minimization is the most studied objective in the literature. Minimizing the overall delay of a frame is one such optimization objective [Hu et al., 2019]. An algorithm [Miao et al., 2020] was proposed to balance the loading rates of multiple devices and minimize latency. Edgent [Li et al., 2018a] was proposed as a solution for low-latency edge intelligence. The optimization objective of the multi-DNN partition algorithm in [Tang et al., 2020] is to minimize the maximum DNN inference latency among all devices so as to reduce the global latency. DeepAdapter [Huang et al., 2020] incorporates the mobile devices' latency, network condition, and computing capability.

Minimizing energy consumption and maximizing the accuracy of DNN inference are also important optimization objectives. Reducing energy consumption decreases the cost of edge computing when offloading multiple DNN-based applications [Huang et al., 2019]. Moreover, an optimized computation schedule [Eshratifar et al., 2019] has been presented to achieve the lowest energy consumption. Generally, multiple-DNN partition algorithms [Chen et al., 2021] mainly aim to minimize energy consumption, with each DNN running in an open loop, considering both the runtime energy consumption per time unit and the computing energy consumption. Maximizing accuracy is also regarded as an optimization objective [Han et al., 2016, Ding et al., 2020a]. PriPro [Gao et al., 2021] injects noise into DNNs for privacy protection; under this condition, the optimization objective is to maximize accuracy.

Generally, multi-objective optimization is more useful in practice than single-objective optimization. Scission [Lockhart et al., 2020] can obtain an appropriate partition scheme according to the hardware conditions and the user's demands, such as minimizing latency and energy consumption. The optimization objectives of IONN [Jeong et al., 2018] not only reduce the latency but also consider the time to upload the DNN partitions. DDNN [Teerapittayanon et al., 2017] also adopts layer partition to balance accuracy and energy consumption. Furthermore, a trade-off study on the energy efficiency and throughput of the edge platform [Ko et al., 2018] has been presented.

### Dimension 5: Optimization Algorithms

In different scenarios, many technologies employ different optimization objectives to obtain the optimal solution, including dynamic programming, integer programming (IP), convex optimization, reinforcement learning (RL), and shortest path algorithms. In particular, the constructed optimization objective function is generally NP-hard because of the non-linearity of the function and the uncertainty of the number of parameters. Therefore, researchers usually resort to approximate algorithms, including greedy algorithms, approximate convex optimization, and genetic algorithms. In this section, we introduce some systematic algorithms.

#### 4.6.1 Dynamic Programming

Dynamic programming is a classic algorithm for solving optimization problems. The local solutions that are likely to be optimal are retained through decision making, and the others are discarded.
Each subproblem is solved in turn, with the last subproblem yielding the solution to the original problem (Stuckey et al., 2020). Dynamic programming is employed in an algorithm that obtains a set of optimal partition points for all devices (Li et al., 2021a), which minimizes the sum of the total local computing time and the computing time on the edge server. CoopAI (Yang et al., 2021) adopts multi-layer partition and slicing (MLS) to solve the optimal DNN inference problem. MLS leverages dynamic programming by first computing and recording the optimal solution to each smaller subproblem and then reusing these solutions to solve larger subproblems iteratively.

#### 4.6.2 Integer Programming

IP extends linear programming with integrality constraints on the variables. It is prevalently used to solve optimization problems because of the generality of its mathematical formulation. IP algorithms solve many DNN partition problems. For example, a nonlinear integer optimization problem can be formulated as an optimal partition problem (Tang et al., 2020). Furthermore, JALAD (Li et al., 2018b) is formulated as an integer linear programming (ILP) problem. An exact solution is obtained by formulating an ILP for offloading with a single request (Xu et al., 2020a). A joint optimization model of partition and resource allocation has been developed by establishing mixed-integer nonlinear programming (He et al., 2020).

#### 4.6.3 Benchmarking

Benchmarking is similar to exhaustive enumeration and is highly general: all candidate partition solutions are listed and evaluated. However, the cost is high if the problem is complex or the number of solutions is massive. In some scenarios, there is no single optimal objective because the user's requirements change dynamically. Benchmarking is a standard method for specific static DNN partitions. For example, the multiple-criteria decision-making method based on the analytical hierarchy process (AHP) (Ashouri et al., 2020) adopts benchmarking to provide a DNN partition strategy. Scission (Lockhart et al., 2020) is a six-step methodology for automated partitioning. To find the valid partition points, Scission benchmarks each layer and block on the target hardware resources and creates partition configurations from the benchmark data.

#### 4.6.4 Shortest Path of a Graph

The DNN inference process can be considered a path from the beginning to the end of a graph. Each node of the DNN has several choices of deployment environment (end device, edge device, server, etc.); thus, there are multiple paths from one node to another. Metrics to be optimized, such as delay and energy consumption, can be regarded as edge weights; therefore, the goal is to find the shortest path within the DAG formed by all the DNN computing layers. A mobility-included DNN partition offloading algorithm (MDPO) (Tian et al., 2021) uses the shortest path to solve the optimal partition problem. In addition, the latency of uploading the DNN layers onto the server is considered. To decide optimal DNN partitions and uploading orders, IONN (Jeong et al., 2018) uses a novel graph-based algorithm, and PerDNN (Jeong et al., 2020) uses the shortest path of a DAG to partition and offload, considering the mobility of end devices.

#### 4.6.5 Machine Learning

Machine learning is a promising method to handle high complexity. In machine learning approaches, the optimal DNN partition decision is determined according to the system's state.
It obtains this optimization decision using learning techniques. The online RL algorithm in (Xu et al., 2020a) decides whether to wait for a subsequent request or to select a request from the current arrival list; the reward function is the inverse of the average delay experienced by the admitted requests. Similarly, FEPD (Ren et al., 2021) also adopts an RL algorithm. Exploiting differences among inputs to the DNN partition, AppealNet (Li et al., 2021b) presents a two-head network architecture that consists of an approximator head, a predictor head, and a feature extractor.

#### 4.6.6 Early Exiting

The early exiting mechanism has also been proposed to improve DNN inference. The main objective of early exiting is to terminate the inference process at an intermediate layer. The early exiting mechanism can avoid the forward pass of the entire DNN from the input layer to the final layer. The existing early exiting methods fall into two categories (Teerapittayanon et al., 2016). The first category adds exit branches at specific layer locations in the standard DNN model structure and then trains the original model together with the exit branches. However, it is hard to find suitable exit layers for a given DNN, and retraining the model may incur additional cost. The second category determines the exit point after a convolutional layer (Panda et al., 2016) by adding a classifier that decides whether the inference result is already correct. In research on DNN inference optimization, many methods combine DNN partition with an early exiting mechanism to enhance DNN inference performance. For example, the aggregation scheme (Teerapittayanon et al., 2017), Edgent (Li et al., 2018a), ADDA (Wang et al., 2019), and offloading strategy optimization (Pacheco et al., 2021) are DNN partition approaches that use early exiting mechanisms.

#### 4.6.7 Heuristic Algorithms

Heuristic algorithms are another family of optimization methods; the objective is to choose an efficient heuristic and obtain the optimal or a near-optimal solution. Some typical heuristic algorithms have solved optimal DNN partition problems in recent years. The adaptive DNN partition algorithm in (Miao et al., 2020) is one type of heuristic. In addition, a discrete particle swarm optimization with genetic operators (DPSO-GO) (Huang et al., 2019) has been used to find an offloading strategy for the NP-hard optimization problem. Similarly, a threshold-based workload partition algorithm (Zeng et al., 2020), an iterative alternating optimization algorithm (IAO) (Tang et al., 2020), greedy two-dimensional partition (GTDP) (Mao et al., 2017), and a binary-search-based partition algorithm (Duan and Wu, 2021) were proposed to address the NP-hard problem.

### Instantiation of Framework

In summary, we discussed the recent research on DNN partition algorithms in terms of five dimensions, further verifying the completeness of the research dimensions. For example, we describe Energy-Aware Inference Offloading for DNN-Driven Applications (Xu et al., 2020a) in the classification framework. The **deployment locations** were mobile end devices and edges, and the DNN was divided into several sub-parts at the **granularity of layers**. The **optimization goal** was constructed to achieve the minimum energy consumption, considering the latency and limited edge resources as the **partition constraints**. Thus, the **optimization algorithms** were IP, the random rounding approximation algorithm, and RL.
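To ground the dynamic programming and shortest-path formulations of Sections 4.6.1 and 4.6.4, the following sketch assigns each layer of a chain DNN to a device via a Viterbi-style pass over a layered graph whose edge weights are compute-plus-transfer latencies; the cheapest path through the graph is the optimal assignment. The cost matrices are hypothetical profiling inputs, and schemes such as IONN or PerDNN layer further concerns (upload order, mobility) on top of this core recursion.

```python
import math

def shortest_path_partition(comp, trans, src_trans):
    """DAG shortest-path / DP view of layer-to-device assignment.
    comp[i][u]:      latency of layer i on device u (hypothetical profile)
    trans[i][u][v]:  time to move layer i's output from device u to device v
    src_trans[u]:    time to move the raw input to device u
    Returns (minimum total latency, chosen device for each layer)."""
    n, n_dev = len(comp), len(comp[0])
    cost = [[src_trans[u] + comp[0][u] for u in range(n_dev)]]
    prev = [[None] * n_dev]
    for i in range(1, n):
        cost.append([math.inf] * n_dev)
        prev.append([None] * n_dev)
        for v in range(n_dev):
            for u in range(n_dev):
                c = cost[i - 1][u] + trans[i - 1][u][v] + comp[i][v]
                if c < cost[i][v]:
                    cost[i][v], prev[i][v] = c, u
    v = min(range(n_dev), key=lambda d: cost[n - 1][d])
    total, plan = cost[n - 1][v], [v]
    for i in range(n - 1, 0, -1):   # backtrack to recover the assignment
        v = prev[i][v]
        plan.append(v)
    return total, plan[::-1]

# Two devices (0 = end device, 1 = edge server); times in ms, all hypothetical.
comp = [[5, 1], [8, 2], [6, 1]]
trans = [[[0, 4], [4, 0]], [[0, 4], [4, 0]]]
print(shortest_path_partition(comp, trans, src_trans=[0, 4]))  # -> (8, [1, 1, 1])
```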
Similarly, we list several typical algorithms and introduce their characteristics in terms of the five dimensions. These algorithms were chosen because they cover all the possible values in each dimension (see Table 3).

## 5 Comparisons

This section analyzes the DNN partition strategies in two steps: we first refine the metrics for evaluating DNN partition approaches and then compare the approaches based on these metrics. The characteristics of each algorithm are outlined in detail.

### Metrics

Each DNN partition method is proposed to solve practical problems or improve the reliability of DNN-driven applications. We list some metrics for evaluating and comparing the proposed DNN partition algorithms. Both qualitative and quantitative indicators are used to measure the DNN partition methods.

* **The number of optimization performances** is the number of performance indicators that the DNN partition strategy considers. The optimization performances differ across DNN partition approaches and include accuracy, latency, energy, processor usage, and privacy.
* **Space complexity** denotes the amount of storage space temporarily occupied by an algorithm when running. It is denoted by \(O(\cdot)\). In this study, the space complexity of an algorithm only considers the size of the storage space allocated for local variables during operation.
* **Time complexity** denotes the algorithm's running time, also denoted by \(O(\cdot)\). Generally, time complexity relates to the input data and the DNN layers. The weaknesses and strengths of an algorithm are mainly measured in terms of space and time complexity.
* **Self-adaptability** denotes the ability to execute the DNN partition approach at runtime when the hardware resources and input data vary.
* **Generalizability** is an indicator for evaluating the overall application value of the DNN partition algorithm. For example, a particular DNN model may be partitionable, but the generalizability of the algorithm determines whether it applies to other DNNs. Generalizability thus focuses on the universality of the DNN to be split and deployed.
* **Scalability** indicates whether the algorithm remains applicable when the scenario changes, such as an increase or decrease in the number of sensors or scaling of the number of edge computing nodes. In contrast to generalizability, scalability focuses on adaptability to the scene.

### Comparisons

This subsection presents a comparison of several typical DNN partition approaches. The details are listed in Table 4. Most DNN partition approaches have strong generalizability, except PriPro (Gao et al., 2021), AppealNet (Li et al., 2021b), ADDA (Wang et al., 2019), and Edgent (Li et al., 2018a). In most in-depth research, the deployment locations are considered fixed because most DNNs are split across locations given in advance.
\begin{table}
\begin{tabular}{l|l l l l l}
\hline \hline
**Reference** & **Deployment Locations** & **Partition Granularity** & **Partition Constraints** & **Optimization Objectives** & **Optimization Algorithms** \\
\hline
DCOSD2D (Guo et al., 2021) & Co-Ends & Layers & Computation resource, Bandwidth & Latency & Greedy heuristic, KM \\
\hline
MeDNN (Mao et al., 2017) & Co-Ends & Layers, Tuning & Energy, Data & Latency, Bandwidth & Greedy \\
\hline
ADPMEC (Miao et al., 2020) & Co-Edges & Sub-layers & Bandwidth, Device quantity & Latency & Traversal \\
\hline
EdgeLD (Xue et al., 2020) & Co-Edges & Sub-layers, Tuning & Bandwidth, Resource & Latency & Traversal \\
\hline
DeepThings (Zhao et al., 2018) & Co-Edges & Sub-layers, Tuning & Computation resource & Memory & Traversal \\
\hline
JMDP (Tang et al., 2020) & Edges-Ends & Layers & Computation resource & Latency & IP, IAO \\
\hline
PANDA (Shi et al., 2019) & Edges-Ends & Layers & Energy & Privacy, Latency & Lyapunov \\
\hline
EPDNN (Shin et al., 2019) & Edges-Ends & Layers & Computation resource & Efficiency & Greedy \\
\hline
CoopAI (Yang et al., 2021) & Edges-Ends & Layers & Bandwidth, Data resource & Latency & DP \\
\hline
PerDNN (Jeong et al., 2020) & Edges-Ends & Layers & Computation resource & Latency & Shortest Path \\
\hline
ADDA (Wang et al., 2019) & Edges-Ends & Layers & Accuracy & Latency & Greedy, Early Exiting \\
\hline
JALAD (Li et al., 2018b) & Cloud-Ends & Layers, Tuning & Accuracy & Latency & ILP \\
\hline
AppealNet (Li et al., 2021b) & Cloud-Edges & Data, Tuning & Energy & Accuracy & ML \\
\hline
TREND-WANT (Lin et al., 2019) & Cloud-Edges-Ends & Layers & Latency & Throughput & DP \\
\hline
FEPD (Ren et al., 2021) & Cloud-Edges-Ends & Layers & Bandwidth, Energy & & RL \\
\hline
EEOS (Chen et al., 2021) & Cloud-Edges-Ends & Layers & Latency & Energy & \\
\hline \hline
\end{tabular}
\end{table} Table 3: Using the proposed classification framework to delineate existing DNN partition approaches.

MDPO (Tian et al., 2021) and JODS (Duan and Wu, 2021) both reduce the delay time of inference. In most DNN partition approaches, there are multiple optimization performances, as in Neurosurgeon (Kang et al., 2017), Scission (Lockhart et al., 2020), and MAHP (Ashouri et al., 2020). In particular, Scission (Lockhart et al., 2020) considers five types of performances; however, it ignores the complexity, generalizability, and other capabilities of the partition model. MAHP (Ashouri et al., 2020) considers more performances and has greater generalizability than Scission. These approaches require many experiments in advance; therefore, it is essential to design lightweight DNN partition approaches. Only a few DNN partition approaches account for time and space complexity, which deserves further consideration. For example, PADCS (Hu et al., 2021) considers time and space complexity while also offering generalizability, scalability, and self-adaptability. Table 4 can be extended with as many schemes as one wishes to consider or discuss and is available at https://github.com/xudi2021/Table-4/blob/main/Table%204.pdf.
## 6 Challenges and Opportunities

Based on this survey of relevant papers on DNN collaborative inference, we now discuss the limitations of existing work, which suggest potential future research directions. The overall challenges are depicted in Fig. 4.

### Consideration of Additional Factors

In the aforementioned studies, the DNN partition algorithms considered many factors, such as delay, accuracy, and throughput. However, when establishing the DNN partition optimization model, additional factors should be considered, such as resource utilization, DNN magnitude (number of layers), and offloading cost.

\begin{table}
\begin{tabular}{l|c c c c c c}
\hline \hline
**Reference** & \# Optimization Performances & Space Complexity & Time Complexity & Generalizability & Scalability & Self-adaptability \\
\hline
DDNNC (Zhou et al., 2019) & 1 & N & N & N & N & N \\
\hline
EdgeLD (Xue et al., 2020) & 1 & N & N & N & Y & Y \\
\hline
MDPO (Tian et al., 2021) & 1 & N & N & Y & N & Y \\
\hline
EPDNN (Shin et al., 2019) & 1 & N & N & Y & N & Y \\
\hline
CoopAI (Yang et al., 2021) & 1 & N & Y & Y & Y & N \\
\hline
JODS (Duan and Wu, 2021) & 1 & N & Y & Y & Y & Y \\
\hline
ADDA (Wang et al., 2019) & 2 & N & N & N & N & Y \\
\hline
Edgent (Li et al., 2018a) & 2 & N & N & N & N & Y \\
\hline
AppealNet (Li et al., 2021b) & 2 & N & N & N & N & Y \\
\hline
PriPro (Gao et al., 2021) & 2 & N & N & N & Y & N \\
\hline
JointDNN (Eshratifar et al., 2019) & 2 & N & N & Y & N & N \\
\hline
MeDNN (Mao et al., 2017) & 2 & N & N & Y & N & Y \\
\hline
DADS (Hu et al., 2019) & 2 & N & N & Y & Y & Y \\
\hline
PANDA (Shi et al., 2019) & 2 & N & N & Y & Y & Y \\
\hline
FEPD (Ren et al., 2021) & 2 & N & N & Y & Y & Y \\
\hline
PerDNN (Jeong et al., 2020) & 2 & N & N & Y & Y & Y \\
\hline
SPSO-GA (Chen et al., 2021) & 2 & N & Y & Y & N & Y \\
\hline
QDMP (Zhang et al., 2020) & 2 & N & Y & Y & N & Y \\
\hline
TREND-WANT (Lin et al., 2019) & 2 & N & Y & Y & Y & N \\
\hline
JMDP (Tang et al., 2020) & 2 & N & Y & Y & Y & Y \\
\hline
JALAD (Li et al., 2018b) & 2 & Y & Y & Y & N & Y \\
\hline
PADCS (Hu et al., 2021) & 2 & Y & Y & Y & Y & Y \\
\hline
Neurosurgeon (Kang et al., 2017) & 3 & N & N & Y & N & Y \\
\hline
IONN (Jeong et al., 2018) & 3 & N & N & Y & N & Y \\
\hline
EAIOD (Xu et al., 2020a) & 3 & N & N & Y & Y & Y \\
\hline
JPDRA (He et al., 2020) & 3 & N & Y & Y & Y & Y \\
\hline
DeepThings (Zhao et al., 2018) & 4 & N & N & N & N & Y \\
\hline
MCDNN (Han et al., 2016) & 4 & N & N & Y & Y & Y \\
\hline
Scission (Lockhart et al., 2020) & 5 & N & N & N & N & N \\
\hline
MAHP (Ashouri et al., 2020) & 7 & N & N & Y & N & N \\
\hline \hline
\end{tabular}
\end{table} Table 4: Comparison of existing works on DNN partition approaches.

#### 6.1.1 Improved Resource Utilization

In most studies, only latency, energy consumption, and accuracy are considered. The delay and energy consumption of DNN inference are related to the hardware computing power and resources. When the hardware resources are determined in advance, the delay and energy consumption can be obtained. However, resource utilization is distinct from energy consumption and inference delay; the inference time is influenced by the sharing of a GPU or of system resources such as streaming multiprocessors. Therefore, determining the delay and energy consumption by considering only the hardware is inaccurate.
Moreover, executing multiple DNN tasks with limited resources can exhaust memory and processor capacity and congest the network. Thus, resource utilization can impact the DNN partition strategy. Therefore, resource utilization and its reasonable allocation should be regarded as optimization objectives.

#### 6.1.2 Additional DNN Partitions

As the scale of the DNN model increases, the partition process generally takes longer. Thus, the DNN partition algorithm must be lightweight and suitable for more complex DNN models. Recent research has rarely studied the influence of the number of DNN layers on DNN partition algorithms. As a result, the time and space complexity of the partition algorithms are often ignored. This is a potential future direction: designing optimal algorithms that explicitly account for computational complexity.

#### 6.1.3 Offloading Costs

Most studies only consider the computing and transmission times of each DNN layer in the total DNN inference latency, assuming that the entire DNN model is installed on the deployment devices or servers and that all layers are uploaded in advance. This is inappropriate for emerging edge settings in which end devices send intermediate data to generic servers located at the network's edge. Because mobile end devices frequently change the edge devices to which they offload, it is essential to study the offloading cost of DNN inference in a dynamic runtime environment.

Figure 4: Challenges and opportunities

### Privacy Protection Concerns

Although DNN partition over the cloud, edge, and end devices boosts the development of deep learning applications, privacy protection is a significant concern. Sending intermediate DNN data from edge devices to the cloud risks interception at various stages. Cooperative inference helps to enhance data privacy in DNN-driven applications that employ deep learning models to perform task inference. Privacy protection in such collaborative settings is still at a preliminary stage and requires more research. Therefore, further work could establish a dual goal that considers privacy and accuracy under the constraint of other performance indicators.

### Dynamic DNN Partitions

Current IoT applications involve various scenarios. The optimal partition strategy combined with actual scenarios is dynamic; therefore, we need to recalculate the optimal partitions based on the current status of each device, taking care to select a recalculation interval that avoids both DNN performance degradation and high overhead. In addition, end devices and edges are mobile; therefore, the deployment location moves, and the number of deployment devices changes. Thus, dynamic deployment locations must be considered.

### Vertical- and Horizontal-oriented DNN Partitions

The E2E-based collaborative computing mode is an essential and promising one, attributable to the support of E2E communication technology. We refer to DNN partition technology over E2E collaboration as "horizontal-oriented" partition. At present, many forms of entertainment, including the well-known metaverse (e.g., multiplayer games and AR), are typical multi-end collaboration application scenarios. In contrast, the computing platform in the "vertical" scenario mainly consists of the end devices, edges, and cloud. Although edge-to-edge oriented DNN partition algorithms have been studied, only a slight gap between edge nodes' resources and network conditions is assumed.
More generally, multi-level edge collaboration partitioning, called "vertical-oriented" DNN partition technology, is proposed. Furthermore, orientation-aware DNN partitions are a promising attempt at designing DNN partition strategies with regard to hardware resources and locations.

## 7 Conclusion

This paper provides a comprehensive overview of DNN partition approaches over cloud, edge, and end devices. First, the definition of DNN partition and DNN-based intelligent applications are introduced. Then, the five-dimensional classification framework of DNN partition is described, and typical partition approaches are reviewed. Finally, the challenges are listed and several directions for future work are outlined. In summary, DNN partition is a fast-growing research area with numerous challenges and opportunities. We hope that this survey is helpful for understanding state-of-the-art DNN partition research and conducting further research.
2307.00777
GA-DRL: Graph Neural Network-Augmented Deep Reinforcement Learning for DAG Task Scheduling over Dynamic Vehicular Clouds
Vehicular clouds (VCs) are modern platforms for processing of computation-intensive tasks over vehicles. Such tasks are often represented as directed acyclic graphs (DAGs) consisting of interdependent vertices/subtasks and directed edges. In this paper, we propose a graph neural network-augmented deep reinforcement learning scheme (GA-DRL) for scheduling DAG tasks over dynamic VCs. In doing so, we first model the VC-assisted DAG task scheduling as a Markov decision process. We then adopt a multi-head graph attention network (GAT) to extract the features of DAG subtasks. Our developed GAT enables a two-way aggregation of the topological information in a DAG task by simultaneously considering predecessors and successors of each subtask. We further introduce non-uniform DAG neighborhood sampling through codifying the scheduling priority of different subtasks, which makes our developed GAT generalizable to completely unseen DAG task topologies. Finally, we augment GAT into a double deep Q-network learning module to conduct subtask-to-vehicle assignment according to the extracted features of subtasks, while considering the dynamics and heterogeneity of the vehicles in VCs. Through simulating various DAG tasks under real-world movement traces of vehicles, we demonstrate that GA-DRL outperforms existing benchmarks in terms of DAG task completion time.
Zhang Liu, Lianfen Huang, Zhibin Gao, Manman Luo, Seyyedali Hosseinalipour, Huaiyu Dai
2023-07-03T06:41:15Z
http://arxiv.org/abs/2307.00777v1
GA-DRL: Graph Neural Network-Augmented Deep Reinforcement Learning for DAG Task Scheduling over Dynamic Vehicular Clouds

###### Abstract

Vehicular clouds (VCs) are modern platforms for processing of computation-intensive tasks over vehicles. Such tasks are often represented as directed acyclic graphs (DAGs) consisting of interdependent vertices/subtasks and directed edges. In this paper, we propose a graph neural network-augmented deep reinforcement learning scheme (GA-DRL) for scheduling DAG tasks over dynamic VCs. In doing so, we first model the VC-assisted DAG task scheduling as a Markov decision process. We then adopt a multi-head graph attention network (GAT) to extract the features of DAG subtasks. Our developed GAT enables a two-way aggregation of the topological information in a DAG task by simultaneously considering predecessors and successors of each subtask. We further introduce non-uniform DAG neighborhood sampling through codifying the scheduling priority of different subtasks, which makes our developed GAT generalizable to completely unseen DAG task topologies. Finally, we augment GAT into a double deep Q-network learning module to conduct subtask-to-vehicle assignment according to the extracted features of subtasks, while considering the dynamics and heterogeneity of the vehicles in VCs. Through simulating various DAG tasks under real-world movement traces of vehicles, we demonstrate that GA-DRL outperforms existing benchmarks in terms of DAG task completion time.

Vehicular cloud, directed acyclic graph, deep reinforcement learning, graph neural network.

## I Introduction

### _Background and Challenges_

Vehicular networks are one of the main components of the Internet-of-Things (IoT) ecosystem. They have been envisioned to provide a reliable platform for the execution of a myriad of applications/tasks, such as autonomous driving and mobile E-health [1, 2]. Many of these tasks possess complex computation topologies, which are often represented as directed acyclic graphs (DAGs) [28]. Fig. 1 illustrates a real-world DAG task model corresponding to a navigation application executed on a vehicle [27], where vertices denote subtasks of the task and directed edges describe the dependencies between the execution of subtasks. In particular, each subtask represents a _processing component_ of navigation, while directed edges dictate the _sequence of executions_ of subtasks. The sequential execution of subtasks in a DAG model stems from the fact that processing of a subtask may depend on the output data of others (e.g., in Fig. 1, processing of subtask \(b_{2}\) relies on the output data of subtask \(b_{1}\), and processing of \(b_{4}\) relies on the output data of both \(b_{2}\) and \(b_{3}\)).

In vehicular networks, DAG tasks are frequently encountered. Nevertheless, one of the main obstacles in the execution of DAG tasks is that a task owner (i.e., a vehicle with a DAG task) in a vehicular network often fails to fulfill the task's execution requirements due to its limited on-board resources. To circumvent this, offloading the computation of DAG tasks from task owners to edge servers through the mobile edge computing (MEC) platform has been proposed [12, 13, 14]. However, such task offloading strategies often rely on vehicle-to-infrastructure (V2I) communications, which can suffer from high latency (e.g., due to high data traffic congestion on the fronthaul/backhaul links) and limited coverage (e.g., in suburban areas) [9].
In response to these limitations, vehicular clouds (VCs) have emerged as novel computing platforms that integrate heterogeneous and distributed computation resources of moving vehicles via opportunistic vehicle-to-vehicle (V2V) communications to build flexible and scalable computing topologies for real-time task processing [15, 16, 17]. Specifically, in a VC, DAG subtasks are dispersed across vehicles and the data needed for the execution of subtasks is transmitted via V2V links.

Fig. 1: A schematic of a DAG task describing a navigation application [27].

Although DAG task processing over VCs is promising, efficient scheduling of DAG subtasks across vehicles is a highly non-trivial problem, which resembles mixed integer programming (MIP) due to the existence of continuous and binary variables in the formulation (detailed in Section III-E). MIPs are NP-hard problems [43], for which dynamic programming [4, 7] and list scheduling algorithms [5, 12] have been widely used to obtain solutions. These algorithms, however, often suffer from a prohibitively high computation complexity, which renders them impractical for large-scale VC networks. Also, these algorithms require prior knowledge about the system dynamics (e.g., time-varying V2V channel qualities), which is cumbersome to acquire in practical systems. To overcome these challenges, researchers have recently started exploring learning-based methods, a popular example of which is deep reinforcement learning (DRL) [20, 21, 22, 23, 24, 25, 26]. Roughly speaking, DRL learns from interacting with an environment so as to generate real-time near-optimal mappings from the state space (detailed in Section IV-C) to the action space (detailed in Section IV-D) without requiring any prior knowledge about the environment. Although DRL has shown tremendous success in task scheduling/offloading [20, 21, 22, 23, 24, 25, 26], it cannot be readily adopted for scheduling of DAG tasks over dynamic VCs. This is due to the fact that DAG tasks' data (i.e., tasks' topologies) resides in a non-Euclidean space (i.e., a graph). As a result, conventional DRL with handcrafted states designed to work with data in Euclidean spaces fails to automatically learn the topological information of DAG tasks, and thus can hardly be applied to unseen DAG task topologies upon deployment in real-world systems. To overcome this challenge, we propose to augment DRL with an emerging learning architecture called graph neural network (GNN). GNNs are capable of adaptively extracting discriminative features for each node of a graph based on the topological information aggregated from its neighboring nodes [41]. As a sub-category of GNNs, graph attention networks (GATs) have recently gained tremendous attention; they extend the spatial convolution in convolutional neural networks (CNNs) to graph structures [42] and thus enjoy _inductive learning_, making their learned models generalizable to unseen graph topologies.

### _Overview and Summary of Contributions_

Inspired by the unique advantages of DRL and GNNs, we propose GA-DRL, a GNN-augmented DRL scheme to conduct DAG subtask-to-vehicle allocation aiming at minimizing the DAG task completion time. In doing so, we first model the VC-assisted DAG task scheduling as a Markov decision process (MDP). We then tailor a GAT to extract a set of features for each subtask of a DAG task.
Finally, we integrate our developed GAT into the learning architecture of a double deep Q-network (DDQN) to generate subtask-to-vehicle allocation decisions, while taking into account the dynamics and heterogeneity of vehicles in a VC. The major contributions of this paper can be summarized as follows:

* We develop a multi-head GAT capable of extracting features for DAG subtasks. In particular, our GAT conducts a two-way topological information aggregation by simultaneously considering predecessors and successors of each subtask. Further, we incorporate a non-uniform neighborhood sampling methodology into our GAT by codifying the scheduling priority of subtasks, making our GAT generalizable to unseen DAG task topologies upon being deployed over real-world systems.
* We propose a DDQN to make subtask-to-vehicle allocation decisions according to the features of subtasks extracted by our GAT, while taking into account the dynamics and heterogeneity of vehicles in a VC. Further, we incorporate an action mask module into the DDQN to avoid infeasible subtask-to-vehicle allocations, ensuring successful execution of subtasks.
* We evaluate the performance of GA-DRL on a real-world road network obtained from OpenStreetMap [10], where SUMO [11], one of the most popular software tools for generating traffic flow, is used to form a VC. Through simulating DAG tasks with various topologies, we reveal that GA-DRL can outperform existing benchmarks in terms of task completion time.

The rest of this paper is organized as follows: Section II contains the related work. In Section III, we present the system model and formulate the VC-assisted DAG task scheduling as a MIP problem. We develop GA-DRL in Section IV. In Section V, we present simulation results before concluding the paper in Section VI.

## II Related Work

Existing works on DAG task scheduling over cloud-assisted networks can be roughly divided into two categories with respect to the type of their scheduling mechanisms: _i_) heuristic-based algorithms [3, 4, 12, 13, 14, 15, 17]; _ii_) learning-based methods [16, 18, 20, 21, 22, 23, 24, 25, 26]. Below, we summarize the contributions of these works and highlight the differences between our methodology in this paper and prior works.

### _Heuristic DAG Task Scheduling_

#### II-A1 Static computing environment

Heuristic methods for DAG task scheduling have been extensively studied for static MEC networks with fully connected servers [3, 4, 6, 12, 13]. H. Topcuoglu _et al._ in [12] proposed the HEFT algorithm, where each subtask is assigned to the processor with the least execution time. In [3], L. F. Bittencourt _et al._ proposed forward-looking attributes to improve the performance of HEFT. In [6], H. Kanemitsu _et al._ proposed a clustering-based DAG task scheduling algorithm that prioritizes assigning the subtasks located on the critical path to the same processor. G. C. Sih _et al._ in [4] adopted a compile-time-aware scheduling algorithm to dynamically allocate DAG subtasks over the existing processing units in the system. Recently, in [13], Y. Sahni _et al._ introduced JDOFH to simultaneously consider dependencies among DAG subtasks and the start time of network flows to transmit the data of subtasks over the network.

#### II-A2 Dynamic computing environment

Few recent works have studied DAG task scheduling over dynamic networks [14, 15, 17]. Q. Shen _et al._ in [14] proposed DTOSC to conduct DAG task offloading and service caching in vehicular edge computing. F.
Sun _et al._ in [15] addressed DAG task scheduling over a VC via a modified genetic algorithm focusing on vehicles' dwell times. In [17], Y. Liu _et al._ developed MAMTS to prioritize the allocation of different DAG tasks according to their computation topologies in vehicular edge computing. The methodologies developed in the aforementioned works are heuristic, applying which requires a considerable number of iterations to reach locally optimal solutions. As a result, they often suffer from prohibitively high computation complexities, which renders them impractical for real-time DAG task allocation. Also, these heuristic algorithms often presume prior knowledge about the system dynamics (e.g., known time-varying V2V channel qualities), obtaining which is extremely challenging in dynamic VCs, where the network topology may exhibit significant temporal variation.

### _Learning-based DAG Task Scheduling_

#### II-B1 Static computing environment

DRL schemes have become one of the most popular learning-based techniques in the literature of task scheduling, especially for static MEC networks [20, 21, 22, 23, 24]. In [20], J. Yan _et al._ proposed an actor-critic DRL to learn the optimal DAG subtask assignment to access points. M. S. Mekala _et al._ in [21] developed a DRL-based DAG task offloading approach to reduce the utilization cost of edge servers. In [22], J. Wang _et al._ proposed a DAG task offloading methodology based on meta reinforcement learning. M. Goudarzi _et al._ in [23] introduced weighted actor-learner architectures for DAG task allocation over resource-constrained IoT devices. In [24], Z. Hu _et al._ presented a DRL-based Monte-Carlo tree search method to minimize DAG tasks' completion times through a clustered scheduler.

#### II-B2 Dynamic computing environment

Considering dynamic computing environments [16, 18, 25, 26], H. Liu _et al._ in [16] utilized a policy-based DRL for minimizing DAG tasks' completion times in multi-vehicle scenarios. In [18], J. Shi _et al._ proposed a DRL-based DAG task offloading scheme for vehicular fog computing considering the vehicles' mobility and availability. X. Wei _et al._ in [25] developed a DRL-based algorithm to jointly optimize unmanned aerial vehicle trajectory planning and DAG task scheduling. In [26], L. Geng _et al._ proposed a multi-agent actor-critic DRL to schedule DAG tasks in a vehicular edge computing network. The DRL algorithms developed in the above works are based on handcrafted features, making them unable to fully capture the topological information in DAG tasks. This is because the state space of the DRL architectures studied in the above works merely contains basic, human-selected information regarding subtasks (e.g., their computation workloads, transmission data sizes, and number of predecessors/successors). As a result, the DRL methods explored in the above works are solely capable of making allocation decisions for DAG tasks with computation topologies that they have seen during their training period. In this work, we take the first steps towards addressing this limitation.

### _Footprints of GNNs in Mobile Edge Computing_

Recently, the success of GNNs in solving a variety of complex problems in wireless communications has been revealed [34, 35, 36, 37], while the study of their application in the context of DAG task scheduling is still in its early stages. In [34], Z. He _et al._ investigated spectrum allocation in vehicle-to-everything networks based on the integration of GNNs and deep Q-learning. Y.
Li _et al._ in [35] proposed a meta-reinforcement learning method for DAG task offloading in the MEC platform, where the interdependencies between subtasks are extracted by GNNs. In [36], H. Lee _et al._ developed a graph convolutional network (GCN) and DRL to effectively learn a priority-based scheduling policy for DAG tasks. J. Chen _et al._ in [37] proposed an algorithm called ACED for DAG task offloading, where a GCN is leveraged to capture the topological information of DAG subtasks. The aforementioned works either ignore the topology of computation-intensive tasks (e.g., interdependencies among subtasks) [34] or focus on static MEC environments, overlooking the dynamics and instability of resource provisioning [35, 36, 37], which are significant features of VCs. Moreover, the GCN architecture developed in [36, 37] relies on _transductive learning_, which requires knowing the graph structure of DAG tasks upfront. As a result, their learned solutions for DAG task scheduling are not applicable to unseen DAG task topologies, which makes them suffer from a prohibitively high training overhead for each newly arrived DAG task in the system. In this work, we particularly aim to address the shortcomings mentioned above.

## III System Model and Problem Formulation

In this section, we first give an overview of the system of our interest, the DAG task model, the vehicle mobility model, and the computation offloading model. We then obtain an optimization formulation for VC-assisted DAG task scheduling. Table I summarizes the major notations used in this section.

### _System Overview_

We consider a time-slotted VC-assisted DAG task scheduling scenario, which is coordinated by a road side unit (RSU) with coverage diameter \(D\). We presume that the area comprises \(|\mathcal{V}|\) vehicles collected by the set \(\mathcal{V}=\{v_{m}\mid 1\leq m\leq|\mathcal{V}|\}\). In order to fulfill its DAG task completion demands, a task owner engages in offloading its DAG task with \(|\mathcal{B}|\) subtasks collected by the set \(\mathcal{B}=\{b_{i}\mid 1\leq i\leq|\mathcal{B}|\}\) to other vehicles1.

Footnote 1: This paper investigates the DAG task scheduling problem for a single task owner with a single DAG task in one VC for analytical simplicity. Cooperation and resource sharing among VCs and competition between multiple task owners to acquire computation resources are left as future work.

Fig. 2 shows a schematic of our VC of interest for the DAG task topology depicted in Fig. 1, where subtask \(b_{0}\) is a virtual subtask executed on the task owner (detailed in Section III-B). After receiving the offloading request from the task owner (i.e., \(v_{1}\)), the RSU acts as a centralized coordinator [16], which processes a set of collected data (e.g., locations and resources of vehicles) to assign DAG subtasks to vehicles. Specifically, in Fig. 2, the virtual subtask \(b_{0}\) is assumed to be executed on the task owner locally, while subtask \(b_{1}\) is allocated to vehicle \(v_{3}\). After executing subtask \(b_{1}\), vehicle \(v_{3}\) is scheduled to transmit the output data of subtask \(b_{1}\) to vehicles \(v_{2}\) and \(v_{4}\) for processing subtasks \(b_{2}\) and \(b_{3}\). Due to the interdependencies among DAG subtasks, the execution of subtask \(b_{4}\) relies on the output data of both subtasks \(b_{2}\) and \(b_{3}\). Hence, vehicles \(v_{2}\) and \(v_{4}\) will both be scheduled to transmit their output data to vehicle \(v_{5}\).
Finally, vehicle \(v_{5}\) will send a feedback (i.e., the final result of DAG task processing) to the RSU. The main assumptions made in this paper are summarized below:

* It is assumed that the VC remains stationary during each time slot [39].
* We presume that a single vehicle can only handle one subtask at a time [21]. Consequently, if multiple subtasks are assigned to a vehicle, they must wait until resources become available2.
* Due to the mobility and the limited contact durations among vehicles, this paper only focuses on one-hop data transmission between vehicles [38].
* Since the size of the feedback sent to the RSU is usually smaller than that of the original input data, the time it takes to transmit this feedback is neglected [30].

Footnote 2: If vehicles can process multiple subtasks simultaneously, they can be modeled as multiple virtual vehicles with unlimited contact duration among them, where each of them can process one subtask at a time.

### _DAG Task Model_

Without loss of generality, we index the task owner as \(v_{1}\) with a computation-intensive DAG task, which is represented by a graph \(\mathcal{G}=(\mathcal{B},\mathcal{E})\). In graph \(\mathcal{G}\), \(\mathcal{B}=\{b_{i}\mid 1\leq i\leq|\mathcal{B}|\}\) denotes the set of subtasks, and \(\mathcal{E}\) denotes the set of directed edges, where \(e_{i,j}\in\mathcal{E}\) indicates that subtask \(b_{i}\) has to be completed before the execution of subtask \(b_{j}\). To better capture the sequential execution nature of DAG tasks, we further define the set of immediate _predecessors_ of each subtask \(b_{i}\) as \(\mathcal{P}_{i}=\{b_{j}\mid b_{j}\in\mathcal{B},e_{j,i}\in\mathcal{E}\}\). Similarly, we define the set of immediate _successors_ of each subtask \(b_{i}\) as \(\mathcal{S}_{i}\). For example, in Fig. 1, we have \(\mathcal{B}=\{b_{1},b_{2},b_{3},b_{4}\}\), \(\mathcal{E}=\{e_{1,2},e_{1,3},e_{2,4},e_{3,4}\}\), \(\mathcal{P}_{4}=\{b_{2},b_{3}\}\), and \(\mathcal{S}_{1}=\{b_{2},b_{3}\}\). Furthermore, to make our analysis tractable, we introduce a virtual subtask, denoted by \(b_{0}\), to the DAG task topology, which is connected to the subtask(s) with no immediate predecessors, as shown in Fig. 2.

### _Vehicle Mobility Model_

We assume that each vehicle \(v_{m}\) is driving at a random and constant speed \(g_{m}\) (meters per second). Since the speeds of vehicles are non-negative, we adopt a truncated Gaussian distribution [32] to capture them. Specifically, for any value of speed \(g\), the probability density function of the truncated Gaussian distribution is defined as

\[\widehat{F}(g)=\frac{2F\left(g\right)}{\Phi\left(\frac{g_{\text{max}}-\mu_{g}}{\sigma_{g}\sqrt{2}}\right)-\Phi\left(\frac{g_{\text{min}}-\mu_{g}}{\sigma_{g}\sqrt{2}}\right)}, \tag{1}\]

where \(\Phi(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}dt\) is the Gaussian error function, and \(g_{\text{max}}\) and \(g_{\text{min}}\) are defined as the maximum and minimum speeds of vehicles, respectively.

Fig. 2: VC-assisted cooperative DAG task scheduling scenario.

In (1), \(F\left(g\right)\) is the probability density function of a Gaussian distribution, which is given by

\[F\left(g\right)=\frac{1}{\sigma_{g}\sqrt{2\pi}}\exp\left(\frac{-(g-\mu_{g})^{2}}{2\sigma_{g}^{2}}\right), \tag{2}\]

where \(\mu_{g}\) is the average speed of all vehicles, and \(\sigma_{g}\) is the standard deviation.
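To make the mobility model concrete, the following Python sketch samples per-vehicle speeds from the truncated Gaussian in (1)-(2) via rejection sampling. The mean and deviation mirror the simulation setting of Section V (\(\mu_{g}=50\), \(\sigma_{g}=10\)), while the bounds \(g_{\text{min}}=30\), \(g_{\text{max}}=70\) and the function names are illustrative assumptions.

```python
import math
import random

def truncated_gaussian_pdf(g, mu_g=50.0, sigma_g=10.0, g_min=30.0, g_max=70.0):
    """F_hat(g) of Eq. (1), with Phi taken as the Gaussian error function."""
    if g < g_min or g > g_max:
        return 0.0
    F = math.exp(-(g - mu_g) ** 2 / (2 * sigma_g ** 2)) / (sigma_g * math.sqrt(2 * math.pi))
    norm = math.erf((g_max - mu_g) / (sigma_g * math.sqrt(2))) \
         - math.erf((g_min - mu_g) / (sigma_g * math.sqrt(2)))
    return 2 * F / norm  # the factor 2 cancels the 1/2 in the Gaussian CDF difference

def sample_speed(mu_g=50.0, sigma_g=10.0, g_min=30.0, g_max=70.0):
    """Draw one constant per-vehicle speed g_m by rejection sampling."""
    while True:
        g = random.gauss(mu_g, sigma_g)
        if g_min <= g <= g_max:
            return g

speeds = [sample_speed() for _ in range(5)]  # one constant speed per vehicle
```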
Considering that resource provisioning for DAG subtasks is conducted by vehicles located in the VC (i.e., within the coverage of the RSU), we utilize the notion of the _dwell time_ to characterize vehicles' mobility. Specifically, considering that a contact event (i.e., V2V link formation) can happen between two vehicles as long as they have not left the VC, we define the dwell time of vehicle \(v_{m}\) in the VC as the interval \([AT_{m},DT_{m}]\), where \(AT_{m}\) and \(DT_{m}\) represent the arrival and departure times of \(v_{m}\) at and from the VC, respectively, between which vehicle \(v_{m}\) is available to offer its computation resource.

### _Computation Offloading Model_

**Path Loss Model.** Let \((x_{m}(t),y_{m}(t))\) denote the 2D coordinates of each vehicle \(v_{m}\) at time slot \(t\). To consider the impact of the dynamics of VCs on V2V links, we first adopt a dual-slope piecewise-linear model [29] to represent the propagation loss (in dB) between two vehicles \(v_{m}\) and \(v_{n}\), denoted by \(PL\left(d_{m,n}(t)\right)\), as follows:

\[PL\left(d_{m,n}(t)\right)=PL_{\mathsf{LoS}}\left(d_{m,n}(t)\right)+\beta,\ \forall v_{m},v_{n}\in\mathcal{V}, \tag{3}\]

where \(d_{m,n}(t)=\sqrt{(x_{m}(t)-x_{n}(t))^{2}+(y_{m}(t)-y_{n}(t))^{2}}\) (in meters) denotes the Euclidean distance between vehicles \(v_{m}\) and \(v_{n}\) at time slot \(t\), and \(\beta\) is an additional attenuation factor modeled according to a lognormal random variable with mean \(\mu_{\beta}=5+\text{max}(0,15\text{log}_{10}(d_{m,n}(t))-41)\) (in dB) and standard deviation \(\sigma_{\beta}=4.5\) (in dB). In (3), \(PL_{\mathsf{LoS}}\left(d_{m,n}(t)\right)\) is the path loss of the line-of-sight (LoS) transmission between two vehicles, which is given by

\[PL_{\mathsf{LoS}}\left(d_{m,n}(t)\right)=32.4+20\text{log}_{10}(d_{m,n}(t))+20\text{log}_{10}(F_{c})+\delta,\ \forall v_{m},v_{n}\in\mathcal{V}, \tag{4}\]

where \(F_{c}\) is the center frequency (in GHz), and \(\delta\) captures the effect of signal power fluctuations due to surrounding objects, modeled by a lognormal random variable with standard deviation \(\sigma_{\delta}=3\) (in dB). We then introduce the notion of _ready time_, which enables us to develop our scheduling methodology for DAG tasks by taking their sequential execution into account.

**Definition 1**.: _(Ready Time). Ready time \(RT_{i}\) indicates the time when all of the immediate predecessors of subtask \(b_{i}\) are completed/finished, which corresponds to the starting time of data transmission between \(b_{j}\) (\(b_{j}\in\mathcal{P}_{i}\)) and the vehicle that processes \(b_{i}\):_

\[RT_{i}=\max_{b_{j}\in\mathcal{P}_{i}}\{AFT_{j}\},\ b_{i}\in\mathcal{B}, \tag{5}\]

_where \(AFT_{j}\) is the actual finish time of subtask \(b_{j}\) when it is practically executed on a vehicle._

**Transmission Model.** Combining (3) - (5), we let \(TT_{i,m;j,n}\) denote the data transmission time associated with edge \(e_{i,j}\) when subtasks \(b_{i}\) and \(b_{j}\) are allocated to vehicles \(v_{m}\) and \(v_{n}\), respectively, which can be calculated as

\[TT_{i,m;j,n}=\left\{\begin{array}{ll}c_{i,j}\Psi\left(PL\left(d_{m,n}(RT_{j})\right)\right),&m\neq n\\ 0,&m=n\end{array}\right. \tag{6}\]

where \(c_{i,j}\) denotes the transmission data size associated with edge \(e_{i,j}\), and \(\Psi(\cdot)\) maps the path loss to the transmission delay per unit of data.
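A minimal sketch of the path loss (3)-(4) and the transmission time (6) follows, assuming the affine \(\Psi(\cdot)\) used later in Section V, an assumed center frequency of 5.9 GHz, and the shadowing terms treated as Gaussian on the dB scale (one common reading of lognormal shadowing); all helper names and values are illustrative.

```python
import math, random

def path_loss_db(d, fc_ghz=5.9):
    """V2V propagation loss PL(d) of Eqs. (3)-(4), in dB (shadowing drawn per call)."""
    delta = random.gauss(0.0, 3.0)                    # LoS fluctuation, sigma_delta = 3 dB
    pl_los = 32.4 + 20 * math.log10(d) + 20 * math.log10(fc_ghz) + delta
    mu_beta = 5 + max(0.0, 15 * math.log10(d) - 41)   # mean extra attenuation, Eq. (3)
    beta = random.gauss(mu_beta, 4.5)                 # sigma_beta = 4.5 dB
    return pl_los + beta

def psi(pl):
    """Delay per unit data; the affine form Psi(PL) = 0.15 PL + 0.001 of Section V."""
    return 0.15 * pl + 0.001

def transmission_time(c_ij, d_mn, same_vehicle):
    """TT of Eq. (6): c_ij * Psi(PL(d)) across vehicles, zero on the same vehicle."""
    return 0.0 if same_vehicle else c_ij * psi(path_loss_db(d_mn))

tt = transmission_time(c_ij=300, d_mn=120.0, same_vehicle=False)  # data size in KB
```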
Letting \(EST_{i,m}\) and \(EFT_{i,m}\) denote the earliest start time and the earliest finish time of executing subtask \(b_{i}\) on vehicle \(v_{m}\), and collecting the binary allocation indicators in \(\mathcal{I}=\{\xi_{i,m}\mid b_{i}\in\mathcal{B},v_{m}\in\mathcal{V}\}\), we formulate the problem of DAG task scheduling over the VC as the following mixed integer programming (MIP):

\[\min_{\mathcal{I}}\ \max_{b_{i}\in\mathcal{B},v_{m}\in\mathcal{V}}\{EFT_{i,m}\xi_{i,m}\}, \tag{9}\]

s.t.

\[\sum_{v_{m}\in\mathcal{V}}\xi_{i,m}=1,\ b_{i}\in\mathcal{B},\] (C1)
\[\xi_{i,m}\in\{0,1\},\ b_{i}\in\mathcal{B},\ v_{m}\in\mathcal{V},\] (C2)
\[\bigcap_{b_{i}\in\mathcal{B}_{m}}[EST_{i,m},EFT_{i,m}]=\varnothing,\ v_{m}\in\mathcal{V},\] (C3)
\[EST_{i,m}\geq EFT_{j,n},\ b_{i}\in\mathcal{B},\ b_{j}\in\mathcal{P}_{i},\ v_{m},v_{n}\in\mathcal{V},\] (C4)
\[\left[EST_{i,m},EFT_{i,m}\right]\subset\left[AT_{m},DT_{m}\right],\ b_{i}\in\mathcal{B},\ v_{m}\in\mathcal{V}.\] (C5)

In (9), the objective function captures the sequential execution of DAG subtasks, where the maximum finish time of all subtasks indicates the overall DAG task completion time. Also, constraint (C1) guarantees that each subtask is allocated to only one vehicle, while (C2) restricts the value of the allocation indicator \(\xi_{i,m}\) to be binary. Constraint (C3) ensures that a vehicle can only process one subtask at a time, where \(\mathcal{B}_{m}=\{b_{i}\ |\ \xi_{i,m}=1,\ 0\leq i\leq|\mathcal{B}|\}\) denotes the set of subtasks processed on vehicle \(v_{m}\). Constraint (C4) indicates that the processing of a subtask cannot start until all of its predecessors are completed, while (C5) guarantees the availability of the computation resources of vehicles with respect to the vehicles' dwell times: the earliest start time and earliest finish time of executing each subtask \(b_{i}\) on vehicle \(v_{m}\) should fall between the arrival time and departure time of vehicle \(v_{m}\) in the VC.

MIP formulations similar to (9) are known to be NP-hard [36]. Also, considering the sequential execution of DAG subtasks (i.e., different subtasks may be executed at different time slots), we need prior knowledge of the V2V path loss and the availability of vehicles' computation resources, which is cumbersome to obtain in practice. As a result, to tackle these challenges, we propose a GNN-augmented DRL scheme, named GA-DRL, to efficiently find near-optimal solutions for (9).

## IV GNN-Augmented DRL (GA-DRL) for DAG Task Scheduling over Dynamic VCs

In this section, we first provide an overview of our GA-DRL methodology and the challenges we aim to address. We then tailor a GAT module for extracting features of subtasks. Subsequently, the VC-assisted DAG task scheduling is modeled as an MDP consisting of the state space, action space, and reward. Finally, we utilize a DDQN architecture to tackle (9) and discuss its training procedure.

### _GA-DRL Overview and Challenges_

#### IV-A1 GA-DRL overview

Our method takes a different approach from traditional DRL methods developed for task scheduling [16, 18, 25], which only consider predetermined states, such as computation workload, data size, and number of subtask predecessors/successors. Instead, we propose a GNN-augmented DRL approach that automatically learns distinctive subtask features and creates assignments between subtasks and vehicles. In particular, as shown in Fig. 3, the features of subtasks are acquired through a GAT module, rather than being predetermined. Our GA-DRL conducts subtask-to-vehicle allocations through a sequence of decision steps. At each decision step \(k\), the DRL agent functioning at the RSU collects relevant data on the system _state_ \(s^{(k)}\), which includes the extracted features of the current subtask obtained by the GAT, as well as the parameters of the vehicles describing their dynamics and heterogeneity. The DRL agent then feeds state \(s^{(k)}\) into a DDQN.
The objective of the DDQN is to effectively assign subtasks to vehicles by determining the best course of _action_ \(a^{(k)}\). To this end, the DDQN evaluates the value of each state-action combination and conducts a subtask-to-vehicle allocation \(a^{(k)}\), which moves the system to the next state \(s^{(k+1)}\). Finally, the DRL agent receives a _reward_ \(r^{(k)}\), which aids in the training of a deep learning model. This, in turn, enhances the agent's ability to take better actions over time.

#### IV-A2 Main challenges

When applying GA-DRL to VC-assisted DAG task scheduling, there are two main challenges that need to be tackled.

**(1) Feasibility of allocation decisions.** Unlike static computing environments that have stable, fully-connected computing servers [3, 4, 6], the dynamics of a VC's resources can greatly affect the execution of DAG subtasks.

Fig. 3: A schematic of our GNN-augmented DRL scheme for VC-assisted DAG task scheduling.

This is captured by constraint (C5) in (9), the satisfaction of which guarantees that the time-interval of processing subtask \(b_{i}\) on vehicle \(v_{m}\) lies within the dwell time of \(v_{m}\). Ensuring that subtask-to-vehicle allocation decisions are feasible (specifically, meeting constraint (C5)) can be difficult because neural networks typically lack a module to filter out infeasible actions.

**(2) Generalizability of the designed GNN.** Efficient inductive learning is a key feature of GAT [31], which makes it suitable for use with previously unseen graph topologies. However, it can be difficult to ensure that the GNN model is applicable to various DAG tasks, as each task has its own unique topology and interdependence among subtasks. To overcome this challenge, we must carefully encode the information of each DAG task's topology to achieve meaningful results when combined with our later developed GAT. Table II summarizes the major notations used in this section.

### _Graph Neural Network_

In this subsection, we explain the structure of GNNs and how we use a multi-head GAT to extract distinctive features of subtasks. Our GAT incorporates a two-way aggregation method that considers the topological information of both predecessors and successors of each subtask. To further enhance the adaptability of our GAT to new DAG tasks, we utilize a ranking-based sampling technique.

#### III-B1 Architecture of GNNs

The architecture of a GNN is depicted in Fig. 4, where a GNN takes the raw features of all subtasks as the input and subsequently generates result features containing the corresponding topological information of the DAG task. Specifically, the GNN utilizes an **Aggregate** function to accumulate the topological information passed by the neighbors of each subtask. The accumulated information is then modified through a nonlinear **Update** function. This procedure is repeated \(L\) times to create the result feature for each subtask.

**Raw feature of each subtask.** Similar to conventional DRL methods [20, 21, 22, 23, 24], which rely on human-selected information to define DAG subtasks, we also define the raw feature3 of each subtask \(b_{i}\) as
\[h_{i}^{(0)}=\{u_{i},\overline{c_{i}},|\mathcal{P}_{i}|,|\mathcal{S}_{i}|\},\ b_{i}\in\mathcal{B}, \tag{10}\]

where \(u_{i}\) is the computation workload of subtask \(b_{i}\), and \(\overline{c_{i}}=\sum_{b_{j}\in\mathcal{S}_{i}}\frac{c_{i,j}}{|\mathcal{S}_{i}|}\) indicates the average transmission data size associated with edges \(e_{i,j}\), \(b_{j}\in\mathcal{S}_{i}\). Also, \(|\mathcal{P}_{i}|\) and \(|\mathcal{S}_{i}|\) represent the number of predecessors and successors of subtask \(b_{i}\), respectively.

Footnote 3: Super-index \(0\) is used to capture that these are initial features of the subtask, which are later processed and enhanced through GNN.

**Neighbor set of each subtask.** Considering that DAG subtasks are executed sequentially, we define \(\mathcal{N}_{i}\) as the neighbor set of each subtask \(b_{i}\), which includes all of its predecessors as well as \(b_{i}\) itself; mathematically

\[\mathcal{N}_{i}=\{b_{j}\mid e_{j,i}\in\mathcal{E}\}\cup\{b_{i}\}. \tag{11}\]

Through an iterative process involving the use of **Update** and **Aggregate** functions, the GNN obtains the result feature of each subtask \(b_{i}\). Mathematically, at each iteration \(\ell\), we have

\[h_{i}^{(\ell+1)}=\textbf{Update}(\textbf{Aggregate}(\{h_{j}^{(\ell)}|b_{j}\in\mathcal{N}_{i}\})), \tag{12}\]

where \(h_{i}^{(\ell+1)}\) denotes the result feature of subtask \(b_{i}\) at iteration \(\ell\). Through \(L\) iterations, the GNN derives the final result feature for each subtask, denoted by \(h_{i}^{(L)}\). This feature incorporates both the raw feature of each subtask (i.e., at \(\ell=0\); \(h_{i}^{(0)}\)), as well as the topological information from neighboring subtasks (i.e., \(\mathcal{N}_{i}\)) within the DAG task. Hereafter, we detail the **Aggregate** and the **Update** functions designed to extract the features of DAG subtasks.

#### III-B2 Multi-head GAT

Considering that the subtasks involved in \(\mathcal{N}_{i}\) have different computation workloads, transmission data sizes, and interdependencies, we employ an attention mechanism inspired by [31] to assign diverse weights to subtasks with the aim of enhancing the information of key subtasks. Specifically, at each iteration \(\ell\), we define an attention-based aggregation function called \(\textbf{Aggregate}^{\textbf{at}}\) as

\[\textbf{Aggregate}^{\textbf{at}}(\{h_{j}^{(\ell)}|b_{j}\in\mathcal{N}_{i}\})=\sum\limits_{b_{j}\in\mathcal{N}_{i}}\alpha_{i,j}^{(\ell)}W^{(\ell)}h_{j}^{(\ell)}, \tag{13}\]

where \(W^{(\ell)}\) is a trainable weight matrix at iteration \(\ell\), and \(\alpha_{i,j}^{(\ell)}\) is a normalized attention coefficient at iteration \(\ell\), which measures the relative importance of subtask \(b_{j}\) to subtask \(b_{i}\) as follows:

\[\alpha_{i,j}^{(\ell)}=\frac{\exp\left(A^{(\ell)}[W^{(\ell)}h_{i}^{(\ell)}||W^{(\ell)}h_{j}^{(\ell)}]\right)}{\sum\limits_{b_{j^{\prime}}\in\mathcal{N}_{i}}\exp\left(A^{(\ell)}[W^{(\ell)}h_{i}^{(\ell)}||W^{(\ell)}h_{j^{\prime}}^{(\ell)}]\right)}. \tag{14}\]

In (14), \(A^{(\ell)}\) is a trainable vector at iteration \(\ell\), and \(\cdot||\cdot\) denotes vector concatenation. Further, in order to enhance the effectiveness of the GAT's learning process, we propose to use a multi-head GAT, where different attention heads learn to give more relevant weights to different subtasks. Let \(Z\) denote the total number of heads. Each attention head, denoted by \(z\), will individually aggregate the topological information of subtasks, in conjunction with other attention modules.

Fig. 4: A schematic of the architecture of the GNN for extracting the features of the DAG task depicted in Fig. 1. The colored squares in the diagram correspond to different features.
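For concreteness, the sketch below builds the raw features (10) and neighbor sets (11) for the toy DAG of Fig. 1; the workload and data-size values are made up for illustration.

```python
# Toy DAG of Fig. 1: edges e_{1,2}, e_{1,3}, e_{2,4}, e_{3,4}.
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
u = {1: 1.5, 2: 1.2, 3: 1.8, 4: 1.1}                      # workloads (illustrative)
c = {(1, 2): 300, (1, 3): 200, (2, 4): 150, (3, 4): 250}  # data sizes in KB

subtasks = sorted(u)
P = {i: {j for (j, k) in edges if k == i} for i in subtasks}  # predecessors
S = {i: {k for (j, k) in edges if j == i} for i in subtasks}  # successors

def raw_feature(i):
    """h_i^{(0)} = {u_i, c_bar_i, |P_i|, |S_i|} of Eq. (10)."""
    c_bar = sum(c[(i, j)] for j in S[i]) / len(S[i]) if S[i] else 0.0
    return [u[i], c_bar, len(P[i]), len(S[i])]

N = {i: P[i] | {i} for i in subtasks}  # neighbor set of Eq. (11)
h0 = {i: raw_feature(i) for i in subtasks}
```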
The multi-head attention-based aggregation function, called \(\textbf{Aggregate}^{\text{mat}}\), can then be formulated as

\[\textbf{Aggregate}^{\text{mat}}(\{h_{j}^{(\ell)}|b_{j}\in\mathcal{N}_{i}\})=\frac{1}{Z}\sum_{z=1}^{Z}\left(\sum_{b_{j}\in\mathcal{N}_{i}}\alpha_{i,j}^{(\ell)(z)}W^{(\ell)(z)}h_{j}^{(\ell)}\right), \tag{15}\]

where iteration index \(\ell\) and head index \(z\) are both used as superscripts hereafter. To better suit our problem, we aim to modify the \(\textbf{Aggregate}^{\text{mat}}\) function defined in (15) through developing a two-way aggregation for the multi-head GAT. This approach takes into consideration the predecessors and successors of each subtask, which helps to aggregate topological information in a more effective manner.

#### III-B3 Two-way aggregation

To execute DAG subtasks, capturing the conditions of the predecessors and successors of each subtask is equally important. As a result, we develop a two-way aggregation approach that utilizes two different types of attention heads. This approach involves using the inverse neighbor set \(\mathcal{N}_{i}^{-1}\) of each subtask \(b_{i}\), which includes all of its successors and \(b_{i}\) itself:

\[\mathcal{N}_{i}^{-1}=\{b_{j}\mid e_{i,j}\in\mathcal{E}\}\cup\{b_{i}\}. \tag{16}\]

At each iteration \(\ell\), half of the \(Z\) attention heads are then allocated to collect topological information from the neighboring subtasks, while the remaining half is utilized to gather topological information from the inverse neighbor set, which leads to the modification of (15) to

\[\textbf{Aggregate}^{\text{mat}}(h_{j}^{(\ell)})=\frac{1}{Z}\Bigg{[}\sum_{z=1}^{Z/2}\Bigg{(}\sum_{b_{j}\in\mathcal{N}_{i}}\alpha_{i,j}^{(\ell)(z)}W^{(\ell)(z)}h_{j}^{(\ell)}\Bigg{)}+\sum_{z=Z/2}^{Z}\Bigg{(}\sum_{b_{j}\in\mathcal{N}_{i}^{-1}}\alpha_{i,j}^{(\ell)(z)}W^{(\ell)(z)}h_{j}^{(\ell)}\Bigg{)}\Bigg{]}. \tag{17}\]

We then aim to further modify the \(\textbf{Aggregate}^{\text{mat}}\) function defined in (17). This makes our GAT module different from other existing GNNs [34, 35, 36, 37]: we do not consider all the neighbors of a given subtask to accumulate topological information. Instead, we opt for a weighted sample of neighbors based on their scheduling priority. This approach allows our GAT to be more generalizable to unseen DAG task topologies. We next describe this approach.

#### III-B4 Ranking-based sampling

We first devise an approach to prioritize the scheduling of subtasks based on their ranking value. By employing a recursive method, we determine the ranking value of each subtask \(b_{i}\), labeled as \(rank_{i}\), as follows:

\[rank_{i}=\max_{b_{j}\in\mathcal{P}_{i}}\{rank_{j}+\overline{u_{j}}+\overline{c_{j,i}}\},\ b_{i}\in\mathcal{B}, \tag{18}\]

where \(\overline{u_{j}}\) is the average execution cost of subtask \(b_{j}\), \(b_{j}\in\mathcal{P}_{i}\), which is given by

\[\overline{u_{j}}=\frac{\sum_{m=1}^{|\mathcal{V}|}u_{j}/f_{m}}{|\mathcal{V}|},\ v_{m}\in\mathcal{V}, \tag{19}\]

and \(\overline{c}_{j,i}\) denotes the average transmission cost associated with edge \(e_{j,i}\), \(b_{j}\in\mathcal{P}_{i}\), at the beginning (i.e., at time slot \(0\)), which is given by

\[\overline{c_{j,i}}=\frac{\sum_{m=1}^{|\mathcal{V}|}\sum_{n=1}^{|\mathcal{V}|}c_{j,i}\Psi\left(PL\left(d_{m,n}\left(0\right)\right)\right)}{\left|\mathcal{V}\right|^{2}}. \tag{20}\]
Assuming \(rank_{0}=0\) for virtual subtask \(b_{0}\), we maintain a subtask scheduling priority list \(\mathcal{L}^{\text{rank}}\) as

\[\mathcal{L}^{\text{rank}}=\{b_{i}\succ b_{j}\mid b_{i},b_{j}\in\mathcal{B},\ rank_{i}<rank_{j}\}, \tag{21}\]

where the preference relation \(b_{i}\succ b_{j}\) indicates that subtask \(b_{i}\) has a higher scheduling priority compared with subtask \(b_{j}\) due to a lower value of \(rank_{i}\)4.

Footnote 4: Our current ranking method for DAG subtasks relies on heuristics, which may limit the GNN-augmented DRL algorithm’s ability. We plan to address this issue by exploring alternative methods for determining the scheduling priority of DAG subtasks using DRL in the future.

Finally, we define \(\mathcal{N}_{i}^{\text{rank}}\) as a ranking-based neighbor set of subtask \(b_{i}\), which contains the subtasks sampled from \(\mathcal{N}_{i}\). The sampling probability/weight of subtask \(b_{j}\) from \(\mathcal{N}_{i}\) to be included in \(\mathcal{N}_{i}^{\text{rank}}\), denoted by \(p_{j}\), is calculated as

\[p_{j}=\frac{\exp\left(rank_{j}\right)}{\sum_{b_{j^{\prime}}\in\mathcal{P}_{i}}\exp\left(rank_{j^{\prime}}\right)}. \tag{22}\]

This weighted subtask sampling method improves the generalizability of our method by intentionally losing the topological information passed by the subtasks that are not sampled, which makes our GAT model less sensitive to topological variations in DAG tasks. This resembles the dropout [19] mechanism widely leveraged in training deep neural networks. Note that subtask sampling is done with replacement if the sample size is larger than the size of \(\mathcal{N}_{i}\).

**Aggregate function.** By integrating the aforementioned methodologies, our designed \(\textbf{Aggregate}^{\text{mat}}\) function not only enables information enhancement of key subtasks by considering a two-way multi-head attention-based aggregation, but also improves generalizability by considering a ranking-based sampling; mathematically

\[\textbf{Aggregate}^{\text{mat}}(h_{j}^{(\ell)})=\frac{1}{Z}\Bigg{[}\sum_{z=1}^{Z/2}\Bigg{(}\sum_{b_{j}\in\mathcal{N}_{i}^{\text{rank}}}\alpha_{i,j}^{(\ell)(z)}W^{(\ell)(z)}h_{j}^{(\ell)}\Bigg{)}+\sum_{z=Z/2}^{Z}\Bigg{(}\sum_{b_{j}\in\mathcal{N}_{i}^{-\text{rank}}}\alpha_{i,j}^{(\ell)(z)}W^{(\ell)(z)}h_{j}^{(\ell)}\Bigg{)}\Bigg{]}, \tag{23}\]

where \(\mathcal{N}_{i}^{-\text{rank}}\) is the inverse ranking-based neighbor set of subtask \(b_{i}\), sampled from \(\mathcal{N}_{i}^{-1}\) using a sampling method similar to that described in (22).

**Update function.** After receiving the aggregated topological information in (23), we apply the exponential linear unit (ELU) activation [48] in the **Update** function. Finally, combining the aforementioned **Aggregate** and **Update** functions, we can express (12) as

\[h_{i}^{(\ell+1)}=\textbf{ELU}\Bigg{(}\frac{1}{Z}\Bigg{[}\sum_{z=1}^{Z/2}\Bigg{(}\sum_{b_{j}\in\mathcal{N}_{i}^{\text{rank}}}\alpha_{i,j}^{(\ell)(z)}W^{(\ell)(z)}h_{j}^{(\ell)}\Bigg{)}+\sum_{z=Z/2}^{Z}\Bigg{(}\sum_{b_{j}\in\mathcal{N}_{i}^{-\text{rank}}}\alpha_{i,j}^{(\ell)(z)}W^{(\ell)(z)}h_{j}^{(\ell)}\Bigg{)}\Bigg{]}\Bigg{)}. \tag{24}\]

In our experiments, we found that our approach could achieve high performance with \(L=2,Z=4\), where \(W^{(1)}\in\mathbb{R}^{4\times 16}\), \(A^{(1)}\in\mathbb{R}^{32\times 1}\), and \(W^{(2)}\in\mathbb{R}^{16\times 32}\), \(A^{(2)}\in\mathbb{R}^{64\times 1}\).
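Putting the pieces of this subsection together, the following sketch implements the attention coefficients (14), the ranking-based sampling (22), and one two-way, ELU-updated aggregation step in the spirit of (24), using a single attention head per direction; the matrix shapes, demo sets, and rank values are illustrative and do not reproduce the exact \(L=2\), \(Z=4\) configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_coeffs(h, nbrs, i, W, A):
    """Normalized attention alpha_{i,j} of Eq. (14) over a (sampled) neighbor set."""
    scores = {j: float(A @ np.concatenate([W @ h[i], W @ h[j]])) for j in nbrs}
    mx = max(scores.values())                           # stabilized softmax
    expo = {j: np.exp(s - mx) for j, s in scores.items()}
    total = sum(expo.values())
    return {j: e / total for j, e in expo.items()}

def rank_sample(nbrs, rank, size):
    """Ranking-based sampling of Eq. (22): weight proportional to exp(rank_j)."""
    nbrs = list(nbrs)
    w = np.exp([rank[j] for j in nbrs])
    return list(rng.choice(nbrs, size=size, replace=True, p=w / w.sum()))

def aggregate_update(h, i, N, N_inv, rank, W_f, A_f, W_b, A_b, sample_size=2):
    """One forward head over N_i^rank plus one backward head over N_i^-rank, then ELU."""
    out = 0.0
    for nbrs, W, A in [(N[i], W_f, A_f), (N_inv[i], W_b, A_b)]:
        sampled = set(rank_sample(nbrs, rank, sample_size)) | {i}
        alpha = attention_coeffs(h, sampled, i, W, A)
        out = out + sum(alpha[j] * (W @ h[j]) for j in sampled)
    out = out / 2.0                                     # average over the two heads here
    return np.where(out > 0, out, np.exp(out) - 1)      # ELU update

# Tiny demo with random parameters; input dimension follows the raw feature length 4.
dim_in, dim_out = 4, 8
W1, A1 = rng.normal(size=(dim_out, dim_in)), rng.normal(size=2 * dim_out)
W2, A2 = rng.normal(size=(dim_out, dim_in)), rng.normal(size=2 * dim_out)
h = {i: rng.normal(size=dim_in) for i in range(1, 5)}
N, N_inv = {2: {1, 2}}, {2: {2, 4}}
rank = {1: 0.0, 2: 1.3, 3: 1.1, 4: 2.9}
h2_next = aggregate_update(h, 2, N, N_inv, rank, W1, A1, W2, A2)
```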
A flow chart of the relationships between the components developed for the **Aggregate** function is shown in Fig. 5. Also, **Algorithm** 1 details the corresponding procedure of our GAT module with computation complexity \(\mathcal{O}(|\mathcal{B}|LZ)\), where we assume a set of learned parameters (i.e., \(W^{(\ell)(z)}\) and \(A^{(\ell)(z)}\)). These parameters are later optimized in conjunction with the DDQN parameters. We next formulate DAG task scheduling as an MDP with state, action, and reward representations.

### _State Representation_

The result feature of each subtask \(b_{i}\), denoted by \(h_{i}^{(L)}\), is generated through \(L\) consecutive iterations in (24). We conduct subtask-to-vehicle assignments through a series of decision steps indexed by \(k\). At each decision step \(k\), let subtask \(b_{\tau(k)}\) be the _current subtask_ waiting to be allocated to a vehicle, where \(\tau(k)\) indicates the subtask's index at position \(k\) in \(\mathcal{L}^{\text{rank}}\). We define the system state \(s^{(k)}\) as follows:

\[s^{(k)}=\Big{\{}h_{\tau(k)}^{(L)},\mathcal{I}^{(k-1)},\mathcal{A}^{(k)},\mathcal{O}^{(k)}\Big{\}}, \tag{25}\]

where \(\mathcal{I}^{(k-1)}\) denotes the subtask-to-vehicle allocation decisions for the subtasks located before the current subtask \(b_{\tau(k)}\) in \(\mathcal{L}^{\text{rank}}\), and \(\mathcal{A}^{(k)}=\{avail_{1},avail_{2},\cdots,avail_{|\mathcal{V}|}\}\) is the availability indicator set at the instant of decision step \(k\), where \(avail_{m}=1\) denotes that vehicle \(v_{m}\) is available for offering its computation resource or processing the current subtask, and \(avail_{m}=0\) otherwise. Also, \(\mathcal{O}^{(k)}=\{(x_{m},y_{m})\mid v_{m}\in\mathcal{V}\}\) is the instantaneous location of vehicles at decision step \(k\).

### _Action Space_

During each decision step \(k\), we need to determine which vehicle should be assigned to each subtask based on the system state \(s^{(k)}\) and the subtask scheduling priority list \(\mathcal{L}^{\text{rank}}\). In particular, at decision step \(k\), for current subtask \(b_{\tau(k)}\), action \(a^{(k)}\) is defined as

\[a^{(k)}\in\{1,2,\cdots,|\mathcal{V}|\}, \tag{26}\]

where \(a^{(k)}=1\) implies that current subtask \(b_{\tau(k)}\) is processed locally on task owner \(v_{1}\), and \(a^{(k)}\in\{2,\cdots,|\mathcal{V}|\}\) implies that current subtask \(b_{\tau(k)}\) is allocated to another vehicle for a faster execution.

### _Reward Design_

At decision step \(k\), given state \(s^{(k)}\), we associate performing action \(a^{(k)}\) for allocating the current subtask \(b_{\tau(k)}\) with an immediate reward \(r^{(k)}\), leveraged to evaluate the quality of action \(a^{(k)}\). We define the reward \(r^{(k)}\) as the decrease in the \(EFT\) of all subtasks:

\[r^{(k)}=\underbrace{\max_{b_{i}\in\mathcal{B},v_{m}\in\mathcal{V}}\{EFT_{i,m}^{(k-1)}\}}_{\text{(I)}}-\underbrace{\max_{b_{i}\in\mathcal{B},v_{m}\in\mathcal{V}}\{EFT_{i,m}^{(k)}\}}_{\text{(II)}}, \tag{27}\]

where terms (I) and (II) denote the maximum DAG task completion time before and after scheduling the current subtask, respectively. We next demonstrate the rationality of the reward function introduced above.

Fig. 5: A flow chart of the relationships between different components of our methodology in Section IV-B.
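As a quick numerical illustration of (27), the sketch below tracks a hypothetical sequence of makespans (maximum \(EFT\)s) across decision steps and verifies the telescoping identity derived next; the makespan values are invented.

```python
def reward(prev_makespan, new_makespan):
    """Immediate reward r^{(k)} of Eq. (27): decrease in the maximum EFT."""
    return prev_makespan - new_makespan

# Hypothetical makespans after scheduling subtasks 0..K (step 0 = virtual b_0).
makespans = [0.0, 4.0, 6.5, 6.5, 9.0]  # illustrative values

rewards = [reward(makespans[k - 1], makespans[k]) for k in range(1, len(makespans))]
# With gamma_1 = 1, the rewards telescope to minus the final completion time, cf. Eq. (29).
assert abs(sum(rewards) + makespans[-1]) < 1e-9
```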
**Rationale of the Choice of Reward.** Let \(K\) denote the total number of decision steps. According to (27), the discounted cumulative reward can be calculated as

\[R=\sum_{k=1}^{K}\gamma_{1}^{k}r^{(k)}=\sum_{k=1}^{K}\gamma_{1}^{k}\Big{(}\max_{b_{i}\in\mathcal{B},v_{m}\in\mathcal{V}}\{EFT_{i,m}^{(k-1)}\}-\max_{b_{i}\in\mathcal{B},v_{m}\in\mathcal{V}}\{EFT_{i,m}^{(k)}\}\Big{)}, \tag{28}\]

where \(\gamma_{1}\) is the discount factor. Assuming \(\gamma_{1}=1\) for simplicity, since at decision step \(k\) we determine the allocation of only the current subtask \(b_{\tau(k)}\) according to the scheduling priority list \(\mathcal{L}^{\text{rank}}\), we have \(K=|\mathcal{L}^{\text{rank}}|=|\mathcal{B}|\). Thus, (28) can be rewritten as

\[R=\sum_{k=1}^{K}\Big{(}\max_{v_{m}\in\mathcal{V}}\{EFT_{\tau(k-1),m}\}-\max_{v_{m}\in\mathcal{V}}\{EFT_{\tau(k),m}\}\Big{)}\]
\[=\Big{(}\max_{v_{m}\in\mathcal{V}}\{EFT_{\tau(0),m}\}-\max_{v_{m}\in\mathcal{V}}\{EFT_{\tau(1),m}\}+\max_{v_{m}\in\mathcal{V}}\{EFT_{\tau(1),m}\}-\cdots-\max_{v_{m}\in\mathcal{V}}\{EFT_{\tau(K),m}\}\Big{)}\]
\[=-\Big{(}\max_{v_{m}\in\mathcal{V}}\{EFT_{\tau(K),m}\}-\max_{v_{m}\in\mathcal{V}}\{EFT_{\tau(0),m}\}\Big{)}, \tag{29}\]

where we define \(b_{\tau(0)}\) as the virtual subtask with \(\max_{v_{m}\in\mathcal{V}}\{EFT_{\tau(0),m}\}=0\). The last result in (29) (i.e., the term \(-\max_{v_{m}\in\mathcal{V}}\{EFT_{\tau(K),m}\}\)) indicates that maximizing the cumulative reward is consistent with minimizing the task completion time given in (9). Hereafter, in order to solve the above-mentioned MDP, we resort to a DDQN, which adopts the action (i.e., subtask-to-vehicle allocation) yielding the largest Q-value (i.e., state-action value) at each decision step of DAG task scheduling over dynamic VCs.

### _Double Deep Q-Network_

#### IV-F1 Deep Q-network

We first describe the DQN methodology [49], which paves the way for DDQN. In DQN, we have two deep neural networks (DNNs) called the _predict_ Q-network \(Q(s,a;\theta^{\mathsf{p}})\) and the _target_ Q-network \(Q(s,a;\theta^{\mathsf{t}})\). Particularly, \(\theta^{\mathsf{p}}\) and \(\theta^{\mathsf{t}}\) are the vectors of weights/parameters of the DNNs, and \(s\) and \(a\) denote the state and action, respectively.

**Predict Q-value.** At each decision step \(k\), given state \(s^{(k)}\), using the predict Q-network, the DRL agent first estimates/predicts the Q-value \(Q(s^{(k)},a;\theta^{\mathsf{p}})\) of all actions \(a=1,2,\cdots,|\mathcal{V}|\), where \(s^{(k)}\) consists of the extracted feature of the current subtask \(b_{\tau(k)}\) and the vehicles' parameters given in (25). The Q-value is a measure of the quality of the action: a higher Q-value is an indicator of a better action.

**Action selection.** The DRL agent then performs an action \(a^{(k)}\) using a max estimator as follows:

\[a^{(k)}=\text{argmax}_{a}Q(s^{(k)},a;\theta^{\mathsf{p}}),\ a\in\{1,2,\cdots,|\mathcal{V}|\}. \tag{30}\]

The DRL agent then receives a reward \(r^{(k)}\) computed by (27).

**Target Q-value.** The system subsequently transitions to the next state \(s^{(k+1)}\), and the DRL agent resorts to the target Q-network for calculating the target Q-value of state \(s^{(k)}\), denoted by \(\mathsf{y}^{(k)}\):

\[\mathsf{y}^{(k)}=r^{(k)}+\gamma_{2}Q\left(s^{(k+1)},\text{argmax}_{a}Q\left(s^{(k+1)},a;\theta^{\mathsf{t}}\right);\theta^{\mathsf{t}}\right),\ a\in\{1,2,\cdots,|\mathcal{V}|\}. \tag{31}\]
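A compact sketch of the action-selection and target-value computations (30)-(31) follows; `select_action` additionally applies the \(\epsilon\)-greedy rule discussed in Section IV-G (greedy with probability \(\epsilon\)), and the Q-vectors are placeholder values rather than actual network outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def select_action(q_predict, epsilon=0.9):
    """Eq. (30) plus epsilon-greedy exploration (greedy with probability epsilon)."""
    if rng.random() > epsilon:                 # explore with probability 1 - epsilon
        return int(rng.integers(len(q_predict)))
    return int(np.argmax(q_predict))

def target_value(r, q_target_next, gamma2=0.9):
    """Target y^{(k)} of Eq. (31): select and evaluate with the target network."""
    return r + gamma2 * q_target_next[int(np.argmax(q_target_next))]

q_pred = np.array([0.2, 0.8, 0.5])             # Q(s^{(k)}, a; theta_p) for |V| = 3
a_k = select_action(q_pred)
y_k = target_value(r=1.0, q_target_next=np.array([0.1, 0.4, 0.3]))  # y = 1.36
```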
To obtain the parameter \(\theta^{\mathsf{p}}\), the mean square error, denoted by \(\mathsf{G}(\theta^{\mathsf{p}})\), is used with discount factor \(\gamma_{2}\) as follows:

\[\mathsf{G}(\theta^{\mathsf{p}})=\frac{1}{2}\left[\mathsf{y}^{(k)}-Q(s^{(k)},a^{(k)};\theta^{\mathsf{p}})\right]^{2}. \tag{32}\]

#### IV-F2 Double deep Q-network

In (31), the target network \(\theta^{\mathsf{t}}\) is used both to select and to evaluate the action for the next state, which is known to cause an overestimation of Q-values. DDQN decouples these two roles: the action is selected by the predict network \(\theta^{\mathsf{p}}\), while its value is evaluated by the target network \(\theta^{\mathsf{t}}\), i.e.,

\[\mathsf{y}^{(k)}=r^{(k)}+\gamma_{2}Q\left(s^{(k+1)},\text{argmax}_{a^{*}}Q\left(s^{(k+1)},a^{*};\theta^{\mathsf{p}}\right);\theta^{\mathsf{t}}\right),\ a^{*}\in\{1,2,\cdots,|\mathcal{V}|\}, \tag{33}\]

and the corresponding loss becomes

\[\mathsf{G}(\theta^{\mathsf{p}})=\left[\mathsf{y}^{(k)}-Q(s^{(k)},a^{(k)};\theta^{\mathsf{p}})\right]^{2}. \tag{34}\]

Using (34), the parameter \(\theta^{\mathsf{p}}\) is obtained, and the weights of the target network \(\theta^{\mathsf{t}}\) are then periodically copied from the predict network \(\theta^{\mathsf{p}}\).

### _Training Process_

We consider training of the DRL through a series of _episodes_, where each episode contains a total of \(K\) sequential decision steps. At each decision step \(k\), the DRL agent generates a pair of observations \((s^{(k)},a^{(k)},r^{(k)},s^{(k+1)})\). An episode is considered to be complete when a vehicle is assigned the subtask with the lowest scheduling priority, which is listed in the last position of \(\mathcal{L}^{\mathsf{rank}}\) (i.e., \(K=|\mathcal{B}|\)).

#### IV-G1 Q-network training

Based on the policy gradient algorithm [40], the predict Q-network \(Q(s,a;\theta^{\mathsf{p}})\) is trained by iteratively tuning the weights \(\theta^{\mathsf{p}}\) at each decision step \(k\) through minimizing the mean square error given in (34) as follows:

\[\theta^{\mathsf{p}}\leftarrow\theta^{\mathsf{p}}-\mu\frac{\partial\mathsf{G}(\theta^{\mathsf{p}})}{\partial\theta^{\mathsf{p}}}, \tag{35}\]

where \(\mu\) is the tunable learning rate. As for the target Q-network \(Q(s,a;\theta^{\mathsf{t}})\), \(\theta^{\mathsf{t}}\) is copied from \(\theta^{\mathsf{p}}\) at the beginning, and \(\theta^{\mathsf{t}}\) will be iteratively updated to \(\theta^{\mathsf{p}}\) after conducting some iterations (5 decision steps in our simulations). We adopt an \(\epsilon\)-greedy policy to select actions, in which the DRL agent probabilistically explores the actions that have not been adopted yet instead of the action with the maximum Q-value in (30). Also, we leverage a replay buffer \(\mathcal{R}\) to store the sequence of \((s^{(k)},a^{(k)},r^{(k)},s^{(k+1)})\) obtained through decision steps \(k\). In particular, the gradient in (35) is obtained by selecting mini-batches of data from the replay buffer.

At each decision step \(k\), we consider the feasibility of actions for the current subtask \(b_{\tau(k)}\). Actions that meet constraint (C5) are defined as feasible, while the others are infeasible. We leverage the action mask [47] technique to prevent the DDQN from performing infeasible actions. In this approach, the Q-value of an infeasible action is set to a large negative value, to ensure that taken actions are feasible.

#### IV-G2 GAT training

The state \(s^{(k)}\), which consists of the extracted features of the current subtask \(b_{\tau(k)}\), is obtained from the GAT with parameters \(\mathcal{W}=\{W^{(\ell)(z)}\mid 1\leq\ell\leq L,\ 1\leq z\leq Z\}\) and \(\mathcal{A}=\{A^{(\ell)(z)}\mid 1\leq\ell\leq L,\ 1\leq z\leq Z\}\). Thus, we can rewrite the right-hand side of (34) as

\[\left[r^{(k)}+\gamma_{2}Q\left(s^{(k+1)}(\mathcal{W},\mathcal{A}),\operatorname{arg\,max}_{a^{*}}Q\left(s^{(k+1)},a^{*};\theta^{\mathsf{p}}\right);\theta^{\mathsf{t}}\right)-Q(s^{(k)}(\mathcal{W},\mathcal{A}),a^{(k)};\theta^{\mathsf{p}})\right]^{2},\ a^{*}\in\{1,2,\cdots,|\mathcal{V}|\}, \tag{36}\]

which indicates that the parameters \(\mathcal{W}\), \(\mathcal{A}\), and \(\theta^{\mathsf{p}}\) are trained simultaneously by minimizing (36) during the decision steps of the DRL agent. **Algorithm** 2 presents a pseudocode of the GA-DRL training procedure.
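The following sketch illustrates the action mask over constraint (C5) together with the gradient step (35); the execution windows, dwell times, and the plain-SGD update are illustrative stand-ins for the scheduler's actual bookkeeping and the Adam optimizer used in Section V.

```python
import numpy as np

NEG_INF = -1e9  # large negative Q-value assigned to infeasible actions

def feasible(est, eft, arrival, departure):
    """Constraint (C5): the execution window must lie inside the dwell time."""
    return arrival <= est and eft <= departure

def masked_argmax(q_values, windows, dwell):
    """Action mask: overwrite infeasible vehicles' Q-values before the argmax."""
    q = np.array(q_values, dtype=float)
    for m, ((est, eft), (at, dt)) in enumerate(zip(windows, dwell)):
        if not feasible(est, eft, at, dt):
            q[m] = NEG_INF
    return int(np.argmax(q))

def sgd_step(theta, grad, mu=1e-4):
    """Gradient update of Eq. (35) with learning rate mu = 0.0001."""
    return theta - mu * grad

# Vehicle 0 is infeasible (its window ends after departure), so vehicle 1 is chosen.
a = masked_argmax([0.9, 0.4],
                  windows=[(1.0, 8.0), (1.0, 5.0)],
                  dwell=[(0.0, 6.0), (0.0, 9.0)])
assert a == 1
```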
## V Performance Evaluation

In this section, we first provide the parameter settings for the simulations. We then study the convergence of GA-DRL. Finally, we compare the performance of GA-DRL with four DAG task scheduling benchmarks in terms of the task completion time.

### _Simulation Setting_

**Simulation environment.** All neural networks considered in this work are implemented using the PyTorch 2.0.0 [44] and Python 3.8.1 platforms, and Adam [45] is leveraged to optimize the networks. In our simulations, we consider a real-world highway traffic region of size 1km\(\times\)1km in Xiamen, China, as shown in Fig. 7(a), obtained from OpenStreetMap [10]. Moreover, SUMO [11] is utilized to import mobile vehicles using the mobility model developed in (1)-(2), and subsequently emulate a real-world VC as shown in Fig. 7(b). Also, the arrival time of each vehicle, i.e., \(AT_{m}\), is assumed to be uniformly distributed in \([1,5]\) (in seconds) for analytical simplicity, and \(\mu_{g}=50\) (in kilometres per hour) with \(\sigma_{g}=10\).

Fig. 7: VC network visualization.

**Parameter setting of DAG tasks.** The task owner has a DAG task which is generated according to [5]. We assume that the computation capability of each vehicle is uniformly distributed in \([1,10]\) (in GHz) [19], the distances between different vehicles during the task scheduling process are captured by SUMO, and function \(\Psi(\cdot)\) in (6) is defined as \(\Psi\left(PL\left(d_{m,n}(t)\right)\right)=0.15PL\left(d_{m,n}(t)\right)+0.001\) [38]. Also, the computation workload of each subtask is uniformly distributed in \([1,2]\) (in Giga clock cycles) [26], and the transmission data size of each edge is uniformly distributed in \([100,500]\) (in KB) [26]. During training, we choose an \(\epsilon\)-greedy policy with \(\epsilon=0.9\) and discount factor \(\gamma_{2}=0.9\).

### _Convergence Performance_

In Fig. 8, we depict the convergence behavior of GA-DRL with respect to the number of episodes. Note that the best convergence and reward values are achieved when the GA-DRL's learning rate is 0.0001. On the other hand, as the learning rate increases from 0.0001 to 0.0005, the average reward decreases significantly due to the instability of learning. As a result, we fix the learning rate of GA-DRL to 0.0001 when comparing it with the benchmarks in the following.

Fig. 8: Average reward under different learning rates.

### _Benchmarks_

To study the performance of GA-DRL, we implement four DAG task scheduling benchmarks, including LPS, HEFT [12], MGA [15], and DRLOSM [46], as detailed below.

* _Local processing scheme (LPS):_ All subtasks are processed locally by the task owner itself without offloading to other vehicles.
* _Heterogeneous earliest finish time (HEFT) [12]:_ All subtasks are first sorted according to their ranking value in (18). The subtasks are assigned to the vehicles that can complete them in the shortest time. The HEFT algorithm does not take into account the V2V transmission constraint (C5), since it was designed for a static computing environment. We assume that subtask-to-vehicle allocations that do not satisfy constraint (C5) are executed locally.
* _Modified genetic algorithm (MGA) [15]:_ MGA considers an integer encoding to denote subtask-to-vehicle assignments. The assignments with high fitness (i.e., low task completion time) are stochastically selected to perform crossover (i.e., exchange their processing vehicles). Finally, a mutation (i.e., changing the processing vehicle) is adopted to avoid early convergence.
MGA considers a VC environment satisfying the V2V communication constraint (C5).

* _DRL offloading scheduling method (DRLOSM) [46]:_ DRLOSM is an improved version of the method proposed in [46]. All subtasks are first sorted according to their ranking value in (18). DRLOSM uses a DDQN architecture, where at decision step \(k\), the raw feature of the _current subtask_ \(b_{\tau(k)}\) is integrated in \(s^{(k)}\) without the use of GNNs. DRLOSM also satisfies the V2V communication constraint (C5) through an action mask module.

### _Simulation Results of Randomly Generated DAG Tasks_

We conduct performance evaluations by analyzing the average completion time of DAG tasks for various numbers of layers5 of DAG tasks, subtasks, and vehicles in the network. The results are the average performance obtained via 100 independent Monte-Carlo iterations. Also, to compare the generalizability of DRLOSM and our GA-DRL, we use the same DAG task topology during the training period, while deploying them on various DAG task topologies under performance evaluation.

Footnote 5: The number of layers of a DAG task refers to the length of the longest path from the starting subtask to the finishing subtask. For a DAG task with a fixed number of subtasks, as the number of layers increases/decreases, there are more/fewer subtasks that are successors of the same subtask, implying a higher/lower potential for parallelism during the task execution.

#### V-D1 Impact of the number of vehicles in VC

The results presented in Fig. 9 illustrate the impact of increasing the number of vehicles from 1 to 20 on the completion time of the DAG task. The experiment was conducted with 20 subtasks and 5 layers. We observed that when only one vehicle is involved in the VC, all DAG subtasks have to be executed sequentially and locally, resulting in the same completion time across the different algorithms. However, as the number of vehicles increases, the completion of DAG tasks is significantly accelerated due to the increased computation resources. Overall, GA-DRL outperforms the other algorithms in terms of task completion time. It is 51.63\(\%\) better than LPS, 27.82\(\%\) better than HEFT, 24.69\(\%\) better than MGA, and 5.17\(\%\) better than DRLOSM at 5 vehicles; and is 57.59\(\%\) better than LPS, 25.15\(\%\) better than HEFT, 17.08\(\%\) better than MGA, and 10.41\(\%\) better than DRLOSM at 20 vehicles.

#### V-D2 Impact of the number of subtasks

In Fig. 10, we can see the evaluation of the completion time for DAG tasks as the number of subtasks increases. In this experiment, we set the number of vehicles involved in the VC at 10, and the number of layers at 5. The results show that our proposed GA-DRL algorithm outperforms the other four benchmarks, achieving faster task completion times. Additionally, Fig. 10 demonstrates the effectiveness of GA-DRL compared to conventional DRL in terms of generalizability. As the number of subtasks increases from 25 to 30, the task completion time of DRLOSM becomes longer than that of both MGA and HEFT. This is due to the fact that the topologies of DAG tasks become more complicated, making the DRLOSM algorithm, which relies solely on human-selected features without the usage of GNNs, unable to capture the topological information of the newly generated DAG task topologies. On the other hand, our GA-DRL algorithm benefits from the subtasks' features, which are automatically learned by the GAT, making its models well generalizable to unseen DAG task topologies.
In summary, the performance of GA-DRL in terms of the task completion time is 27.29\(\%\) better than LPS, 19.87\(\%\) better than HEFT, 13.76\(\%\) better than MGA, and 11.29\(\%\) better than DRLOSM at 10 subtasks; and is 59.84\(\%\) better than LPS, 11.01\(\%\) better than HEFT, 0.08\(\%\) better than MGA, and 15.19\(\%\) better than DRLOSM at 30 subtasks.

#### V-D3 Impact of the number of layers within DAG task

In Fig. 11, it is evident that changing the number of DAG task layers from 4 to 8 has a significant impact on the completion time of the DAG task. In this result, we considered 20 subtasks and 10 vehicles. It is observed that increasing the number of layers leads to a longer completion time. This is because, as the number of layers increases, the parallelism of the DAG task decreases, resulting in more subtasks being executed in a sequential manner. This, in turn, leads to a longer task completion time. The performance of GA-DRL in terms of the task completion time is 61.39\(\%\) better than LPS, 29.31\(\%\) better than HEFT, 23.35\(\%\) better than MGA, and 8.27\(\%\) better than DRLOSM at 4 layers; and is 30.41\(\%\) better than LPS, 14.04\(\%\) better than HEFT, 4.36\(\%\) better than MGA, and 1.38\(\%\) better than DRLOSM at 8 layers.

Fig. 9: Performance evaluations upon considering the different numbers of vehicles within VC.
Fig. 10: Performance evaluations upon considering the different numbers of subtasks within DAG task.
Fig. 11: Performance evaluations upon considering the different numbers of layers within DAG task.

### _Simulation Results for Real Application DAG Task_

In Fig. 12, we illustrate a real-world DAG task of a modified molecular dynamics code [12]. The subtasks' computation workloads and transmission data sizes were set according to the parameter settings, and we considered 20 vehicles in this result. Table III presents the performance comparison of the various benchmarks, except LPS6, with respect to the DAG task completion time (in seconds) and the algorithm running time (in seconds). It is important to note that DRLOSM, which solely learns from human-selected features of subtasks without the usage of the GAT, exhibits a higher task completion time than the others, such as HEFT, MGA, and our GA-DRL. This is a clear indication of the superior generalization of our GA-DRL, especially in the case of a large number of subtasks. Additionally, MGA has the longest running time among the benchmarks due to its internal iteration time for convergence. However, our GA-DRL shows better performance in terms of task completion time at the mild cost of a higher algorithm running time. The performance of GA-DRL is \(19.09\%\) better
2306.07331
Splitting and Parallelizing of Quantum Convolutional Neural Networks for Learning Translationally Symmetric Data
The quantum convolutional neural network (QCNN) is a promising quantum machine learning (QML) model that is expected to achieve quantum advantages in classically intractable problems. However, the QCNN requires a large number of measurements for data learning, limiting its practical applications in large-scale problems. To alleviate this requirement, we propose a novel architecture called split-parallelizing QCNN (sp-QCNN), which exploits the prior knowledge of quantum data to design an efficient model. This architecture draws inspiration from geometric quantum machine learning and targets translationally symmetric quantum data commonly encountered in physics and quantum computing science. By splitting the quantum circuit based on translational symmetry, the sp-QCNN can substantially parallelize the conventional QCNN without increasing the number of qubits and improve the measurement efficiency by an order of the number of qubits. To demonstrate its effectiveness, we apply the sp-QCNN to a quantum phase recognition task and show that it can achieve comparable classification accuracy to the conventional QCNN while considerably reducing the measurement resources required. Due to its high measurement efficiency, the sp-QCNN can mitigate statistical errors in estimating the gradient of the loss function, thereby accelerating the learning process. These results open up new possibilities for incorporating the prior data knowledge into the efficient design of QML models, leading to practical quantum advantages.
Koki Chinzei, Quoc Hoan Tran, Kazunori Maruyama, Hirotaka Oshima, Shintaro Sato
2023-06-12T18:00:08Z
http://arxiv.org/abs/2306.07331v3
Splitting and Parallelizing of Quantum Convolutional Neural Networks for Learning Translationally Symmetric Data ###### Abstract A quantum convolutional neural network (QCNN) is a promising quantum machine learning (QML) model to achieve quantum advantages in classically intractable problems. However, QCNN requires a large number of measurements for data learning, limiting its practical applications for large-scale problems. To relieve this requirement, we propose a novel architecture called split-parallelizing QCNN (sp-QCNN), which exploits the prior knowledge of quantum data for designing efficient circuits. This architecture draws inspiration from geometric quantum machine learning and targets translationally symmetric quantum data commonly encountered in condensed matter physics. By splitting the quantum circuit based on translational symmetry, sp-QCNN substantially parallelizes conventional QCNN without increasing the number of qubits and further improves the measurement efficiency by an order of the number of qubits. To demonstrate its effectiveness, we apply sp-QCNN to a quantum phase recognition task and show that it can achieve similar performance to conventional QCNN while considerably reducing the measurement resources required. Due to its high measurement efficiency, sp-QCNN can mitigate statistical errors in estimating the gradient of the loss function, thereby accelerating the learning process. These results open up new possibilities for incorporating the prior knowledge of data into the efficient design of QML models, leading to practical quantum advantages. ## I Introduction Quantum computing is an innovative technology that solves classically intractable problems and opens up new frontiers in scientific research and technological advancements [1]. Quantum machine learning (QML) is one of the central research fields in quantum computing, allowing us to solve various tasks such as classification, regression, and clustering by discovering relationships and patterns between data using quantum computers [2]. Recent studies demonstrated quantum speedups in QML beyond classical machine learning for specific artificially engineered tasks, suggesting the potential of QML [3; 4]. It is conjectured that achieving quantum speedups or advantages in QML requires encoding the prior knowledge of the problem into the learning models [5; 6]. However, how to use the prior knowledge in real and practical applications to harness the potential quantum advantages remains unclear. A quantum neural network (QNN) is a promising QML model that combines the principles of quantum information processing and artificial neural networks to enhance the capabilities of data-driven technologies [7; 8; 9; 10; 11]. A QNN is represented by a parametrized quantum circuit, which is optimized via training data to solve a given task [12]. Since efficiently simulating quantum circuits is generally impossible with classical computers, QNN can learn the complex features of data that are classically unwieldy [13; 14]. Among QNN architectures, a quantum convolutional neural network (QCNN) is a leading one that enables classification tasks [15; 16] [Fig. 1(a)]. For instance, QCNN can classify the phases of matter in quantum many-body systems, an important research object in the broad field of physics [17; 18; 19]. Due to its good trainability and feasibility [20], QCNN is particularly suitable for noisy intermediate-scale quantum (NISQ) devices with a limited number of possible gate operations [21]. 
The high resource requirement of the measurement process remains a practical barrier for QNNs, including QCNN, to learn data on real quantum computers [12]. During the learning process of a QNN, a predefined loss function is minimized by adjusting the circuit's variational parameters. This loss function is computed from a training dataset by measuring specific observables in the parameterized quantum circuit. Therefore, the number of measurements scales with the number of parameters to be optimized and the amount of data to be processed. This situation presents a significant bottleneck when considering large-scale QML applications and the potential of practical quantum advantages [22]. To mitigate the measurement requirement, a possible solution is the multi-programming of quantum computation, enabling the execution of multiple circuits in parallel on different regions of a quantum processor [23; 24; 25; 26]. Although this parallelization reduces the total run time, it increases the required qubit resources, which are limited in current devices. We address this issue by proposing a novel QNN architecture called split-parallelizing QCNN (sp-QCNN). This architecture targets translationally symmetric data, such as solid states in condensed matter physics, and exploits data symmetry as prior knowledge to substantially parallelize QCNN without increasing the number of qubits and to improve the measurement efficiency [Fig. 1(b)]. The circuit of sp-QCNN consists of two elements: translational
Maximally using the prior knowledge of data is especially significant for near-term quantum devices lacking sufficient computational resources. The remainder of this paper is organized as follows. First, Sec. II briefly reviews QCNN and discusses its computational cost. Section III introduces two key components of sp-QCNN, translational symmetrization and circuit splitting, and clarifies the similarities and differences between sp-QCNN and GQML. Section IV shows the advantage of sp-QCNN, i.e., the improvement of measurement efficiency for local observables and their gradients with respect to the circuit parameters, based on symmetry. For verification, Sec. V presents the application of sp-QCNN to a quantum phase recognition task, showing that it can solve the task with sufficient accuracy and improve the measurement efficiency by \(\mathcal{O}(n)\) times. Finally, Sec. VI summarizes this paper and discusses potential future research directions. ## II Review of QCNN A convolutional neural network (CNN) is a celebrated classical machine learning model that solves classification tasks, such as image recognition [41, 42, 43]. A CNN consists of three different types of layers: convolutional, pooling, and fully connected layers. The convolutional layer filters input data to extract its local features, and the pooling layer coarse-grains the data to leave only relevant information. After the convolutional and pooling layers are alternately applied, the fully-connected transformation is applied to the remaining data to obtain a final output. In classification problems, for example, the output indicates which class the input data belongs to. For supervised learning tasks, CNN is trained to correctly classify training data. A QCNN is a CNN-inspired QNN model that can treat quantum data whose dimension is exponentially larger than classical ones and is expected to achieve practical quantum advantages [15, 16]. Similar to CNN, QCNN consists of convolutional, pooling, and fully connected layers [Fig. 1(a)]. The convolutional layers apply local unitary gates to extract the local features of input data, and the pooling layers discard some qubits to coarse-grain the quantum information. After alternately applying the two layers, we perform the fully connected unitary, measure the remaining qubits, and obtain an output indi Figure 1: Basic structures of (a) conventional and (b) sp-QCNNs. (a) In conventional QCNN, some qubits are discarded at each pooling layer, and only one of the remaining qubits is measured in the end to classify the quantum data. (b) In sp-QCNN, the translational symmetry of data is used as prior knowledge to design an efficient QML model. The circuit of sp-QCNN (the left circuit) consists of translationally symmetric layers and splitting structures, allowing us to substantially parallelize nonsplitting QCNN (the right circuit) to improve the measurement efficiency. cating the data class. In QCNN, the quantum circuit is characterized by variational parameters, which are optimized to correctly classify training data. Such a variational algorithm is central in the NISQ era because it works even in a relatively shallow circuit [21]. A QCNN is prospective for quantum advantages in NISQ devices because of its two significant features. One is its high feasibility. In QCNN, since the number of qubits decreases exponentially in each pooling layer, the circuit depth is \(\mathcal{O}(\log n)\). 
This logarithmic depth is advantageous for NISQ devices in which the number of possible gate operations is limited. The other feature of QCNN is its high trainability. In many variational quantum algorithms, the exponential vanishing of the gradient in a loss function, known as the barren plateau phenomenon, prevents scalable optimization [44, 45, 46]. Meanwhile, Ref. [20] proved that QCNN does not suffer from the barren plateaus due to the logarithmic depth and the locality of unitary operations and observables. This property leads to the high trainability of QCNN, which is crucial for achieving quantum advantages in QML tasks. The high resource requirement of measurements for optimization presents practical difficulties in QNNs, including QCNN [12]. Let us estimate the required measurement cost in QCNN. First, we suppose that half of the qubits are discarded at each pooling layer and the number of variational parameters is \(\mathcal{O}(n)+\mathcal{O}(n/2)+\mathcal{O}(n/4)+\cdots\sim\mathcal{O}(n)\) in total [in common QCNNs, gates acting in parallel share the same parameters, thus, the number of independent parameters is \(\mathcal{O}(\log n)\), but measuring the gradient of the loss function requires \(\mathcal{O}(n)\) cost (see Sec. IV.2 for details)]. We also let \(N_{\rm train},N_{\rm epoch}\), and \(N_{\rm shot}\) denote the number of training data, maximum epoch (one epoch refers to a complete iteration through a dataset), and the number of measurement shots used per observable, respectively. Then, the total required number of shots during training is \(\mathcal{O}(nN_{\rm train}N_{\rm epoch}N_{\rm shot})\). In terms of practicality, QCNN is not easy to implement for large-scale problems in which many qubits and a large dataset are necessary. Below, we introduce a new architecture of QCNN that can ideally decrease the required number of shots by \(\mathcal{O}(1/n)\) times, bringing QCNN close to realization. ## III Split-parallelizing QCNN In this section, we describe the two key components of sp-QCNN, translational symmetrization on each layer and circuit splitting, and discuss the relationship between sp-QCNN and QQML through symmetry. Although sp-QCNN can be easily generalized to arbitrary-dimensional lattices, we focus on one-dimensional cases for simplicity. ### Translational symmetry We exploit data symmetry as prior knowledge to design an efficient QML model. The target of sp-QCNN is translationally symmetric data, which is represented by a density matrix \(\rho_{i}\) with the following property: \[T\rho_{i}T^{\dagger}=\rho_{i}, \tag{1}\] where \(T\) is the translation operator by one qubit (e.g., \(T\ket{100\cdots}=\ket{010\cdots}\)). The most relevant field for sp-QCNN application is condensed matter physics, in which translationally symmetric materials such as solids are the largest research topic [47]. In Sec. V, we will demonstrate that sp-QCNN can detect the quantum phases of translationally symmetric many-body states. To ensure the equivalence of outputs in the parallel computation, we also impose translational symmetry on each of the convolutional and fully connected layers, whose unitary is denoted by \(V_{i}\), as follows: \[TV_{i}T^{\dagger}=V_{i}. \tag{2}\] An example of such a unitary is given by (Fig. 
\[V_{i}=\prod_{k=1}^{d}R_{ZZ}^{\rm sym}(\delta_{k})R_{X}^{\rm sym}(\gamma_{k})R_{Z}^{\rm sym}(\beta_{k})R_{X}^{\rm sym}(\alpha_{k}), \tag{3}\] where we have defined \[R_{X}^{\rm sym}(\theta)=\prod_{j=1}^{n}e^{-i\theta X_{j}},\;\;R_{Z}^{\rm sym}(\theta)=\prod_{j=1}^{n}e^{-i\theta Z_{j}}, \tag{4}\] \[R_{ZZ}^{\rm sym}(\theta)=\prod_{j=1}^{n}e^{-i\theta Z_{j}Z_{j+1}}, \tag{5}\] with the periodic boundary condition \(Z_{j+n}=Z_{j}\). The rotation angles, \(\alpha_{k},\beta_{k},\gamma_{k}\), and \(\delta_{k}\), are variational parameters to be optimized via training and do not depend on the qubit position. By construction, \(V_{i}\) is symmetric under one-qubit translation: \([V_{i},T]=0\). In contrast, the convolutional layer of conventional QCNN, \(V_{i}^{\rm conv}\), is symmetric only under two- or more-qubit translation: \([V_{i}^{\rm conv},T]\neq 0\) but, e.g., \([V_{i}^{\text{conv}},T^{2}]=0\) [Fig. 1(a)]. As shown later, the translational symmetries of sp-QCNN contribute to the parallel computation.

Figure 2: Example of translationally symmetric unitary layer. Single-qubit rotations are applied in parallel, followed by ZZ rotations on the nearest-neighboring qubits. These procedures are repeated \(d\) times. The rotation angles are translationally symmetric, and thus the number of independent parameters is \(4d\).

### Circuit splitting Another crucial component in sp-QCNN is circuit splitting. In conventional QCNN, the pooling layer discards some qubits to coarse-grain the quantum data. By contrast, in sp-QCNN, we split the circuit at the pooling layers rather than discarding the qubits, as shown in Fig. 1(b). After splitting, we perform the same operations on each branch and finally measure all the qubits in the computational basis. In some types of quantum computers, such as superconducting [48] and ion-trap devices [49], unitary operations can be performed in parallel, and thus this splitting does not significantly increase the run time. We illustrate a concrete way of splitting a circuit. With \(n\) as the number of qubits, we choose a prime factor of \(n\), denoted by \(p\), and define \(q=n/p\). We then introduce splitting in which \(n=pq\) qubits are split into \(p\) branches (Fig. 3). First, we divide the qubits into \(q\) miniblocks, each comprising \(p\) qubits, in order from the top. Next, we split the circuit such that the \(j\)th qubit of the \(i\)th miniblock is connected to the \(i\)th qubit of the \(j\)th branch. By repeating this procedure on each new branch until the number of qubits becomes one, we obtain the entire sp-QCNN circuit.

Figure 3: Concrete way of splitting a circuit. We first divide the qubits into \(q\) miniblocks consisting of \(p\) qubits and split the circuit such that the \(j\)th qubit of the \(i\)th miniblock is connected to the \(i\)th qubit of the \(j\)th branch.

Due to the translational symmetry of \(V_{i}\) and circuit splitting, sp-QCNN substantially parallelizes nonsplitting QCNN that consists of the same \(V_{i}\) [Fig. 1(b)]. For convenience, we define \(\left\langle A\right\rangle_{\text{ns}}\) and \(\left\langle A\right\rangle\) as the expectation values of an operator \(A\) in nonsplitting and sp-QCNNs. In nonsplitting QCNN, we measure one of the remaining qubits in the computational basis and consider its expectation value (i.e., \(\left\langle Z_{1}\right\rangle_{\text{ns}}\)) as the output of QCNN. On the other hand, in sp-QCNN, we measure all the qubits and regard the average of the \(n\) expectation values (i.e., \(\left\langle Z_{\text{avg}}\right\rangle=\sum_{j}\left\langle Z_{j}\right\rangle/n\)) as the output. In the next section, we will discuss the mechanism and validity of this parallelization in more detail.
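To make the wiring of a single splitting step concrete, the following minimal Python sketch (our illustration, not part of the original implementation; the function name `split_qubits` is ours) computes which global qubits belong to each branch:

```python
# A minimal sketch of one splitting step: n = p * q qubits are divided into
# q miniblocks of p qubits each, and the j-th qubit of the i-th miniblock
# is routed to the i-th qubit of the j-th branch (0-indexed here).

def split_qubits(n: int, p: int) -> list[list[int]]:
    """Return p branches, each a list of q global qubit indices."""
    assert n % p == 0, "p must be a (prime) factor of n"
    q = n // p
    branches = [[0] * q for _ in range(p)]
    for i in range(q):          # miniblock index
        for j in range(p):      # position within the miniblock
            branches[j][i] = i * p + j
    return branches

# Example: the first step of the 12 -> 6 -> 3 -> 1 splitting used in Sec. V.
print(split_qubits(12, 2))
# [[0, 2, 4, 6, 8, 10], [1, 3, 5, 7, 9, 11]]
```

Repeating the same rule on each branch until a single qubit remains yields the full sp-QCNN splitting structure.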
### Relation with geometric quantum machine learning We look at sp-QCNN from the viewpoint of GQML or equivariant QNN [31, 32, 33, 34, 35, 36, 37, 38, 39, 40]. The concept of GQML has recently emerged as a potential solution to some critical QML issues associated with trainability and generalization. It leverages the symmetry of a problem as an inductive bias and provides a problem-tailored circuit architecture. For example, let us consider a classical task that recognizes whether an image depicts a cat. If an image represents a cat, then its rotated image should also represent a cat. Thus, this task has rotational symmetry. In GQML, such symmetry is prebuilt into the network architecture. Formally, given a symmetry operation \(S\) and an output function \(f(\rho)\), the \(S\)-invariance of GQML is defined as follows: \[f(\rho)=f(S\rho S^{\dagger})\ \ \forall\rho. \tag{6}\] In other words, the symmetry operation \(S\) on input data never changes the output of GQML. In GQML, the neural network is usually designed based on an equivariant circuit to satisfy this invariance. In theory, GQML significantly enhances the capability of machine learning in several tasks [33, 40]. The circuit of sp-QCNN presents the same invariance property as GQML. Let us consider the unitary transformation \(U\) of the entire sp-QCNN. Due to the translational symmetry of each \(V_{i}\) and the splitting structure, \(U\) itself is translationally symmetric: \[TUT^{\dagger}=U. \tag{7}\] This symmetry leads to an equivariant relation between input and output, \(U(T\rho T^{\dagger})U^{\dagger}=T(U\rho U^{\dagger})T^{\dagger}\). That is, the translational operation applied on the input is identical to that on the output. We also define \(f(\rho)=\text{tr}(U\rho U^{\dagger}Z_{\text{avg}})\) with an observable \(Z_{\text{avg}}=\sum_{j}Z_{j}/n\). Then, the equivariant relation and \([Z_{\text{avg}},T]=0\) result in \[f(\rho)=f(T\rho T^{\dagger})\ \ \forall\rho, \tag{8}\] which is the \(T\)-invariance of GQML (6). In this sense, sp-QCNN can be viewed as an application of GQML to QCNN. This insight suggests that sp-QCNN can be used to enhance QML capability in tasks where a translational operation should not change the data output. Our work offers a new direction for exploiting data symmetry to enhance QML potential. A critical difference between sp-QCNN and GQML is that the input data itself is symmetric in our approach (see Eq. (1)), but not in GQML (e.g., the cat's picture is not rotation-invariant). Hence, the tasks to which each technique can be applied are distinct. In addition, each approach brings different benefits. Although the usual GQML improves trainability and generalization, our method reduces measurement costs through substantial parallelization. Thus, sp-QCNN is especially advantageous for near-term quantum devices in which computational resources are limited. ## IV Measurement efficiency in sp-QCNN In this section, we describe the parallelization in sp-QCNN and show that it can improve the measurement efficiency of local observables and their gradients with respect to the circuit parameters. We also analytically prove that the improvement rate is \(\mathcal{O}(n)\) times for a random input state.
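The symmetry relations in Eqs. (7) and (8) are easy to verify numerically on a toy circuit. The sketch below is an illustration under our own simplifying assumption that \(U\) is a single symmetric rotation layer rather than the full sp-QCNN; it builds the translation operator \(T\) explicitly and checks both identities:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

n = 3
I2, X = np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(op, j):
    """Embed a single-qubit operator on qubit j of the n-qubit register."""
    return reduce(np.kron, [op if k == j else I2 for k in range(n)])

# Translation operator T by one qubit, e.g., T|100> = |010>.
T = np.zeros((2**n, 2**n))
for b in range(2**n):
    bits = [(b >> (n - 1 - k)) & 1 for k in range(n)]
    shifted = bits[-1:] + bits[:-1]
    T[int("".join(map(str, shifted)), 2), b] = 1.0

# A translationally symmetric layer: U = prod_j exp(-i * theta * X_j).
theta = 0.37
U = reduce(np.matmul, [expm(-1j * theta * op_on(X, j)) for j in range(n)])
Z_avg = sum(op_on(Z, j) for j in range(n)) / n

rng = np.random.default_rng(0)
v = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
rho = np.outer(v, v.conj()) / (v.conj() @ v)   # a generic, asymmetric state

f = lambda r: np.trace(U @ r @ U.conj().T @ Z_avg).real
print(np.allclose(U @ T, T @ U))               # Eq. (7): [U, T] = 0 -> True
print(np.isclose(f(rho), f(T @ rho @ T.T)))    # Eq. (8): T-invariance -> True
```

Note that the invariance of Eq. (8) holds for an arbitrary input state, not only for translationally symmetric ones, which is why a random state suffices for the check.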
### Measurement efficiency of local observable First, we show that the translational symmetry of \(V_{i}\) and circuit splitting allow for parallel computation and improve the measurement efficiency of local observables. A key property of sp-QCNN is the equivalence of expectation values for all the qubits. Let us recall that the unitary transformation of the entire sp-QCNN, \(U\), is translationally symmetric (see Eq. (7)). This symmetry leads to \[\left\langle Z_{1}\right\rangle=\operatorname{tr}\left(U\rho U^{\dagger}Z_{1}\right)=\operatorname{tr}\left(U(T^{\dagger})^{j-1}\rho T^{j-1}U^{\dagger}Z_{1}\right)=\operatorname{tr}\left(U\rho U^{\dagger}Z_{j}\right)=\left\langle Z_{j}\right\rangle, \tag{9}\] where \(\rho\) is an input state satisfying Eq. (1), and we have used \(\rho=(T^{\dagger})^{j-1}\rho T^{j-1}\) and \(T^{j-1}Z_{1}(T^{\dagger})^{j-1}=Z_{j}\). This equation indicates the equivalence of the expectation values for all the qubits, i.e., \(\left\langle Z_{i}\right\rangle=\left\langle Z_{j}\right\rangle\) for any \(i\) and \(j\). This argument can be applied to other single-qubit Pauli operators, leading to \(\left\langle X_{i}\right\rangle=\left\langle X_{j}\right\rangle\) and \(\left\langle Y_{i}\right\rangle=\left\langle Y_{j}\right\rangle\). Figure 4(a) graphically illustrates this equivalence, which can also be proved by translating the circuit. This equivalence tells us that sp-QCNN substantially parallelizes nonsplitting QCNN that consists of the same \(V_{i}\), as shown in Fig. 1(b). As mentioned above, we regard the average of the expectation values for all the qubits, \(\left\langle Z_{\text{avg}}\right\rangle=\sum_{j}\left\langle Z_{j}\right\rangle/n\), as the output in sp-QCNN. Meanwhile, we consider the expectation value for only one qubit, \(\left\langle Z_{1}\right\rangle_{\text{ns}}\), as the output in nonsplitting QCNN. Given the equivalence in Eq. (9), nonsplitting and sp-QCNNs produce the same results if statistical errors are absent: \[\left\langle Z_{1}\right\rangle_{\text{ns}}=\left\langle Z_{\text{avg}}\right\rangle. \tag{10}\] Here we have used \(\left\langle Z_{1}\right\rangle_{\text{ns}}=\left\langle Z_{1}\right\rangle\), which can be proved by noticing that nonsplitting QCNN is a part of sp-QCNN. In sp-QCNN, we estimate the output from \(N_{\rm shot}\) measurement shots as follows: \[\left\langle Z_{\text{avg}}\right\rangle_{\text{est}}=\frac{1}{N_{\rm shot}}\sum_{\ell=1}^{N_{\rm shot}}z_{\text{avg}}^{(\ell)}=\frac{1}{nN_{\rm shot}}\sum_{\ell=1}^{N_{\rm shot}}\sum_{j=1}^{n}z_{j}^{(\ell)}. \tag{11}\] Here, \(z_{j}^{(\ell)}=\pm 1\) is the \(\ell\)th measurement outcome at the \(j\)th qubit, and we have defined the average of the \(\ell\)th measurement outcomes as \(z_{\text{avg}}^{(\ell)}=\sum_{j}z_{j}^{(\ell)}/n\). The value of \(z_{\text{avg}}^{(\ell)}\) can be \(a/n\) (\(a\in\{-n,-n+2,\cdots,n\}\)), corresponding to a measurement outcome of \(Z_{\text{avg}}\). We note that the number of outcomes in sp-QCNN is \(n\) times greater than that in nonsplitting QCNN, in which the output is estimated as \(\sum_{\ell=1}^{N_{\rm shot}}z_{1}^{(\ell)}/N_{\rm shot}\). Therefore, sp-QCNN can reduce the required number of shots to achieve a certain estimation accuracy. Since this argument only relies on the symmetry property of the data, sp-QCNN is general and can be applied to broad tasks with translationally symmetric data.
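For concreteness, the estimator of Eq. (11) amounts to the following few lines, assuming the measurement record is available as an \(N_{\rm shot}\times n\) array of \(\pm 1\) outcomes (the helper name is ours):

```python
import numpy as np

def estimate_z_avg(outcomes: np.ndarray) -> float:
    """Estimate <Z_avg> from sp-QCNN measurement records, Eq. (11).

    outcomes: integer array of shape (N_shot, n) with entries +1/-1,
    where row l holds the l-th shot's outcomes z_j^(l) on all n qubits.
    """
    z_avg_per_shot = outcomes.mean(axis=1)  # z_avg^(l) = sum_j z_j^(l) / n
    return z_avg_per_shot.mean()            # average over the N_shot shots

# Toy example: 1000 shots on n = 8 qubits drawn i.i.d. with P(+1) = 0.6;
# the estimate concentrates around 2 * 0.6 - 1 = 0.2.
rng = np.random.default_rng(0)
shots = rng.choice([1, -1], size=(1000, 8), p=[0.6, 0.4])
print(estimate_z_avg(shots))
```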
It is worth noting that sp-QCNN may not improve the measurement efficiency by exactly \(n\) times. This is because, in each shot, the \(n\) measurement outcomes are correlated to each other via quantum entanglement. For example, if the output state is the GHZ state \(\left|\psi\right\rangle=(\left|000\cdots\right\rangle+\left|111\cdots\right\rangle)/\sqrt{2}\), then sp-QCNN does not improve the measurement efficiency at all because the \(n\) outcomes are completely correlated and can only provide one bit of information. In contrast, if the output state is the W state \(\left|\psi\right\rangle=(\left|100\cdots 00\right\rangle+\left|010\cdots 00\right\rangle+\cdots+\left|000\cdots 01\right\rangle)/\sqrt{n}\), then the exact expectation value can be obtained with only one shot by measuring all the qubits in sp-QCNN, whereas many measurements are required in nonsplitting QCNN. Therefore, how well sp-QCNN improves the measurement efficiency depends on the details of the problem, such as the input data and circuit structure. Later, we will analytically prove that sp-QCNN can improve the measurement efficiency by \(\mathcal{O}(n)\) times for a typical random input state. The advantage of sp-QCNN is illustrated in Fig. 5(a). In actual experiments, we cannot obtain the exact expectation value \(\left\langle Z_{1}\right\rangle_{\text{ns}}\) or \(\left\langle Z_{\text{avg}}\right\rangle\) because of statistical errors. Therefore, it is usually estimated from the mean value of a finite number of measurement outcomes. In nonsplitting QCNN, the estimated value is generally drawn from a Gaussian distribution with a variance of \(\mathcal{O}(1/N_{\text{shot}})\) in accordance with the central limit theorem. In sp-QCNN, we obtain \(n\) measurement outcomes at once and thus expect that the variance scales as \(\mathcal{O}(1/nN_{\text{shot}})\), indicating the \(\mathcal{O}(n)\) times improvement of measurement efficiency. To quantify the effectiveness of sp-QCNN, we introduce the relative measurement efficiency: \[r\equiv\left(\frac{\sigma_{0}}{\sigma_{\text{sp}}}\right)^{2}. \tag{12}\] Here \(\sigma_{0}\) and \(\sigma_{\text{sp}}\) are the standard deviations (i.e., the square roots of the variances) of the Gaussians followed by an estimated expectation value in nonsplitting and sp-QCNNs with the same number of shots. This quantity means that the shot number required to achieve a certain estimation accuracy using sp-QCNN is \(1/r\) times that using nonsplitting QCNN. In the next section, we will demonstrate the efficiency of sp-QCNN for a concrete task using this quantity.
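The GHZ/W contrast discussed above can be reproduced with a short simulation; the sampling helpers below are our own stand-ins for the measurement statistics of the two states:

```python
import numpy as np

rng = np.random.default_rng(1)
n, shots = 8, 100_000

# GHZ: each shot yields all qubits +1 or all -1 with probability 1/2.
ghz = np.where(rng.random(shots)[:, None] < 0.5, 1, -1) * np.ones((shots, n), int)

# W state: each shot yields exactly one -1, at a uniformly random position.
w = np.ones((shots, n), int)
w[np.arange(shots), rng.integers(0, n, shots)] = -1

for name, s in [("GHZ", ghz), ("W", w)]:
    var_single = s[:, 0].var()       # variance of one qubit's outcome (z_1)
    var_avg = s.mean(axis=1).var()   # variance of z_avg per shot
    print(f"{name}: var(z_1) = {var_single:.3f}, var(z_avg) = {var_avg:.3f}")

# GHZ: the n outcomes are perfectly correlated, so var(z_avg) = var(z_1) and
# measuring all qubits gains nothing. W: z_avg = (n - 2) / n in every shot,
# so var(z_avg) = 0 and a single shot already gives the exact expectation.
```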
### Measurement efficiency of gradient In general, the most costly part of machine learning is the optimization of neural networks using a training dataset, in which the loss function is minimized by tuning the network parameters. In classical machine learning, gradient-based methods are often used for optimization and work well for large-scale problems. Even in QML, gradient-based optimizers are important and powerful tools. However, many measurements are necessary to estimate the gradient in quantum computing [22]. Our architecture makes such gradient measurements efficient. We first describe a conventional way of measuring the gradient of \(\left\langle Z_{1}\right\rangle\). In many QCNNs, including sp-QCNN, multiple quantum gates share a single variational parameter \(\theta\). Here, let \(m_{\theta}\) be the number of gates sharing \(\theta\) in a branch. To calculate the gradient with respect to \(\theta\), we suppose that the \(m_{\theta}\) gates have different variational parameters from each other, \(\theta_{j}\) (\(j=1,\cdots,m_{\theta}\)). Thereby, we calculate the gradient with the chain rule as \(\partial\left\langle Z_{1}\right\rangle/\partial\theta=\sum_{j}(\partial\theta_{j}/\partial\theta)(\partial\left\langle Z_{1}\right\rangle/\partial\theta_{j})=\sum_{j}\partial\left\langle Z_{1}\right\rangle/\partial\theta_{j}\), where we have used \(\partial\theta_{j}/\partial\theta=1\). When each gate is parametrized as \(e^{-\mathrm{i}\theta_{j}P}\) (\(P\) is a Pauli operator), \(\partial\left\langle Z_{1}\right\rangle/\partial\theta_{j}\) can be measured using the parameter-shift rule, \(\partial\left\langle Z_{1}\right\rangle/\partial\theta_{j}=\left\langle Z_{1}\right\rangle_{\theta_{j}=\theta+\pi/4}-\left\langle Z_{1}\right\rangle_{\theta_{j}=\theta-\pi/4}\) [9; 50]. Thus, in sp-QCNN, we can compute the gradient as follows: \[\frac{\partial\left\langle Z_{1}\right\rangle}{\partial\theta}=\sum_{j=1}^{m_{\theta}}\mathrm{tr}\left(\tilde{U}_{j+}\rho\tilde{U}_{j+}^{\dagger}Z_{1}\right)-\mathrm{tr}\left(\tilde{U}_{j-}\rho\tilde{U}_{j-}^{\dagger}Z_{1}\right), \tag{13}\] where \(\tilde{U}_{j\pm}\) is the unitary transformation of sp-QCNN in which \(\theta_{j}=\theta\) is replaced with \(\theta_{j}=\theta\pm\pi/4\). This formula has \(2m_{\theta}\) terms, each of which is usually measured in a different circuit.

Figure 4: Mechanism of parallelization in sp-QCNN. (a) In sp-QCNN, the expectation value of a local observable is equivalent for all the qubits. This can be proved by virtually translating the entire circuit. The translation does not change the input state and quantum circuit due to their translational symmetry but shifts the position of the measured qubit, showing the equivalence of expectation values at different qubits. (b) The gradient measurement can be parallelized in sp-QCNN. In accordance with the chain rule, the gradient is the sum of several derivatives, \(\partial\left\langle Z_{1}\right\rangle/\partial\theta=\sum_{j}\partial\left\langle Z_{1}\right\rangle/\partial\theta_{j}\). For example, we suppose that the parameter \(\theta\) is in the first convolutional layer as shown in the figure (the red boxes denote \(\partial/\partial\theta_{2}\) and \(\partial/\partial\theta_{1}\)). Then translating the circuit proves \(\partial\left\langle Z_{1}\right\rangle/\partial\theta_{j}=\partial\left\langle Z_{2-j}\right\rangle/\partial\theta_{1}\) and thus \(\partial\left\langle Z_{1}\right\rangle/\partial\theta=\sum_{j}\partial\left\langle Z_{j}\right\rangle/\partial\theta_{1}\), which can be computed with only two circuits by measuring all the qubits.

The circuit splitting and translational symmetry in sp-QCNN allow us to compute \(\partial\left\langle Z_{1}\right\rangle/\partial\theta_{j}\) in parallel, improving the gradient measurement efficiency. For simplicity, we suppose that each \(V_{i}\) has the form shown in Fig. 2 and that \(\theta\) is in the first convolutional layer [Fig. 4(b)]. By translating the entire circuit, we can rewrite each term in Eq. (13) as \[\operatorname{tr}\left(\tilde{U}_{j\pm}\rho\tilde{U}_{j\pm}^{\dagger}Z_{1}\right)=\operatorname{tr}\left(\tilde{U}_{j\pm}T^{j-1}\rho(T^{\dagger})^{j-1}\tilde{U}_{j\pm}^{\dagger}Z_{1}\right)=\operatorname{tr}\left(\tilde{U}_{1\pm}\rho\tilde{U}_{1\pm}^{\dagger}Z_{2-j}\right). \tag{14}\]
Here we have used \(\rho=T^{j-1}\rho(T^{\dagger})^{j-1}\), \((T^{\dagger})^{j-1}Z_{1}T^{j-1}=Z_{2-j}\), and \((T^{\dagger})^{j-1}\tilde{U}_{j\pm}T^{j-1}=\tilde{U}_{1\pm}\). This relation tells us that the derivative of \(Z_{1}\) by \(\theta_{j}\) is identical to that of \(Z_{2-j}\) by \(\theta_{1}\), as illustrated in Fig. 4(b). Thereby, Eq. (13) is reduced to \[\frac{\partial\left\langle Z_{1}\right\rangle}{\partial\theta}=\sum_{j=1}^{m_{\theta}}\operatorname{tr}\left(\tilde{U}_{1+}\rho\tilde{U}_{1+}^{\dagger}Z_{j}\right)-\operatorname{tr}\left(\tilde{U}_{1-}\rho\tilde{U}_{1-}^{\dagger}Z_{j}\right), \tag{15}\] where we have replaced \(Z_{2-j}\) with \(Z_{j}\) in the summation. According to this equation, we can obtain the gradient \(\partial\left\langle Z_{1}\right\rangle/\partial\theta\) with just two circuits \(\tilde{U}_{1\pm}\) by measuring all the qubits, instead of using the \(2m_{\theta}\) circuits that are conventionally necessary. By generalizing this argument and using the equivalence \(\left\langle Z_{\text{avg}}\right\rangle=\left\langle Z_{1}\right\rangle\), we estimate the gradient of the output as follows: \[\left(\frac{\partial\left\langle Z_{\text{avg}}\right\rangle}{\partial\theta}\right)_{\text{est}}=\frac{m_{\theta}}{nN_{\rm shot}}\sum_{\ell=1}^{N_{\rm shot}}\sum_{j=1}^{n}\left[z_{j+}^{(\ell)}-z_{j-}^{(\ell)}\right], \tag{16}\] where \(z_{j\pm}^{(\ell)}\) is the \(j\)th qubit measurement outcome of the \(\ell\)th shot in the parameter-shifted circuit with \(\theta_{1}=\theta\pm\pi/4\). In our ansatz (Fig. 2), the factor \(m_{\theta}/n\) appears when \(\theta\) is in the second or later layer. We emphasize that sp-QCNN enables us to execute \(n\) parallel computations even for the gradient estimation, thus accelerating the gradient-based training. Similar to the previous case, the relative measurement efficiency \(r\) for the gradient depends on the details of the problem due to the entangled property of the output state. ### Measurement efficiency for random state How well sp-QCNN improves the measurement efficiency depends on the details of the problem. Here, we analytically prove that the efficiency is improved by \(\mathcal{O}(n)\) times for a typical state randomly chosen from the \(T\)-invariant Hilbert subspace in the limit of \(n\to\infty\). Let us begin by considering nonsplitting QCNN, where we measure \(Z_{1}\) and obtain an outcome \(s=\pm 1\) for every measurement. In the limit of \(n\to\infty\), the probability of obtaining an outcome \(\pm 1\) is almost \(1/2\) for a typical random state because the statistical fluctuations by randomness are negligible due to the exponentially large Hilbert space [this probability distribution is depicted in the left panel of Fig. 5(b)]. Given its Bernoulli distribution, the estimation accuracy of the expectation value is \[\sigma_{0}\sim\mathcal{O}\left(\frac{1}{\sqrt{N_{\text{shot}}}}\right), \tag{17}\] where \(N_{\text{shot}}\) is the number of shots. In sp-QCNN, we measure all the qubits in the computational basis and regard the mean of the \(n\) measurement outcomes as the output of QCNN (see Eq. (11)). In other words, we measure \(Z_{\text{avg}}=\sum_{j}Z_{j}/n\) rather than \(Z_{1}\) and obtain one of its eigenvalues \(s\) (\(=\pm 1,\pm(n-2)/n,\cdots\)) as an outcome. Also, given that the full unitary transformation \(U\) is translationally symmetric, the output state of sp-QCNN has the same symmetry.
Figure 5: (a) Quantification of measurement efficiency. In actual experiments, statistical errors arise in estimating the expectation value of an observable. This figure shows the probability distribution of the estimated expectation value. Here, we define the relative measurement efficiency \(r\) as the ratio of the variances in sp-QCNN and nonsplitting QCNN. (b) Number of eigenstates of \(Z_{1}\) and \(Z_{\text{avg}}\) with an eigenvalue \(s\). While the possible measurement outcome is \(\pm 1\) in nonsplitting QCNN (left panel), it is widely distributed in the range of \(-1\) to \(1\) with a width of \(\mathcal{O}(1/\sqrt{n})\) in sp-QCNN (right panel).

The right panel of Fig. 5(b) shows the number of eigenstates of \(Z_{\rm avg}\) with an eigenvalue \(s\), \(D_{n}(s)\), on the \(T\)-invariant Hilbert subspace. In the limit of \(n\to\infty\), \(D_{n}(s)\) approaches the following asymptotic form (see Appendix A for the derivation): \[D_{n}(s)\sim\frac{C_{n}}{(1+s^{2})^{n/2}}, \tag{18}\] where \(C_{n}\) is a constant independent of \(s\). The width of \(D_{n}(s)\) in \(s\) is \(\mathcal{O}(1/\sqrt{n})\), which finally gives rise to a small estimation error. Here, we assume that when measuring \(Z_{\rm avg}\) for a typical state randomly chosen from the \(T\)-invariant subspace, the probability of obtaining an outcome \(s\) is proportional to \(D_{n}(s)\). This assumption would be justified in the limit of \(n\to\infty\), where \(D_{n}(s)\) is sufficiently large and the statistical fluctuations are insignificant. Considering that the width of \(D_{n}(s)\) is \(\mathcal{O}(1/\sqrt{n})\), we can estimate the expectation value from \(N_{\rm shot}\) experiments with an accuracy \[\sigma_{\rm sp}\sim\mathcal{O}\left(\frac{1}{\sqrt{nN_{\rm shot}}}\right). \tag{19}\] From the quantification in Eq. (12), the relative measurement efficiency of sp-QCNN is \[r=\left(\frac{\sigma_{0}}{\sigma_{\rm sp}}\right)^{2}\sim\mathcal{O}(n). \tag{20}\] This result indicates an \(\mathcal{O}(1/n)\) times reduction in the number of experiments required to achieve a certain accuracy. The scaling argument in Eq. (20) is valid in situations where the output state is a random quantum state. Therefore, it may arise in the early stage of the training process, when the parameters of QCNN are randomly initialized and the output state is approximately random. In Sec. V, we will show that sp-QCNN exhibits \(\mathcal{O}(n)\) scaling for a concrete task in the early stage of training and, remarkably, even in the final stage. ## V Application to quantum phase recognition In this section, we apply sp-QCNN to a quantum phase recognition task investigated in Ref. [16] and verify its effectiveness. For the remainder of this paper, we simulate the quantum circuit with Qulacs, an open-source quantum circuit simulator [51]. ### Formulation of problem Let us consider a one-dimensional cluster Ising model with the periodic boundary condition, whose Hamiltonian is given by \[H=-\sum_{j=1}^{n}Z_{j}X_{j+1}Z_{j+2}-h_{1}\sum_{j=1}^{n}X_{j}-h_{2}\sum_{j=1}^{n}X_{j}X_{j+1}, \tag{21}\] where \(n\) is the number of qubits, and \(X_{j},Y_{j},\) and \(Z_{j}\) are the Pauli operators at the \(j\)th qubit. This Hamiltonian exhibits symmetry-protected topological (SPT) [27; 28; 29; 30], paramagnetic (PM), and antiferromagnetic (AFM) phases on the \(h_{1}\)-\(h_{2}\) plane. The SPT phase is protected by the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry characterized by \(X_{\rm even(odd)}=\prod_{j\in{\rm even(odd)}}X_{j}\). The ground state of \(H\), an input state in our task, is translationally symmetric because of \(THT^{\dagger}=H\). Our task is to recognize the SPT phase using sp-QCNN.
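For reference, Eq. (21) can be built as a dense matrix with NumPy for small \(n\) (a sketch for illustration only; the simulations in this section use Qulacs, and the reference phase diagram uses DMRG):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def pauli_string(n, ops):
    """Tensor product with the operators in `ops` ({site: matrix}) and I elsewhere."""
    return reduce(np.kron, [ops.get(j, I2) for j in range(n)])

def cluster_ising(n, h1, h2):
    """Cluster Ising Hamiltonian of Eq. (21) with periodic boundary conditions."""
    H = np.zeros((2**n, 2**n))
    for j in range(n):
        H -= pauli_string(n, {j: Z, (j + 1) % n: X, (j + 2) % n: Z})
        H -= h1 * pauli_string(n, {j: X})
        H -= h2 * pauli_string(n, {j: X, (j + 1) % n: X})
    return H

# Ground state by exact diagonalization, as used for the training data here.
H = cluster_ising(n=8, h1=0.5, h2=0.0)     # a point in the SPT phase (h1 < 1)
energies, states = np.linalg.eigh(H)
ground_state = states[:, 0]
print(energies[0])
```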
Quantum phase recognition is one of the main applications of QCNN, and many studies have been conducted with the aim of practical quantum advantages [17; 18; 19]. In this task, sp-QCNN can be applied because the input data (i.e., the ground state of \(H\)) is translationally symmetric. For the training data, we use \(M=20\) ground states of \(H\), \(|\phi_{i}\rangle\), evenly located on the line of \(h_{2}=0\) from \(h_{1}=0.05\) to \(1.95\). Using the Jordan-Wigner transformation [52], we can analytically obtain the exact ground state for \(h_{2}=0\), which transitions from the SPT to the PM phase at \(h_{1}=1\). To verify the generalization, we predict the entire phase diagram on the \(h_{1}\)-\(h_{2}\) plane using the trained sp-QCNN and compare it with the true one computed with the density matrix renormalization group (DMRG) [53; 54; 55; 56]. In this work, we prepare the input data with exact diagonalization for simplicity. Yet, in actual experiments, other preparation methods must be applied, such as a variational quantum eigensolver on a quantum computer or analog-digital transduction from a quantum experiment. We adopt the following mean squared error as the loss function: \[\mathcal{L}=\frac{1}{2M}\sum_{i=1}^{M}\left(\langle\phi_{i}|U^{\dagger}Z_{\rm avg}U|\phi_{i}\rangle-y_{i}\right)^{2}, \tag{22}\] where \(|\phi_{i}\rangle\) and \(y_{i}\) are the \(i\)th training datum and its corresponding label, and \(U\) is the total unitary of the circuit. Here, we set \(y_{i}\) as \(1\) if \(|\phi_{i}\rangle\) belongs to the SPT phase and \(0\) if it does not. We optimize the loss function using the stochastic gradient descent (SGD) method [57]. In SGD, we update the parameters as \(\vec{\theta}^{(t+1)}=\vec{\theta}^{(t)}-\eta^{(t)}\nabla\mathcal{L}\), where \(\vec{\theta}^{(t)}\) is the parameter vector at optimization step \(t\), and \(\nabla\mathcal{L}\) is calculated from only one of the training data at each step. We also decrease the learning rate as \(\eta^{(t)}=\eta_{0}/t\) to stabilize the training and set \(\eta_{0}=200\). Meanwhile, to investigate the statistical properties of sp-QCNN, we simulate the same circuits with \(N_{p}\) different random initial parameter sets, setting \(N_{p}=200\) in Secs. V.2 and V.3 and \(N_{p}=50\) in Sec. V.4. ### Expressivity, trainability, and generalization Before examining the measurement efficiency, we show that our ansatz has sufficient expressivity, trainability, and generalization to recognize the quantum phase transition despite the constraint of translational symmetry. To this end, we simulate the quantum circuit with infinite shots, i.e., with no statistical errors in estimating expectation values. As an ansatz circuit, we use the translational unitary \(V_{i}\) shown in Fig. 2 with \(d=10\), where each layer has \(4\times 10=40\) independent parameters. We split the circuit such that the number of qubits in a branch varies as \(8\to 4\to 2\to 1\), \(12\to 6\to 3\to 1\), \(16\to 8\to 4\to 2\to 1\), and \(18\to 9\to 3\to 1\) for \(n=8,12,16\), and \(18\), respectively. We first investigate the expressivity and trainability of our ansatz. Figure 6(a) shows the changes in the loss function during training for several problem sizes (i.e., numbers of qubits) \(n\). We observe that the final loss after 200 epochs becomes smaller as \(n\) increases. This trend is consistent with the nature of the phase transition, which occurs in the thermodynamic limit (\(n\to\infty\)), suggesting that it is possible to achieve a lower training error for larger problem sizes.
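As a schematic rendering of the training procedure described above, the following sketch implements single-sample SGD with the decaying rate \(\eta^{(t)}=\eta_{0}/t\); the routine `qcnn_output`, returning the circuit output \(\langle\phi|U^{\dagger}Z_{\rm avg}U|\phi\rangle\) and its gradient (e.g., via the parameter-shift estimator of Eq. (16)), is a hypothetical black box, and incrementing \(t\) per update step is our reading of the schedule:

```python
import numpy as np

def sgd_train(qcnn_output, data, labels, theta0, epochs=200, eta0=200.0):
    """SGD on the MSE loss of Eq. (22) with the decaying rate eta_t = eta0 / t."""
    theta, t = theta0.copy(), 1
    for _ in range(epochs):
        for i in np.random.permutation(len(data)):  # one datum per step
            out, grad_out = qcnn_output(theta, data[i])
            # gradient of the per-sample loss (1/2) * (out - y_i)^2
            grad_loss = (out - labels[i]) * grad_out
            theta -= (eta0 / t) * grad_loss
            t += 1
    return theta
```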
In Sec. V.4, we show that the final value of the loss function is comparable with that of conventional QCNN, indicating the high expressivity of our ansatz. In addition, the initial drop of the loss function becomes more rapid as \(n\) increases, and the statistical error due to the randomness of the initial parameters is modest even for \(n=18\). These results imply that our ansatz has high trainability and that sp-QCNN can be trained well even for larger problems. We also examine the generalization of sp-QCNN. Figure 6(b) is the phase diagram predicted by the trained sp-QCNN for \(n=16\) qubits. It coincides well with the phase boundary computed by DMRG (dashed lines). In particular, sp-QCNN can detect the SPT-AFM phase transition that is not present in the training dataset. This result shows that sp-QCNN has enough generalization for the quantum phase recognition task.

Figure 6: (a) Changes in loss function for several numbers of qubits \(n\). The lines and shaded areas depict the median and the 10th–90th percentiles for 200 sets of initial parameters, respectively. (b) Phase diagram predicted by trained sp-QCNN for \(n=16\) qubits. The color denotes the average magnitude of \(\langle Z_{\text{avg}}\rangle\) for 200 sets of initial parameters. The gray dots and dashed green lines denote our training data and phase boundaries computed by DMRG, respectively. In these simulations, we set the depth of each layer as \(d=10\).

### Measurement efficiency As discussed above, the measurement efficiency of sp-QCNN depends on the details of the problem, such as the input data and circuit structure. Here we show that sp-QCNN improves the measurement efficiency by \(\mathcal{O}(n)\) times in the quantum phase recognition task. To this end, we calculate how the efficiency changes during the training process examined in Sec. V.2. The measurement efficiency is quantified by the ratio of the variances in the splitting and nonsplitting circuits, as shown in Fig. 5 (we assume that the nonsplitting circuit consists of the same unitary \(V_{i}\) as the splitting one). We investigate the efficiency for three typical input states: the SPT, PM, and AFM states, which are the eigenstates of \(H\) for \((h_{1},h_{2})=(0,0),(+\infty,0)\), and \((0,-\infty)\), respectively. We also explore the efficiency of measuring the loss gradient for the first parameter. Figures 7(a)-(d) show the changes in the relative measurement efficiency \(r\) during training for the three inputs and the loss gradient. For the PM and AFM states ((b) and (c)), the efficiency \(r\) is high at the beginning of learning and does not significantly decrease during training. For the SPT state and loss gradient ((a) and (d)), \(r\) is initially high but decreases as training proceeds, finally converging to a small value (\(r=2\)-5). These results imply that the improvement rate of measurement efficiency strongly depends on the input data, on what we measure, and on the stage of learning. Even for the SPT state and loss gradient, the final efficiency is higher than one, indicating that the measurement in sp-QCNN is more efficient than that in nonsplitting QCNN. Figures 8(a) and (b) show the relative measurement efficiency with varying numbers of qubits \(n\) at 0 and 200 epochs for the four cases (cf. Fig. 7). At 0 epochs (a), the data points are nearly aligned on a straight line. This result supports that the measurement efficiency is improved by \(\mathcal{O}(n)\) times in the early stages of learning and is consistent with the previous argument based on randomness.
Even at 200 epochs (b), we can fit the data points with straight lines within their error bars, suggesting that the efficiency is also improved by \(\mathcal{O}(n)\) times in the final stages of learning. In other words, compared with nonsplitting QCNN, sp-QCNN can reduce the number of shots required to achieve a certain estimation accuracy of expectation values by a factor of \(\mathcal{O}(1/n)\) throughout the learning process. We also investigate the measurement efficiency for predicting the phase diagram by the trained sp-QCNN with \(n=8\) and 16 qubits (see Figs. 8(c) and (d)). By comparing these figures, we notice that the efficiency \(r\) for \(n=16\) is more than twice that for \(n=8\) in most areas. This result implies an improvement of \(\mathcal{O}(n)\) times for prediction. We also observe that the efficiency is low in the SPT phase but relatively high in the PM and AFM phases, a trend evident in Fig. 7 as well. We infer that this phenomenon is due to the following reason. For the SPT state, the expectation value of \(Z_{j}\) after training is almost one because we have assigned the label as \(y_{i}=1\) for the SPT phase in the loss function, which means that \(U\left|\phi_{\text{SPT}}\right\rangle\sim\left|0\cdots 0\right\rangle\). Given that the splitting circuit has no advantage for measuring \(\left|0\cdots 0\right\rangle\), the measurement efficiency is not significantly improved. For a complete understanding, additional analyses must be conducted in future work.

Figure 7: Changes in relative measurement efficiency during training. (a)–(c) show the efficiency \(r\) for different inputs, the SPT, PM, and AFM states, whereas (d) shows the efficiency for measuring the loss gradient by the first parameter. At each epoch, we simulate experiments with 1000 shots 10000 times, estimate \(\sigma_{0}\) and \(\sigma_{\text{sp}}\), and calculate the efficiency \(r=(\sigma_{0}/\sigma_{\text{sp}})^{2}\). The solid lines and shaded areas depict the mean values and standard deviations, respectively, for 200 sets of initial parameters. Except for evaluating the efficiency, we optimize the circuit using the exact expectation value (i.e., without statistical errors).

Figure 8: (a), (b) Relative measurement efficiency \(r\) with varying the number of qubits \(n\) for SPT, PM, AFM, and loss gradient at (a) 0 and (b) 200 epochs in Fig. 7. The four straight lines fit the corresponding types of data points. The error bars denote the standard deviations for 200 sets of initial parameters. (c), (d) Relative measurement efficiency \(r\) on the \(h_{1}\)-\(h_{2}\) plane after 200 epochs for (c) \(n=8\) and (d) 16 qubits. The color denotes the magnitude of the relative measurement efficiency. The gray dots and dashed green lines denote our training data and phase boundaries computed by DMRG, respectively.

### Training with limited measurement resources So far, we have shown that sp-QCNN makes the measurement efficient and that the efficiency \(r\) depends on the input state and the learning stage. In actual experiments, we cannot know \(r\) in advance, and controlling the shot number every epoch by feeding back measurement outcomes might be technically difficult. Therefore, we investigate how sp-QCNN enhances the performance of machine learning in training with a fixed small number of shots. In situations with limited measurement resources, statistical errors in estimating the gradient of a loss function would disturb learning and degrade classification performance.
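The protocol used to evaluate \(r\) in Figs. 7 and 8 (repeat a fixed-shot experiment many times and compare the spreads of the resulting estimates) can be summarized as follows; the two sampler arguments are assumed to be provided by a simulator or hardware backend:

```python
import numpy as np

def relative_efficiency(sample_ns, sample_sp, n_shots=1000, n_rep=10000):
    """Estimate r = (sigma_0 / sigma_sp)^2 of Eq. (12).

    sample_ns(n_shots) -> array of n_shots single-qubit outcomes (+/-1),
    sample_sp(n_shots) -> array of n_shots values of z_avg (mean over qubits).
    """
    est_ns = [sample_ns(n_shots).mean() for _ in range(n_rep)]
    est_sp = [sample_sp(n_shots).mean() for _ in range(n_rep)]
    return np.var(est_ns) / np.var(est_sp)
```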
As the sp-QCNN ansatz, we use the unitary layer shown in Fig. 2 with \(d=5\), where the total number of parameters is \(60\) for \(n=8\) and \(80\) for \(n=16\). We also compare sp-QCNN with the conventional QCNN depicted in Fig. 1(a), where the convolutional and fully connected layers consist of two-qubit unitary gates parametrized as \(\prod_{j=1}^{15}e^{-i\theta_{j}P_{j}}\) (\(P_{j}=IX,IY,\cdots,ZZ\)). Since the gates acting in parallel share the same parameters, the number of independent parameters in conventional QCNN is \(75\) for \(n=8\) and \(105\) for \(n=16\). When measuring the gradient in the simulation, we match the shot number per parameter in the conventional and sp-QCNNs for each layer. Here, we use \(2m_{\theta}N_{\text{shot}}\) shots for parameter \(\theta\) in sp-QCNN and set \(N_{\text{shot}}=5\). We show that sp-QCNN suppresses statistical errors and accelerates the learning process. Figure 9 displays the changes in the loss function with and without statistical errors in the conventional and sp-QCNNs. In the absence of statistical errors (dashed lines), the loss function quickly converges for both QCNNs (the final value of the loss function in sp-QCNN is comparable, or even superior, to that of the conventional one, indicating that our ansatz has sufficient expressivity). In the presence of statistical errors (solid lines), the loss convergence becomes substantially slow in conventional QCNN, but the slowdown is relatively modest in sp-QCNN. This fast convergence in sp-QCNN stems from its high measurement efficiency. Whereas significant statistical errors disturb the rapid and stable optimization in conventional QCNN, the high measurement efficiency in sp-QCNN suppresses the statistical errors, stabilizing and accelerating the optimization. As shown in the figure, this improvement is more pronounced for \(n=16\) than for \(n=8\) because the measurement efficiency is improved by a factor of \(\mathcal{O}(n)\). The fast convergence of training is highly effective for near-term quantum devices where a long optimization run is impractical due to limited computational resources.

Figure 9: How statistical errors affect the learning process in conventional and sp-QCNNs for \(n=8\) (top) and \(16\) qubits (bottom). The orange (blue) solid and dashed lines denote the loss function with and without statistical errors in sp-QCNN (conventional QCNN), respectively. The shaded areas are the \(10\)th–\(90\)th percentiles of the loss function for \(50\) random initial parameters at each epoch. We match the number of shots per parameter to obtain the gradient in both QCNNs.

## VI Conclusions In this study, we have proposed a new QNN architecture, sp-QCNN, which reduces measurement costs by exploiting the translational symmetry of data as prior knowledge. In sp-QCNN, we symmetrize and split the QCNN circuit to parallelize the computation, thus improving the measurement efficiency. We have demonstrated the advantage of sp-QCNN for the quantum phase recognition task: it has high classification performance for this task and can improve the measurement efficiency by \(\mathcal{O}(n)\) times. In a realistic setting where measurement resources are limited, sp-QCNN can enhance the speed and stability of the learning process. These results present a new possibility for the symmetry-based architecture design of QNNs and bring us one step closer to achieving the quantum advantages of QCNN in near-term quantum devices. This work offers some research directions for the future. First, finding practical, applicable problems is crucial for quantum advantages since sp-QCNN can be used only for translationally symmetric data. The most promising application of sp-QCNN is the study of solids, where it could shed light on unsolved mysteries in condensed matter physics. The second direction is further study of symmetry-based architecture design to reduce measurement costs. Although this work has provided a new approach for QML, its coverage is limited to data with translational symmetry.
Hence, generalization to other symmetries, such as the space group, is intriguing and fruitful and may be applied to chemical molecules as well as solid-state materials. The third direction is to find a better ansatz. Although this work establishes the basis of sp-QCNN, the best \(V_{i}\) for a given problem remains unclear. Finding a more compact and expressive ansatz would be helpful to experimentally realize sp-QCNN. Finally, we provide several open issues on sp-QCNN. This work has shown that sp-QCNN has sufficient expressivity, trainability, and generalization to solve the phase recognition task. However, whether it can solve other complicated tasks remains unclear. In particular, the translational symmetry of \(V_{i}\) could suppress expressivity and limit the solvable tasks. Uncovering the possibilities and limitations of sp-QCNN is an important open issue. For trainability, elucidating whether barren plateaus exist in sp-QCNN is crucial. In conventional QCNN, barren plateaus do not appear due to its unique architecture: the logarithmic circuit depth and the locality of unitary operations and observables [44, 45, 46, 20]. Considering that sp-QCNN shares these properties with conventional QCNN, we expect that no barren plateaus will appear even in sp-QCNN. The results in this paper show that the training of sp-QCNN works well up to \(n=18\) qubits, supporting our hypothesis. More thorough analyses are necessary for complete verification. ## Acknowledgments Fruitful discussions with Masatoshi Ishii, Tomochika Kurita, Yuichi Kamata, and Yasuhiro Endo are gratefully acknowledged. ## Appendix A The number of eigenstates of \(Z_{\text{avg}}\) Here, we derive Eq. (18) of the main text, i.e., that the number of eigenstates of \(Z_{\text{avg}}\) for translationally symmetric states behaves as \[D_{n}(s)\sim\frac{1}{(1+s^{2})^{n/2}} \tag{A1}\] with an eigenvalue \(s\). In sp-QCNN, we measure \(Z_{\text{tot}}=\sum_{j}Z_{j}\), whose eigenvalues are \(\pm n,\pm(n-2),\cdots\), and obtain one of the eigenvalues every shot (for convenience, we consider \(Z_{\text{tot}}\) rather than \(Z_{\text{avg}}=Z_{\text{tot}}/n\)). In addition, the output state is translationally symmetric in sp-QCNN. Hence, for simplicity, we focus on the \(T\)-invariant eigenspace of \(Z_{\text{tot}}\) with an eigenvalue \(z\), \(V_{z}\) (i.e., \(T\left|\phi\right\rangle=\left|\phi\right\rangle\) and \(Z_{\text{tot}}\left|\phi\right\rangle=z\left|\phi\right\rangle\) for any \(\left|\phi\right\rangle\in V_{z}\)). Below, we investigate the dimension of \(V_{z}\). To this end, we introduce the cyclic group generated by \(T\), \[G_{n}=\{I,T,T^{2},\cdots,T^{n-1}\}. \tag{A2}\] Let \(M_{z}\) be the set of the eigenstates of \(Z_{\text{tot}}\) with an eigenvalue \(z\) in the computational basis (e.g., \(M_{n-2}=\{\left|10\cdots 00\right\rangle,\cdots,\left|00\cdots 01\right\rangle\}\)).
Then, we define an equivalence relation \(\sim\) induced by \(G_{n}\) on \(M_{z}\): for \(\left|a\right\rangle,\left|b\right\rangle\in M_{z}\), \(\left|a\right\rangle\sim\left|b\right\rangle\) holds if and only if \(g\left|a\right\rangle=\left|b\right\rangle\) for some \(g\in G_{n}\). We also define the equivalence class of \(\left|a\right\rangle\in M_{z}\) as \([a]=\{\,\left|x\right\rangle\in M_{z}\mid\left|x\right\rangle\sim\left|a\right\rangle\,\}\) and the quotient set as \(M_{z}/G_{n}=\{\,[a]\mid\left|a\right\rangle\in M_{z}\,\}\). The elements of \(M_{z}/G_{n}\) correspond one-to-one to the basis states of \(V_{z}\) via \(\left|\Psi_{i}\right\rangle=\sum_{\left|\phi\right\rangle\in\Phi_{i}}\left|\phi\right\rangle/\mathcal{N}\), where \(\left|\Psi_{i}\right\rangle\) is a basis state of \(V_{z}\), \(\Phi_{i}\) is an element of \(M_{z}/G_{n}\), and \(\mathcal{N}\) is the normalization factor (\(T\left|\Psi_{i}\right\rangle=\left|\Psi_{i}\right\rangle\) can be easily checked). Therefore, \(\text{dim}V_{z}=\left|M_{z}/G_{n}\right|\) holds, where \(\left|A\right|\) is the number of elements in \(A\). Using Burnside's lemma [58], we have \(\text{dim}V_{z}\) as follows: \[\text{dim}V_{z}=|M_{z}/G_{n}|=\frac{1}{\left|G_{n}\right|}\sum_{g\in G_{n}}|M_{z}^{g}|, \tag{A3}\] where \(M_{z}^{g}=\{\left|\phi\right\rangle\in M_{z}\mid g\left|\phi\right\rangle=\left|\phi\right\rangle\}\). Then, the following theorem holds. **Theorem 1**.: _For \(z\neq\pm n\), the following relation holds in the limit of \(n\rightarrow\infty\):_ \[F_{z}\equiv\text{dim}V_{z}\bigg{/}\left[\frac{1}{n}\binom{n}{\ell_{z}}\right]\xrightarrow{n\rightarrow\infty}1 \tag{A4}\] _with \(\ell_{z}=(n+z)/2\), where \(\binom{\cdot}{\cdot}\) denotes the binomial coefficient._ This theorem states that the asymptotic form of \(\text{dim}V_{z}\) is \(\binom{n}{\ell_{z}}/n\). Proof.: Since \(F_{z}=F_{-z}\) trivially holds, we focus on \(-n+2\leq z\leq 0\), or \(1\leq\ell_{z}\leq\lfloor n/2\rfloor\) (\(\lfloor\cdot\rfloor\) is the floor function). We first rewrite Eq. (A3) as \[\text{dim}V_{z}=\frac{|M_{z}^{I}|}{|G_{n}|}+\frac{1}{|G_{n}|}\sum_{g\in G_{n}\setminus\{I\}}|M_{z}^{g}|=\frac{1}{n}\binom{n}{\ell_{z}}+\frac{1}{n}\sum_{g\in G_{n}\setminus\{I\}}|M_{z}^{g}|. \tag{A5}\] Therefore, \(F_{z}\) is reduced to \[F_{z}=1+\sum_{g\in G_{n}\setminus\{I\}}|M_{z}^{g}|\Bigg{/}\binom{n}{\ell_{z}}\;. \tag{A6}\] We will evaluate the second term in this equation.
To calculate \(|M_{z}^{g}|\), we define the order of \(g\in G_{n}\), \(\chi(g)\), as the number of elements in the subgroup generated by \(g\) (i.e., \(\{g^{0},g^{1},g^{2},\cdots,g^{k-1}\}\) with \(g^{k}=I\)). Note that \(\chi(g)\) is a divisor of \(n\). Thereby, \(|M_{z}^{g}|\) is written as follows: \[|M_{z}^{g}|=\begin{cases}0&\ell_{z}/\chi(g)\notin\mathbb{Z}\\ \binom{n/\chi(g)}{\ell_{z}/\chi(g)}&\ell_{z}/\chi(g)\in\mathbb{Z}.\end{cases} \tag{A7}\] Figure 10 shows a graphical description of Eq. (A7) for the example of \(n=12,\ell_{z}=4\), and \(g=T^{3}\).

Figure 10: Illustration for calculating \(|M_{z}^{g}|\) with \(n=12,\ell_{z}=4\), and \(g=T^{3}\). Each white (black) circle indicates a single-qubit state of \(\left|0\right\rangle\) (\(\left|1\right\rangle\)), and \(n=12\) and \(\ell_{z}=4\) mean that there are eight (\(=n-\ell_{z}\)) white and four (\(=\ell_{z}\)) black circles in total. Given that the order of \(g\) is four (i.e., \(g^{4}=I\)), we first divide the qubits into four sets, each consisting of three (\(=n/\chi(g)\)) qubits. In all the sets, the configuration of white and black circles must be the same as each other because of the condition that \(g\) does not change the state. Therefore, each set has two white and one (\(=\ell_{z}/\chi(g)\)) black circles, and there are three (\(=\binom{n/\chi(g)}{\ell_{z}/\chi(g)}\)) possible configurations, shown in the figure.

Based on Eq. (A7), one can straightforwardly show that the second term in Eq. (A6) vanishes in the limit of \(n\to\infty\) for \(\ell_{z}=1,2\) by noticing that \(\chi(g)=1\) only for \(g=I\) and \(\chi(g)=2\) only for \(g=T^{n/2}\). Thus, we focus on \(3\leq\ell_{z}\leq\lfloor n/2\rfloor\). Because of \(\chi(g)\geq 2\) for \(g\neq I\), we have \[|M_{z}^{g}|\leq\binom{\lfloor n/2\rfloor}{\lfloor\ell_{z}/2\rfloor}. \tag{A8}\] This inequality can be shown by considering the properties of binomial coefficients: \(\binom{a}{b}<\binom{a^{\prime}}{b}\) (\(a<a^{\prime}\)) and \(\binom{a}{0}<\binom{a}{1}<\cdots<\binom{a}{\lfloor a/2\rfloor}\) (note that \(3\leq\ell_{z}\leq\lfloor n/2\rfloor\)). Using Eq. (A8), the second term in Eq. (A6) is bounded as follows: \[0\leq\sum_{g\in G_{n}\setminus\{I\}}|M_{z}^{g}|\Bigg{/}\binom{n}{\ell_{z}}\;\leq\;n\binom{\lfloor n/2\rfloor}{\lfloor\ell_{z}/2\rfloor}\Bigg{/}\binom{n}{\ell_{z}}\;\equiv\;A_{\ell_{z}}. \tag{A9}\] The right-hand side of this inequality, \(A_{\ell_{z}}\), approaches zero as \(n\to\infty\) for \(3\leq\ell_{z}\leq\lfloor n/2\rfloor\), which can be proven by showing \(A_{3}\xrightarrow{n\to\infty}0\) and \(0<A_{\lfloor n/2\rfloor}<\cdots<A_{4}<A_{3}\) from the definition of \(A_{\ell_{z}}\). Therefore, we have \[\sum_{g\in G_{n}\setminus\{I\}}|M_{z}^{g}|\Bigg{/}\binom{n}{\ell_{z}}\;\xrightarrow{n\to\infty}0. \tag{A10}\] As mentioned above, this limit holds true even for \(\ell_{z}=1,2\). Substituting Eq. (A10) into Eq. (A6), we obtain \[F_{z}\xrightarrow{n\to\infty}1 \tag{A11}\] for \(\ell_{z}\neq 0,n\). This theorem states that \(\text{dim}V_{z}\) asymptotically approaches \(\binom{n}{\ell_{z}}/n\) (except for \(\text{dim}V_{n}=\text{dim}V_{-n}=1\)). Using Stirling's formula (\(n!\sim\sqrt{2\pi n}(n/e)^{n}\)), we have \[\text{dim}V_{z}\sim 2^{n}\sqrt{\frac{2}{\pi n^{3}}}D_{n}(s), \tag{A12}\] where we have defined \[D_{n}(s)\equiv\left[(1+s)^{1+s+\frac{1}{n}}(1-s)^{1-s+\frac{1}{n}}\right]^{-n/2} \tag{A13}\] with \(s=z/n\). For large \(n\), \(D_{n}(s)\) rapidly decreases to vanish away from the origin. Therefore, we expand the denominator of \(D_{n}(s)\) in \(s\), obtaining \[D_{n}(s)=\left(1+(1+\mathcal{O}(1/n))s^{2}+\mathcal{O}(s^{4})\right)^{-n/2}\sim\frac{1}{(1+s^{2})^{n/2}}, \tag{A14}\] for sufficiently small \(s\). The width of \(D_{n}(s)\) is \(\mathcal{O}(1/\sqrt{n})\) in the limit of \(n\to\infty\), leading to the \(\mathcal{O}(n)\) times improvement of measurement efficiency in sp-QCNN (see Sec. IV.3). Finally, we remark that this discussion is approximately valid for large but finite \(n\), while this appendix considers the limit of \(n\to\infty\). In fact, in Sec. V.3, we have observed the clear \(\mathcal{O}(n)\) scaling for \(n=18\) at the beginning of training, where the output state is almost random.
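The counting argument above is straightforward to check numerically for small \(n\): the sketch below evaluates Burnside's formula (Eq. (A3)) with the fixed-point counts of Eq. (A7), using the fact that the order of \(T^{k}\) is \(n/\gcd(n,k)\), and compares \(\text{dim}V_{z}\) with the asymptotic form \(\binom{n}{\ell_{z}}/n\):

```python
from math import comb, gcd

def dim_Vz(n: int, z: int) -> int:
    """dim V_z via Burnside's lemma, Eqs. (A3) and (A7)."""
    ell = (n + z) // 2                 # number of qubits in state |1>
    total = 0
    for k in range(n):                 # g = T^k
        chi = n // gcd(n, k)           # order of T^k (chi = 1 for k = 0)
        if ell % chi == 0:
            total += comb(n // chi, ell // chi)
    return total // n                  # Burnside average (always an integer)

for n in (8, 12, 16):
    z = 0
    exact = dim_Vz(n, z)
    asymptotic = comb(n, (n + z) // 2) / n
    print(n, exact, asymptotic, exact / asymptotic)   # ratio F_z -> 1
```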
2303.13773
Graph Neural Networks for the Offline Nanosatellite Task Scheduling Problem
This study investigates how to schedule nanosatellite tasks more efficiently using Graph Neural Networks (GNNs). In the Offline Nanosatellite Task Scheduling (ONTS) problem, the goal is to find the optimal schedule for tasks to be carried out in orbit while taking into account Quality-of-Service (QoS) considerations such as priority, minimum and maximum activation events, execution time-frames, periods, and execution windows, as well as constraints on the satellite's power resources and the complexity of energy harvesting and management. The ONTS problem has been approached using conventional mathematical formulations and exact methods, but their applicability to challenging cases of the problem is limited. This study examines the use of GNNs in this context, which have been effectively applied to optimization problems such as the traveling salesman, scheduling, and facility placement problems. More specifically, we investigate whether GNNs can learn the complex structure of the ONTS problem with respect to feasibility and optimality of candidate solutions. Furthermore, we evaluate using GNN-based heuristic solutions to provide better solutions (w.r.t. the objective value) to the ONTS problem and reduce the optimization cost. Our experiments show that GNNs are not only able to learn feasibility and optimality for instances of the ONTS problem, but they can generalize to harder instances than those seen during training. Furthermore, the GNN-based heuristics improved the expected objective value of the best solution found under the time limit by 45%, and reduced the expected time to find a feasible solution by 35%, when compared to the SCIP (Solving Constraint Integer Programs) solver in its off-the-shelf configuration.
Bruno Machado Pacheco, Laio Oriel Seman, Cezar Antonio Rigo, Eduardo Camponogara, Eduardo Augusto Bezerra, Leandro dos Santos Coelho
2023-03-24T03:17:28Z
http://arxiv.org/abs/2303.13773v2
# A Graph Neural Network Approach to Nanosatellite Task Scheduling: ###### Abstract This study investigates how to schedule nanosatellite tasks more efficiently using Graph Neural Networks (GNNs). In the Offline Nanosatellite Task Scheduling (ONTS) problem, the goal is to find the optimal schedule for tasks to be carried out in orbit while taking into account Quality-of-Service (QoS) factors such as priority, minimum and maximum activation events, execution time-frames, periods, and execution windows, as well as the limitations of the satellite's power resources and the complexities of energy harvesting and management systems. The ONTS problem has been approached using conventional mathematical formulations and exact methods, but their applicability to challenging cases of the problem is limited. This study examines the use of GNNs in this context, which have been effectively applied to many optimization problems, including traveling salesman problems, scheduling problems, and facility placement problems. Here, we fully represent MILP instances of the ONTS problem as bipartite graphs. We apply a feature aggregation and message-passing methodology, combined with ReLU activation functions, to learn as in a classic deep learning model, obtaining an optimized set of parameters. Furthermore, we apply Explainable AI (XAI), another emerging field of research, to determine which features - nodes, constraints - had the most significant impact on learning performance, shedding light on the inner workings and decision process of such models. We also explored an early fixing approach, obtaining an accuracy above 80% both in predicting the feasibility of a solution and the probability of a decision variable value being in the optimal solution. Our results point to GNNs as a potentially effective method for scheduling nanosatellite tasks and shed light on the advantages of explainable machine learning models for challenging combinatorial optimization problems. keywords: Scheduling, Graph Neural Network, Combinatorial Optimization, Nanosatellite, Quality of Service. ## 1 Introduction Nanosatellites are gaining popularity for various applications, including Earth observation and scientific research. Despite the format's clear advantages, such as low cost and fast development time, its limited computational and energy resources make mission planning difficult. Scheduling tasks is essential to mission planning, as it maximizes resource usage and increases data quality, cost savings, and mission success. The Offline Nanosatellite Task Scheduling (ONTS) problem is crucial in developing, deploying, and operating nanosatellites in orbit. It involves finding the best schedule for task execution in orbit, taking into account Quality-of-Service (QoS) factors such as priority, minimum and maximum activation events, execution time-frames, periods, and execution windows, as well as the limitations of the satellite's power resources and the complexities of energy harvesting and management systems. Traditional mathematical formulations and exact algorithms have been proposed to solve the ONTS problem, ranging from Integer Programming (IP) [1] to Mixed Integer Programming (MILP) [2; 3] and Continuous-Time techniques [4]. More recently, given the difficulty in solving complex instances of the ONTS problem, Rigo et al.
[5] proposed a Dantzig-Wolfe decomposition and a branch-and-price (B&P) methodology to build a unique column-based formulation for producing feasible and optimal schedules. They also explored a Dynamic Programming (DP) technique to find optimal columns. Their computational experiments significantly improved the overall solution time compared to a commercial MILP solver, with a 70% reduction in computation time. The evolution of these formulations and methodologies highlights the continuous efforts to find the most efficient and effective solutions to the problem. Meanwhile, several recent investigations have considered machine learning tools to address combinatorial optimization problems [6; 7], such as the single machine problem [8], resource-constrained project scheduling [9], and knapsack problems [10]. Graph Neural Networks (GNNs), in particular, have gained popularity in recent years for solving combinatorial optimization problems when the underlying structure may be represented as a graph [11]. The problem's graph structure is employed to transfer information between nodes, and several iterations of message passing update the node representations. The final representations of nodes can be utilized to generate predictions or solve optimization problems. GNNs are well-suited for handling combinatorial optimization problems because they can model complex problem structures and convey information across nodes. They have been successfully used for many optimization problems, such as traveling salesman problems [12], scheduling tasks, and facility location problems. A popular approach has been the application of GNNs to learn variable selection for branching, either directly [13], with the help of a Markov decision process [14], or with multi-layer perceptrons [15]. For instance, the authors of [16] were the first to suggest a novel graph convolutional neural network model that uses the inherent variable-constraint bipartite graph representation of MILPs and is trained via imitation learning of a strong-branching expert method. They show, on complex problems, that the technique can beat expert-designed branching rules applied in cutting-edge solvers, generating policies that outperform state-of-the-art machine-learning approaches for branching. They concluded that their model was a superior design choice for branching in MILP, and subsequent work demonstrated the feasibility of their strategy on a larger range of combinatorial problems tested with graph-based reinforcement learning techniques. In the MILP framework, [15] presents a novel hybrid architecture for efficient branching on GPUs. For branching, they suggested an architecture that combines the capabilities of GNNs with computationally cheap multi-layer perceptrons (MLPs). The authors tested their technique on four classes of MILP problems and found that it reduced solver running time by up to 26% when compared to state-of-the-art solutions without a GPU, even when extrapolated to more complex problems than it was trained on. Further works explore GNNs in exact algorithms to select which cutting plane to add [17] or even to learn a parallel Lagrangean decomposition by encoding the duals on a bipartite graph [18]. Several researchers have also explored GNNs to learn and improve on heuristic approaches for solving combinatorial optimization problems [19], such as the large neighborhood search [20; 21]. In [22], a Neural Improvement (NI) model for graph-based problems is presented that can efficiently guide various hill-climbing algorithms.
The model leverages information stored in both nodes and edges and may be used to replace classic local-search algorithms while requiring less processing effort. The authors examine advanced models in order to avoid becoming locked in local optima, as well as to construct models for population-based metaheuristics. Experiments reveal that the NI model outperforms traditional variants for the preference ranking, traveling salesman, and graph partitioning problems. A general framework for augmenting MILP solvers with data-driven insights by predicting variable biases using GNN topologies is proposed by [23]. The predicted biases are used to steer the solver, substituting heuristic components by storing the variable-constraint interactions as bipartite graphs. The framework is demonstrated to significantly enhance the solver's performance on two classes of difficult binary MILPs, and it is extendable to additional important solver components. The work of [24] explores learning to fix variables early in iterative approximation approaches applied to IP problems. The authors model the early fixing as a Markov decision process and train it through imitation learning. They undertake comprehensive experiments on three typical IP applications and demonstrate that their technique may dramatically accelerate prior approximation methods by up to ten times in most situations while generating comparable or even better solutions. The authors also analyze the use of their suggested learning-based early fixing approach and potential prospects for increasing its efficacy. Beyond the pure application of GNN into optimization problems, this subfield of ML has been rapidly evolving as well, where unique new techniques have been proposed recently on how to explore the graph architecture in neural networks better [25], such as gated graphs [26], large graphs [27], and directional graphs [28]. In [29], a new pre-training strategy for graph datasets is introduced, named Graph Isomorphism Network with Edge Features (GINEConv). It involves training a GNN at both node and graph levels to learn good local and global representations simultaneously. The authors systematically studied pre-training on multiple graph classification datasets. Their proposed strategy significantly improves out-of-distribution generalization, achieving state-of-the-art performance. In contrast, [30] introduced the now widely used Graph Attention Network (GAT) technique later improved in [31]. GAT uses masked self-attention layers to allow nodes to attend to their neighbor's features and implicitly assign weights to different nodes without costly matrix operations or prior knowledge of the graph structure. Taking advantage of all this recent progress in GNN research and its successful application to optimization problems, this study proposes a novel solution methodology to the ONTs problem. By representing the problem as a bipartite graph, we leverage the robust representation learning capabilities of GNNs. The parameters of the MILP problem generate feature vectors fed into the model, allowing us to encode both the structure and parameters of each instance of the problem. GNNs can handle graphs of arbitrary size to handle optimization problems with varying numbers of variables and constraints. Our method employs two fully-connected, single-layer multilayer perceptron (MLP) networks with ReLU activations to encode the features of variables and constraints into hidden features that are updated using a two-step message-passing mechanism. 
The parameters can then be optimized similarly to conventional deep learning models. The proposed GNN model is then shown here to accelerate task scheduling by efficiently learning the relationships between tasks and resources and optimizing mission planning. Furthermore, research in Artificial Intelligence (AI) as a whole has recently focused on Explainable Artificial Intelligence (XAI), which analyzes the factors that impact solution quality and their interconnections in AI methodologies [32; 33], such as those of traffic classification [34] of variable selection [35]. Similarly, Explainable Graph Neural Networks (XGNN) can offer insights into the underlying mechanisms of these black-box models and assist researchers and users in understanding better how these models are producing predictions [36; 37] through parameterized explanations [38], probabilistic explanations [39], or attribution evaluation [40]. A model-neutral method for explaining the predictions of any Graph Neural Network (GNN) on any graph-based machine learning is presented in [41]. The method uses the recursive neighborhood-aggregation methodology of GNNs to pinpoint significant graph paths and pertinent node characteristic data sent along each edge. The approach uses relational structures, including rich node-featured graphs, and offers an interface for analyzing GNN predictions, troubleshooting GNN models, and spotting systematic error patterns. In the study, GnnExplainer is defined as an optimization job that optimizes the mutual information between the prediction of a GNN and the distribution of potential subgraph structures. In [42], the authors create graph analogs of three well-known explainability techniques for GNNs, including gradient-weighted CAM (Grad-CAM) and contrastive EB. These techniques include contrastive gradient-based saliency maps, class activation mapping, and excitation backpropagation (c-EB). Moreover, they examine the important sub-graphs derived from the explanations and note recurring trends. Our work also aims to explore explainability to understand better how a GNN and its attributes can contribute to the final quality of ONTS problem solutions, providing insights into this field of knowledge. This paper is organized as follows: Section two describes the problem statement in detail, providing context and background information. The third section describes how the problem was approached and the methods employed for the investigation. The computational experiments are described in detail in the fourth section, along with a succinct summary of the findings. This study finishes with section five, summarizing and discussing the key findings. ## 2 Problem Statement Given a set of jobs \(\mathcal{J}=\{0,...,J\}\) that represent a mission and a set of time units \(\mathcal{T}=\{0,...,T\}\) that represents the orbit period, the objective function (1) represents the goal of maximizing the mission quality of service (QoS) metric, which is represented as the sum of the priority values \(u_{j,t}\) for all the jobs \(j\) over all the periods \(t\). \[QoS:\ \max_{x_{j,t}}\ \underbrace{\sum_{j=1}^{J}\sum_{t=1}^{T}u_{j,t}x_{j,t}}_{ \text{Quality of Service}} \tag{1}\] Variable \(x_{jt}\) represents the binary decision of scheduling job \(j\) at time \(t\), which takes on value \(1\) if job \(j\) is scheduled to run at time \(t\) and \(0\) otherwise. In constraints (2a) to (2d), the variable \(\phi_{j,t}\) is used to describe the period between task executions. 
In essence, these equations enforce the relationship between \(\phi_{j,t}\) and \(x_{j,t}\) such that \(\phi_{j,t}\) assumes value 1 only in the time step in which task \(j\) starts running; \(\phi_{j,t}\) is later used to enforce the desired period between task executions in the scheduling problem.

\[\phi_{j,t}\geq x_{j,t}, \forall j\in\mathcal{J},\;t=1 \tag{2a}\]
\[\phi_{j,t}\geq x_{j,t}-x_{j,(t-1)}, \forall j\in\mathcal{J},\,\forall t\in\mathcal{T}:t>1 \tag{2b}\]
\[\phi_{j,t}\leq x_{j,t}, \forall j\in\mathcal{J},\,\forall t\in\mathcal{T} \tag{2c}\]
\[\phi_{j,t}\leq 2-x_{j,t}-x_{j,(t-1)}, \forall j\in\mathcal{J},\,\forall t\in\mathcal{T}:t>1 \tag{2d}\]
\[\sum_{t=1}^{w_{j}^{\min}}x_{j,t}=0, \forall j\in\mathcal{J} \tag{2e}\]
\[\sum_{t=w_{j}^{\max}+1}^{T}x_{j,t}=0, \forall j\in\mathcal{J} \tag{2f}\]
\[\sum_{l=t}^{t+t_{j}^{\min}-1}x_{j,l}\geq t_{j}^{\min}\phi_{j,t}, \forall t\in\{1,...,T-t_{j}^{\min}+1\},\,\forall j\in\mathcal{J} \tag{2g}\]
\[\sum_{l=t}^{t+t_{j}^{\max}}x_{j,l}\leq t_{j}^{\max}, \forall t\in\{1,...,T-t_{j}^{\max}\},\,\forall j\in\mathcal{J} \tag{2h}\]
\[\sum_{l=t}^{T}x_{j,l}\geq(T-t+1)\phi_{j,t}, \forall t\in\{T-t_{j}^{\min}+2,...,T\},\,\forall j\in\mathcal{J} \tag{2i}\]
\[\sum_{l=t}^{t+p_{j}^{\min}-1}\phi_{j,l}\leq 1, \forall t\in\{1,...,T-p_{j}^{\min}+1\},\,\forall j\in\mathcal{J} \tag{2j}\]
\[\sum_{l=t}^{t+p_{j}^{\max}-1}\phi_{j,l}\geq 1, \forall t\in\{1,...,T-p_{j}^{\max}+1\},\,\forall j\in\mathcal{J} \tag{2k}\]
\[\sum_{t=1}^{T}\phi_{j,t}\geq y_{j}^{\min}, \forall j\in\mathcal{J} \tag{2l}\]
\[\sum_{t=1}^{T}\phi_{j,t}\leq y_{j}^{\max}, \forall j\in\mathcal{J} \tag{2m}\]
\[\phi_{j,t}\in\{0,1\}, \forall j\in\mathcal{J},\,t\in\mathcal{T} \tag{2n}\]
\[x_{j,t}\in\{0,1\}, \forall j\in\mathcal{J},\,t\in\mathcal{T} \tag{2o}\]

The constraints (2e) and (2f) are related to the execution of tasks in a given time window. The first, (2e), states that the sum of the binary variables \(x_{j,t}\) over the time interval \([1,w_{j}^{\min}]\) must be equal to zero, for all \(j\in\mathcal{J}\). Here, \(w_{j}^{\min}\) is the time when task \(j\) can start execution, meaning that it cannot run before this time. The second type of constraint, (2f), states that the sum of the binary variables \(x_{j,t}\) over the time interval \([w_{j}^{\max}+1,T]\) must also be equal to zero, for all \(j\in\mathcal{J}\). Here, \(w_{j}^{\max}\) is the maximum allowed time window for task \(j\), and \(T\) is the total number of time steps in the scheduling horizon. This means that if task \(j\) starts, it must finish by the time point \(w_{j}^{\max}\); otherwise the task cannot be executed. These constraints enforce that the tasks are executed only within the specified time windows, which can be used to ensure that a payload, for instance, runs only when passing above a certain territory. Now, constraints (2g) ensure that if \(\phi_{j,t}\) is 1, meaning that task \(j\) started running at time \(t\), then at least \(t_{j}^{\min}\) units of \(x_{j,l}\) in the corresponding time window must also be \(1\). Similarly, (2h) ensures that the number of \(x_{j,l}\) values equal to \(1\) in the corresponding time window is limited by \(t_{j}^{\max}\). Complementarily, (2i) ensures that if \(\phi_{j,t}\) is 1, then all \(x_{j,l}\) values from \(t\) to the end of the time horizon must also be 1 so that, if a task starts at the end of the orbit, it executes until the final time step. Therefore, (2g) to (2i) ensure the task running time requirements are met.
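For concreteness, the startup linearization (2a)-(2d) takes only a few lines in a MILP modeling API. The sketch below uses gurobipy (the solver employed later in Section 4); the toy dimensions and all identifiers are our illustrative choices, not the paper's code.

```
# Hedged sketch: the startup linearization (2a)-(2d) in gurobipy.
# J, T and all names are illustrative, not the paper's instances.
import gurobipy as gp
from gurobipy import GRB

J, T = 3, 10  # toy number of jobs and time steps
m = gp.Model("onts_startup")
x = m.addVars(J, T, vtype=GRB.BINARY, name="x")      # job j runs at time t
phi = m.addVars(J, T, vtype=GRB.BINARY, name="phi")  # job j starts at time t

for j in range(J):
    m.addConstr(phi[j, 0] >= x[j, 0])                        # (2a), first time step
    for t in range(T):
        m.addConstr(phi[j, t] <= x[j, t])                    # (2c)
        if t >= 1:
            m.addConstr(phi[j, t] >= x[j, t] - x[j, t - 1])  # (2b)
            m.addConstr(phi[j, t] <= 2 - x[j, t] - x[j, t - 1])  # (2d)
```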
Constraints (2j) and (2k) state that the sum of \(\phi_{j,l}\) over any window of size \(p_{j}^{\min}\) must be at most 1, while over any window of size \(p_{j}^{\max}\) it must be at least 1, ensuring that the execution period of the task is respected. Constraint (2l) specifies that the sum of all values of \(\phi_{j,t}\) for job \(j\) must be greater than or equal to a lower limit \(y_{j}^{\min}\). This means that the job must be performed a minimum number of times within the given time period. Constraint (2m) specifies that the sum of all values of \(\phi_{j,t}\) for job \(j\) must be less than or equal to an upper limit \(y_{j}^{\max}\). This means that the job must be performed at most the specified maximum number of times within the given time period. Regarding the energy management formulations, equation (3b) calculates the energy balance at time step \(t\), \(b_{t}\), by subtracting the total power demanded by the scheduled tasks (\(\sum_{j}q_{j}x_{j,t}\)) from the power available from the solar panels (\(r_{t}\)). The second equation, (3c), calculates the current drawn from or delivered to the battery (\(i_{t}\)) at time step \(t\).

\[\sum_{j=1}^{J}q_{j}x_{j,t}\leq r_{t}+\gamma\:V_{b}, \forall t\in\mathcal{T} \tag{3a}\]
\[b_{t}=r_{t}-\sum_{j\in\mathcal{J}}q_{j}x_{j,t}, \forall t\in\mathcal{T} \tag{3b}\]
\[i_{t}=\frac{b_{t}}{V_{b}}, \forall t\in\mathcal{T} \tag{3c}\]
\[\text{SoC}_{t+1}=\text{SoC}_{t}+\frac{i_{t}\:e}{60\:Q}, \forall t\in\mathcal{T} \tag{3d}\]
\[\text{SoC}_{t}\leq 1, \forall t\in\mathcal{T} \tag{3e}\]
\[\text{SoC}_{t}\geq\rho, \forall t\in\mathcal{T} \tag{3f}\]

Equation (3d) establishes the state of charge (SoC) of the battery at every unit of time in the orbit. It is given as the sum of the state of charge at time \(t\) and the energy balance in this time step resulting from the current flowing in or out of the battery, expressed in terms of the battery capacity (\(Q\)). It also considers the battery charge and discharge efficiency (\(e\)). Constraints (3e) state that the State of Charge (SoC) at any time must be less than or equal to \(1\), meaning that the battery can never be overcharged. Complementarily, constraints (3f) state that the SoC at any time must be greater than or equal to \(\rho\); it is typical practice in such sensitive applications to impose large margins of safety. Finally, constraints (3a) ensure that the power demand does not exceed power availability: the battery can provide up to \(\gamma\cdot V_{b}\) Watts of power.

## 3 Methodology

The most traditional approach to using a deep learning model on a task whose input is an optimization problem is to vectorize all problem parameters and feed them to a traditional neural network such as a multilayer perceptron (MLP) [43]. However, this might not even be possible, as different instances of a problem might have different constraints and/or variables and, thus, would yield vectors of varying sizes to the deep learning model. Furthermore, even if the dimensions are fixed, traditional neural networks do not exploit the symmetries that exist in an optimization problem. There is no inherent ordering of the constraints nor of the variables of an optimization problem, i.e., an instance is not changed if we change the order of the rows (columns) of \(A\) along with the elements of \(b\) (\(c\)). By vectorizing these elements, we impose an order, upon which the output of the neural network will depend.
Therefore, these symmetries are not embedded in the structure of a traditional deep learning model, and would need to be enforced during training, i.e., the model would need to learn these symmetries. This might slow down significantly the training and will not give any guarantees that the model will generalize with the given knowledge. ### Graph Neural Networks Graph neural networks, or message-passing neural networks, are generalizations of convolutional neural networks from grid-structured data to graphs. GNNs work by propagating features between neighboring nodes recurrently. Let \(G=(V,E)\) be a graph and \(H^{(0)}\in\mathbb{R}^{n\times d}\) an initial feature matrix associated with the nodes, in which each row \(h_{v}^{(0)}\in\mathbb{R}^{d}\) is the feature vector of node \(v\in V\). At each layer \(l\in\{1,...,L\}\) of the GNN, and for each node \(v\in V\), we first compute the messages \(m_{u,v}^{(l)}\) propagated by its neighbors \(u\in\mathcal{N}(v)\) based on their features \(h_{u}^{(l-1)}\), \[m_{u,v}^{(l)}=M_{l}(h_{u}^{(l-1)}),\:u\in\mathcal{N}(v), \tag{4}\] where \(\mathcal{N}(v)\) represents the set of neighbors of \(v\), and \(M_{l}(\cdot)\) is the message function of layer \(l\). Then, the features of node \(v\) are updated with the information from these messages \[h_{v}^{(l)}=U_{l}\left(h_{v}^{(l-1)},\texttt{Aggregation}\left(\{m_{u,v}^{(l) }:u\in\mathcal{N}(v)\}\right)\right), \tag{5}\] where \(U_{l}(\cdot)\) is the update function of layer \(l\), and \(\texttt{Aggregation}\) is a function that receives multiple message vectors and returns a single vector. The most usual choice for \(\texttt{Aggregation}\) is the sum, but many are the possibilities, such as \[\texttt{Aggregation}=\begin{cases}\frac{1}{|\mathcal{N}(v)|}\sum\limits_{u\in \mathcal{N}(v)}m_{u,v}^{(l)},&\text{if mean}\\ \max\limits_{u\in\mathcal{N}(v)}m_{u,v}^{(l)},&\text{if max}\\ \sum\limits_{u\in\mathcal{N}(v)}m_{u,v}^{(l)},&\text{if sum}\\ \sum\limits_{u\in\mathcal{N}(v)}\alpha_{u,v}m_{u,v}^{(l)},&\text{if attention}\end{cases} \tag{6}\] where \(\alpha_{u,v}\) is the attention weight for node \(u\) given node \(v\). The attention weights can be learned using a neural network or other techniques. Furthermore, the message function can easily be extended to consider the edge weight (or even edge features) along with the feature vectors of the neighbors. A common approach is to define the message functions \(M_{l},l=1,\ldots,L\) as linear operators over the hidden features of the neighbors, aggregate these messages by summing, and use the ReLU activation function with a bias as the update functions \(U_{l},l=1,\ldots,L\). We use the approach of [44] as a reference point and write \[\begin{split} m_{u,v}^{(l)}&=\frac{1}{c_{vu}}W^{(l) }h_{u}^{(l-1)},\,u\in\mathcal{N}(v)\\ h_{v}^{(l)}&=\text{ReLU}\left(b^{(l)}+\sum\limits_{u\in \mathcal{N}(v)}m_{u,v}^{(l)}\right)\end{split} \tag{7}\] where \(c_{vu}=\sqrt{|\mathcal{N}(u)|}\sqrt{|\mathcal{N}(v)|}\) with \(|\mathcal{N}(v)|\) denoting the number of neighbors, and \(W^{(l)}\in\mathbb{R}^{d\times d},b^{(l)}\in\mathbb{R}^{d}\) are (learnable) parameters. A more recent method was proposed by [27] and named SAGE (SAmple and aGgrEgate). The authors propose to directly aggregate the features of the neighbors, i.e., to use the identity as the message function and apply a linear operator with a nonlinear activation as the update function. 
In equation form, \[\begin{split} m_{u,v}^{(l)}&=h_{u}^{(l-1)},\,u\in \mathcal{N}(v)\\ h_{v}^{(l)}&=\text{ReLU}\left(b^{(l)}+W_{1}^{(l)}h_{v}^{(l-1)}+W_{2}^{(l)}\texttt{Aggregation}(m_{u,v}^{(l)},\,u\in\mathcal{N}(v))\right)\end{split} \tag{8}\] where \(W_{1}^{(l)},W_{2}^{(l)}\in\mathbb{R}^{d\times d},b^{(l)}\in\mathbb{R}^{d}\) are the parameters. The authors suggest using more complex aggregation operators, such as an LSTM and a fully-connected single-layer neural network followed by a pooling operation (element-wise maximum). After recurrent message-passing operations through the \(L\) layers of a GNN, \(H^{(L)}\) can be further aggregated to generate a single feature vector for the entire graph. The GNN can be trained end-to-end by minimizing a prediction loss based on its outputs, optimizing its parameters (_e.g._, \(W^{(l)}\) and \(b^{(l)}\) of (7)) in the same way as a traditional deep learning model.

### GNNs for Combinatorics

Given a linear problem, we can build a graph \(G=(V,E)\) in which we add one node for each variable of the problem, one node for each constraint, and connect each variable node to constraint nodes whenever the coefficient of the respective variable is not null in the respective constraint. More precisely, given the problem of the form \[\begin{split}\max&\quad c^{T}x\\ \text{s.t.:}&\quad Ax\geq b\end{split} \tag{9}\] where \(x\in\mathbb{R}^{n}\) and \(b\in\mathbb{R}^{m}\), we can build a graph \(G=(V_{\text{var}}\cup V_{\text{con}},E)\), in which \(|V_{\text{var}}|=n\), \(|V_{\text{con}}|=m\), and \(E=\{(v_{var,j},v_{con,i}):A_{i,j}\neq 0\}\). Intuitively, the graph represents the structure of the problem at hand, i.e., the relationship between variables and constraints. Note that this approach yields a bipartite graph, that is, a graph in which the nodes are separated into two disjoint sets, \(V_{\text{var}}\) and \(V_{\text{con}}\), with edges connecting only nodes from different sets. For illustration purposes, consider an optimization problem like Eq. 9 with \[c=\begin{bmatrix}1\\ 2\\ 3\end{bmatrix};\;A=\begin{bmatrix}1&2&0\\ 0&1&-1\\ 3&0&1\end{bmatrix};\;b=\begin{bmatrix}2\\ 1\\ 4\end{bmatrix} \tag{10}\] and \(x=[x_{1},x_{2},x_{3}]^{T}\). The bipartite graph can be represented as in Figure 1. By representing an optimization problem as a graph, we can feed it to a GNN. The parameters of the optimization problem can be used to generate the feature vectors fed to the model, enabling us to codify not only the structure but also the parameters of any given instance of the problem. Because of the convolutional nature of the message-passing iterations of GNNs, the model can deal with arbitrary-sized graphs, which enables us to handle optimization problems with varying numbers of variables and constraints with the same GNN. Furthermore, the aggregation of messages is invariant to the ordering of the neighboring nodes (see Eq. (6)), which corresponds precisely to the symmetries of the optimization problem (order of variables and constraints).

### SatGNN

We name _SatGNN_ the network that serves as a basis for the experiments reported in the next section. To encode the features associated with the variables and the constraints into the hidden features of the first layer \(H^{(0)}\in\mathbb{R}^{(n+m)\times d}\), we use two fully-connected, single-layer MLPs, \(\text{NN}_{\text{var}}\) and \(\text{NN}_{\text{con}}\), with ReLU activations.
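Before formalizing this encoding, it may help to see the bipartite construction in code. The following PyTorch sketch is our illustration, not the authors' implementation; the hidden size and the numeric encoding of the constraint sense \(s_{i}\) are assumptions. It builds the graph of the toy problem of Eq. (10) and computes initial node features with two single-layer MLPs mirroring \(\text{NN}_{\text{var}}\) and \(\text{NN}_{\text{con}}\).

```
# Hedged sketch: bipartite graph of Eq. (10) plus initial feature encoding.
import torch
import torch.nn as nn

c = torch.tensor([1., 2., 3.])                 # objective coefficients
A = torch.tensor([[1., 2., 0.],
                  [0., 1., -1.],
                  [3., 0., 1.]])               # constraint matrix
b = torch.tensor([2., 1., 4.])                 # right-hand side

# One node per variable and per constraint; an edge wherever A[i, j] != 0.
con_idx, var_idx = torch.nonzero(A, as_tuple=True)
edge_weight = A[con_idx, var_idx]              # weights carried by the edges

d = 8                                          # hidden feature size (assumed)
nn_var = nn.Sequential(nn.Linear(2, d), nn.ReLU())
nn_con = nn.Sequential(nn.Linear(2, d), nn.ReLU())

x_hat = torch.zeros_like(c)                    # some candidate solution
s = torch.ones_like(b)                         # ">=" sense encoded as 1.0 (assumption)
h_var = nn_var(torch.stack([x_hat, c], dim=1))  # H^(0) rows for variable nodes
h_con = nn_con(torch.stack([b, s], dim=1))      # H^(0) rows for constraint nodes
```

The formal definition of this input encoding follows.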
We can write \[h_{v}^{(0)}=\begin{cases}\text{NN}_{\text{var}}(f_{v}),&v\in V_{\text{var}}\\ \text{NN}_{\text{con}}(f_{v}),&v\in V_{\text{con}}\end{cases},\] where \(f_{v}\) is the vector of features associated with each constraint or variable node. In our experiments, for a node \(v_{var,i}\in V_{\text{var}}\) associated with variable \(x_{i}\), the feature vector is \(f_{v_{var,i}}=(\hat{x}_{i},c_{i})\), where \(\hat{x}\) is a candidate solution and \(c_{i}\) is the weight of variable \(x_{i}\) in the objective function. Likewise, for a constraint node \(v_{con,i}\in V_{\text{con}}\) associated with the \(i\)-th constraint, \(f_{v_{con,i}}=(b_{i},s_{i})\), where \(s_{i}\in\{=,\geq,\leq\}\) models the constraint type. At the core of the model are \(MP\) operators, which update the hidden features of the nodes through message-passing, as described in Section 3.1, \[h_{v}^{(l)}=\text{MP}_{l}(h_{v}^{(l-1)},H_{\mathcal{N}(v)}^{(l-1)},w_{\mathcal{N}(v)}),\] where \(H_{\mathcal{N}(v)}^{(l-1)}=\{h_{u}^{(l-1)}:u\in\mathcal{N}(v)\}\) is the set of hidden features of the neighbors of the target node and \(w_{\mathcal{N}(v)}=\{w_{u,v}:u\in\mathcal{N}(v)\}\) is the set of edge weights. However, we generalize this operator to apply multiple convolutions at each layer instead of a single one, within the same neighborhood context. The application of an MP operator with \(K\) convolutions is illustrated in Algorithm 1. This allows more complex features to be extracted at each model layer.

```
Data: Target node \(v\), node features \(h_{v}\), neighbors' features \(H_{\mathcal{N}(v)}\), edge weights \(w_{u,v},\forall u\in\mathcal{N}(v)\).
Result: Updated node features \(h_{v}^{*}\).
\(h^{(0)}\gets h_{v}\)
for \(k\gets 1\) to \(K\) do
  \(h^{(k)}\gets U^{(k)}\left(h^{(k-1)},\texttt{Aggregation}^{(k)}\left(M^{(k)}(h_{u},w_{u,v}):u\in\mathcal{N}(v)\right)\right)\)
end for
\(h_{v}^{*}\gets h^{(K)}\)
```
**Algorithm 1** Application of an \(MP\) operator with \(K\) convolutions to update the features of a node \(v\) through message-passing.

The message-passing is split into two steps, one for each set of nodes, similar to the approach of [16]. At each layer, the messages are propagated first from the variable nodes to the constraint nodes and then from the constraint nodes to the variable nodes, exploiting the bipartite nature of the graph. Algorithm 2 describes this process in further detail.

Figure 1: Bipartite graph representation of Eq. (10).

Finally, the output of the model is generated from the last hidden feature vectors of the variable nodes, yielding an output in the same shape as the problem's variable vector. The hidden features are fed to an MLP with two hidden layers and ReLU activations, \(\text{NN}_{\text{out}}\), which maps each \(d\)-dimensional vector into a single output, _i.e._, \[\hat{y}_{v}=\text{NN}_{\text{out}}(h_{v}^{(L)}),\forall v\in V_{\text{var}}.\] Figure 2 shows an overview of the architecture.

### XAI for GNN

Explainable Artificial Intelligence (XAI) refers to the development of AI models that can explain their predictions and decisions in human-comprehensible terms. As AI models are increasingly utilized in high-stakes domains, such as healthcare and finance, where understanding why a model makes a specific prediction is critical, XAI is gaining importance. XAI can be performed in the context of GNNs by employing interpretable models or interpretability approaches that reveal the inner workings of the GNN model.
For example, feature importance analysis can be used to determine which graph features are most important for the prediction made by the GNN. The intermediate representations learned by the GNN can also be visualized using the activations visualization technique. These illustrations help show how the GNN analyzes data and generates predictions. For instance, the authors in [45] introduce GNNExplainer, a new approach for explaining Graph Neural Network (GNN) predictions in graph-based machine learning tasks. GNNs are powerful, but explaining their predictions is challenging due to their complexity. GNNExplainer identifies a compact subgraph, and a subset of node features important for GNN predictions, allowing for consistent and concise explanations across instances. The approach maximizes the mutual information between GNN predictions and subgraph structures. Experiments show that GNNExplainer outperforms other methods and can identify important graph structures and node features, providing interpretability and insights into faulty GNNs. Figure 2: Overview of the components of _SatGNN_ and the operations it performs given an optimization problem (embedded as a bipartite graph \(G\)). \(H\) variables represent the sets of hidden features of the nodes. The connection of \(G\) and both \(MP\) operators represents both the weights of the edges as well as the neighborhood information. ### Hyperparameter Optimization Hyperparameter optimization is an essential task in machine learning that involves adjusting the hyperparameters of a model so that its performance is maximized. Hyperparameters are not learned during the training and must be defined before the training begins, e.g., learning rate, number of layers, number of learnable parameters, and regularization factor. Adjusting the hyperparameters can be done manually, through trial-and-error iterations, using an expert's intuition, or automatically treating the relationship between the hyperparameters and the model performance as a black-box function. Optuna is a Python library that provides a flexible and efficient platform for hyperparameter optimization using a variety of algorithms, including TPE (Tree-structured Parzen Estimator). TPE is a Bayesian optimization variant that models the hyperparameter distribution using a tree-structured Parzen estimator. Let \(\mathbf{x}=(x_{1},x_{2},...,x_{d})\) denote a set of \(d\) hyperparameters, and let \(f(\mathbf{x})\) be the cost function to be optimized. The TPE algorithm divides the hyperparameter space into two regions: a region containing the hyperparameters that have been observed to result in a good performance, denoted by \(\mathbf{u}\), and a region containing the hyperparameters that have not been observed to result in a good performance, denoted by \(\mathbf{v}\). TPE constructs the acquisition function \(a(\mathbf{x})\) as follows: \[a(\mathbf{x})=\frac{p_{\mathbf{u}}(\mathbf{x})}{p_{\mathbf{v}}(\mathbf{x})}, \tag{11}\] where \(p_{\mathbf{u}}(\mathbf{x})\) and \(p_{\mathbf{v}}(\mathbf{x})\) are the density functions estimated by the tree-structured Parzen estimator for the regions \(\mathbf{u}\) and \(\mathbf{v}\), respectively. 
Specifically, \(p_{\mathbf{u}}(\mathbf{x})\) and \(p_{\mathbf{v}}(\mathbf{x})\) are estimated as follows: \[p_{\mathbf{u}}(\mathbf{x})=\frac{1}{|S_{\mathbf{u}}|}\sum_{\mathbf{y}\in S_{\mathbf{u}}}K\left(\frac{\mathbf{x}-\mathbf{y}}{\sigma_{\mathbf{u}}}\right), \tag{12}\] and \[p_{\mathbf{v}}(\mathbf{x})=\frac{1}{|S_{\mathbf{v}}|}\sum_{\mathbf{y}\in S_{\mathbf{v}}}K\left(\frac{\mathbf{x}-\mathbf{y}}{\sigma_{\mathbf{v}}}\right), \tag{13}\] where \(S_{\mathbf{u}}\) and \(S_{\mathbf{v}}\) are the sets of hyperparameters observed in regions \(\mathbf{u}\) and \(\mathbf{v}\), respectively, \(K(\cdot)\) is a kernel function, and \(\sigma_{\mathbf{u}}\) and \(\sigma_{\mathbf{v}}\) are bandwidth parameters. The TPE algorithm selects the next set of hyperparameters to evaluate by maximizing the acquisition function: \[\mathbf{x}_{\text{next}}=\arg\max_{\mathbf{x}}a(\mathbf{x}). \tag{14}\] This process is repeated until the optimal set of hyperparameters is found or a stopping criterion is met.

### Data

To supply the models with data suitable for learning the tasks of interest, we generate new instances of the ONTS problem on demand, including the energy input and task QoS parameters. These methodologies are briefly presented in the following subsections and were previously published in [46].

#### 3.6.1 Power Input Vector

An analytical model has been used to determine the power input vector of each instance of the ONTS problem considered in this study. Since the orbit is stable and the solar flux constant (\(1360\,W/m^{2}\)), one can calculate this vector by knowing the spacecraft orbit, attitude (its kinematics) and size. We have taken the FloripaSat-I mission as a reference for orbital data, with an altitude of 628 kilometers and an orbital period of 97.2 minutes [47]. The attitude considered here is nadir-pointing, in which the satellite turns at the same rate as it orbits the Earth, so one side (or axis) always faces the Earth's surface. This analytical model then utilizes a rotation matrix to simulate the satellite's dynamics and can be adapted for larger or different geometries by adjusting the normal vectors representing the body. For this study, we considered a 3U nanosatellite size. The power generated by the photovoltaic panels on each of the CubeSat's six sides depends on the efficiency of the cells, the area of the cells, the view factor of the surface to the Sun, and a step function that accounts for the satellite's location with respect to Earth's shadow, as: \[P_{k}=\eta A_{pv_{k}}I_{Sun}F_{k\to Sun}\Psi, \tag{15}\] where \(\eta\) and \(A_{pv}\) are the solar cell efficiency and area, respectively; \(I_{Sun}\) is the solar flux; \(F_{k\to Sun}\) is the cell projection to the Sun; and \(\Psi\) is a step function that takes a value of zero when the spacecraft is in the shade of the Earth. More details about the equations used can be found in [48].

#### 3.6.2 Task Parameters

For any particular mission size and orbital length, the main objective is to generate a realistic ONTS case using random data. The number of tasks or time units can be increased to make the instances distinct. Algorithm 3 presents the instance generator technique that has been used here to accomplish this. It requires two inputs: the number of tasks (\(J\)) and the time units (\(T\)) to be taken into account. The process produces ten parameters for each task: \(u_{j}\), \(q_{j}\), \(y_{j}^{\min}\), \(y_{j}^{\max}\), \(t_{j}^{\min}\), \(t_{j}^{\max}\), \(p_{j}^{\min}\), \(p_{j}^{\max}\), \(w_{j}^{\min}\), and \(w_{j}^{\max}\).
These parameters completely describe an instance of the ONTS problem regarding the QoS aspects.

```
Input : Number of jobs \(J\), number of time periods \(T\)
Output : Initial values for \(u_{j},q_{j},y_{j}^{\min},y_{j}^{\max},t_{j}^{\min},t_{j}^{\max},p_{j}^{\min},p_{j}^{\max},w_{j}^{\min},w_{j}^{\max}\)
for \(j\gets 1\) to \(J\) do
  \(u_{j}\leftarrow\text{U}(1,J)\);
  \(q_{j}\leftarrow\text{U}(0.3,2.5)\);
  \(y_{j}^{\min}\leftarrow\text{U}[1,[T/45]]\);
  \(y_{j}^{\max}\leftarrow\text{U}[y_{j}^{\min},[T/15]]\);
  \(t_{j}^{\min}\leftarrow\text{U}[1,[T/10]]\);
  \(t_{j}^{\max}\leftarrow\text{U}[t_{j}^{\min},[T/4]]\);
  \(p_{j}^{\min}\leftarrow\text{U}[t_{j}^{\min},[T/4]]\);
  \(p_{j}^{\max}\leftarrow\text{U}[p_{j}^{\min},T]\);
  \(w_{j}^{\min}\leftarrow\text{U}[0,[T/5]]\);
  \(w_{j}^{\max}\leftarrow\text{U}[[T/5]+1,T]\);
end for
```
**Algorithm 3** Instance Generator Algorithm

## 4 Computational Experiments

The experiments for this paper were conducted in Python, using the PyTorch and DGL libraries and the Gurobi solver, on a server with an Intel i7-12700 (12 cores, 20 threads), 16 GB of RAM, and Ubuntu 22.04.1 LTS 64 bits. An NVIDIA RTX A4000 was used to speed up the DGL library calculations. In the following sections, we present three experiments that utilize GNNs for different optimization tasks on ONTS instances. In the first experiment, we propose a GNN-based approach to classify the feasibility of candidate solutions given problem instances. In the second experiment, we take a step further and propose a GNN to predict the consistency of each variable of a candidate solution with the maximization of the objective. Finally, in the last experiment, we use the GNN from experiment 2 to generate candidates suitable for early fixing the binary variables of the problem.

### Experiment 1 - Feasibility Classification

In the first experiment, we aim to predict the feasibility of a candidate solution using GNNs. First, we tackle the ONTS problem for a single job, without constraints (3a) to (3f), which turns it into an example of a task scheduling problem. Then, we generalize this approach to the complete ONTS problem.

#### 4.1.1 Single Job Scheduling

For an instance \(I\) of the ONTS problem, we train SatGNN on data from multiple jobs \(j\in\mathcal{J}\), to learn the problem's underlying structure and to generalize to unseen jobs. As detailed in Section 3.3, we represent the task scheduling problem for a given job in a CubeSat as a bipartite graph. More specifically, we focus on an instance of the ONTS problem with 97 time steps and 9 jobs; therefore, each of the nine job scheduling problems has 194 variables and several constraints ranging from X to Y. The bipartite graph for a given job of the instance and the feature vectors are fed to SatGNN. We implement the model with a single regular convolution (see Eq. 7) in the message-passing operators (\(K=1\)), weight sharing between the operators (\(MP_{\text{con}}=MP_{\text{var}}\)), and a single message-passing iteration (\(L=1\)). Furthermore, as the task at hand requires a classification of the entire candidate solution, we aggregate the outputs into a single value \[\hat{y}=\sigma\left(\frac{1}{n}\sum_{v\in V_{\text{var}}}\hat{y}_{v}\right),\] where \(\sigma:\mathbb{R}\rightarrow[0,1]\) is the sigmoid function. We first build a dataset of random candidate solutions alongside their feasibility to train the model.
More precisely, we build a dataset \(\mathcal{D}\) composed of tuples \((\hat{x},j,y)\in\mathbb{Z}^{n}\times\mathcal{J}\times\{0,1\}\) in which \(\hat{x}\) is a candidate solution, and \(y\) takes the value \(1\) whenever \(\hat{x}\) is feasible for the task scheduling problem defined by job \(j\), being \(0\) otherwise. For each of the 9 jobs of the selected instance of the ONTS problem, we generate 1000 pseudo-random candidate solutions, half of which are feasible. The feasible candidate solutions are generated by solving the optimization problem with Gurobi and retrieving a sample of solutions near the optima. The infeasible candidate solutions are generated by perturbing the decision variables (and updating the non-decision variables accordingly) of the feasible solutions until they violate some constraint. Therefore, \(\mathcal{D}\) is a balanced dataset with 9000 elements. For training, we first split the dataset into training, validation, and test sets \(\mathcal{D}_{\text{train}}\cup\mathcal{D}_{\text{val}}\cup\mathcal{D}_{\text{test}}=\mathcal{D}\) in such a way that \(\mathcal{D}_{\text{test}}\) contains all elements of \(\mathcal{D}\) associated with one of the jobs, \(\mathcal{D}_{\text{val}}\) contains all elements associated with a different job, and \(\mathcal{D}_{\text{train}}\) contains the elements associated with the remaining 7 jobs, i.e., no job is present in more than one set. We then optimize the parameters of the GNN to minimize the binary cross-entropy between the predicted feasibility and the actual feasibility of the candidate solutions \[\sum_{(\hat{x},j,y)\in\mathcal{D}_{\text{train}}}-y\log(\text{GNN}(\hat{x},j))-(1-y)\log(1-\text{GNN}(\hat{x},j)),\] where \(\text{GNN}(\hat{x},j)\) is the model's predicted probability of \(y=1\), and we evaluate the models based on their accuracy on data unseen during training. The training is performed with the Adam optimizer [49] with a budget of 100 epochs. We observed that the model's performance was highly dependent on the initialization. Therefore, multiple models were trained, with random Glorot uniform weight initialization, as described in [44]. We then select the best model on the validation set and evaluate it on the test set. The performance of the model on \(\mathcal{D}_{\text{test}}\) can be seen in Table 1.

#### 4.1.2 ONTS Problem

Seeing that the feasibility classifier GNN could learn and perform on unseen samples of the task scheduling problem, we generalize the approach described above to the complete ONTS problem. The graph is constructed in the same way, but now considering all jobs and all constraints described in Section 2. Therefore, the problem now has 9 times more integer variables and several others necessary for the coupling constraints (3a) to (3f). The architecture is also identical, with the only exception being the output aggregation. Since the continuous variables can be determined entirely from the values of the binary variables, the prediction is made solely with features from the nodes that correspond to the binary variables. In other words, let \(V_{\text{var}}=V_{\text{int}}\cup V_{\text{cont}}\), where \(V_{\text{int}}\) is the set of nodes associated with the integer variables and \(V_{\text{cont}}\) is the set of nodes associated with the continuous variables. Then, the output is computed as \[\hat{y}=\sigma\left(\frac{1}{n}\sum_{v\in V_{\text{int}}}\hat{y}_{v}\right).\] The dataset is built with 21 different instances of the ONTS problem.
For each instance, we generate 1000 pseudo-random candidate solutions following the same approach as previously. Once again, we split the data into training, validation, and test sets, with the samples from two instances on the test set, the samples from two other instances on the validation set, and the samples from the remaining 17 instances on the training set. As a similar initialization impact was observed, we followed the same procedure for model selection based on validation performance. The performance of the final model can be seen in Table 1.

\begin{table} \begin{tabular}{l c c} \hline \hline Task & Accuracy & F1 score \\ \hline Single Job Feasibility Classification & 83.0\% & 0.8172 \\ Full ONTS Feasibility Classification & 75.2\% & 0.7956 \\ \hline \hline \end{tabular} \end{table} Table 1: SatGNN’s test set performance on the feasibility classification task.

For an additional analysis, as can be seen in Figure 3, the most relevant variables for the decision process were those regarding \(\phi\), which determine the exact startup time of a task; since \(x\) is heavily dependent on \(\phi\), this helps the model make better overall choices.

#### 4.1.3 XAI on Feasibility Classification

In this sub-experiment, we investigate which inputs had the greatest impact on classifying a solution as feasible or not feasible for both the single-job problem and the complete ONTS problem. First, GNNExplainer was used on the trained SatGNN model to determine which variable nodes, constraint nodes or even features in the bipartite graph had the most influence on the feasibility categorization of a solution. By identifying the most pertinent nodes and constraints, we better understood the GNN model's decision-making process, which could improve the model's interpretability and lead to the design of more efficient solutions for the task scheduling problem. Considering the graph explanation, more specifically the edge types "var2con" and "con2var", it was possible to notice that neither edge type had significantly more importance than the other; also, no linear correlation was observed between them, as shown in Figure 4. On the other hand, when trying to summarize the most important nodes and edges involved in the output decision as a sub-graph, the number of edges could only be reduced from \(7160\) to \(5722\), and the number of variables from \(194\) to \(139\), which once again emphasizes that most of the variables and connections matter for the final decision.

Figure 4: Edge mask from the graph explanation for “var2con” and “con2var”, considering the correlation between the two edge types as a mean over 50 instances.

Figure 3: Feature importance regarding the model’s decision variables; it is possible to observe that \(\phi\) is more easily distinguished by the GNN, which directly influences the feasible region of \(x\).

### Experiment 2 - Optimality Classification

In our second experiment, we aim to predict the probability of each integer variable in a given candidate solution being consistent with the maximization of the objective of the ONTS problem. In other words, given a candidate solution \(\hat{x}\in\{0,1\}^{n}\), the ideal output would be a vector \(y\in[0,1]^{n}\) in which \(y_{i}=\mathbf{1}(x_{i}^{*}=\hat{x}_{i})\), where \(x^{*}\) is the optimal solution. We represent the problem as a bipartite graph and apply SatGNN to classify the variables. Differently from the experiments above, we do not need to aggregate the output of SatGNN.
Instead, we apply the sigmoid function directly to the output of each node associated with the integer variables. We build a dataset with the same 21 instances of the ONTS problem as in the previous experiment. We generate pseudo-random candidate solutions and their label for each instance, which is computed given the optimal solution. The optimal solution for each of the 21 instances was found using the Gurobi solver. The dataset was divided into training, validation, and test sets, with validation and test having the data from two instances each and the training set with the data from the remaining 17 instances. The instances in the test set will be referred to as instances A and B. Multiple models with different random initializations were trained and selected based on the validation set, which was also used to perform hyperparameter tuning, as described in Section 3.5. The hyperparameters selected for tuning, alongside the best configuration found, can be seen in Table 2. The importance of each hyperparameter was assessed by training a random forest on the task of predicting the performance measure based on the values of the hyperparameters, upon which the importance is taken as the Gini importance of each hyperparameter [50]. The hyperparameter importance can be seen in Figure 6, while the coordinate plot of the hyperparameter interconnections is presented in Figure 7. On the test set, the best model could correctly predict, on average, 85.4% of the variables (with a standard deviation of 2.0 p.p.) and achieved an average F1 score of 0.8531 (standard deviation of 0.02). A summary of the best model's performance can be seen in Table 3. We also analyze the output of the model for each variable. Overall, the histogram of the model output in Figure 8(a) indicates that the prediction is usually close to the interval's limits, i.e., it is approximately binary. Furthermore, we evaluate the accuracy for each of the 1746 variables on all candidate solutions of instances A and B. These accuracies can be seen in Figure 8(b). The model could correctly predict the optimality of most variables in both instances of the test set. Furthermore, we evaluate the out-of-distribution generalization capacity of the SatGNN model in the optimality classification task by feeding it with larger instances of the problem. More specifically, we generate two new instances: instance C has the same time horizon but 11 jobs instead of 9; instance D has the same number of jobs but requires scheduling over 120 time steps instead of 97. Then, we generate new random candidate solutions in the same way as previously described. The performance of SatGNN on the larger instances can be seen in Table 3. Not only was the model able to handle the new instances without any modifications, but it also achieved an average accuracy of 80.8% and an average F1 score of 0.8037 on instance C (more jobs), and an average accuracy of 73.6% and an average F1 score of 0.7405 on instance D (more time steps). More details on the output of the SatGNN model for the larger instances can be seen in Figure 9.

### Experiment 3 - Early Fixing

Using the outcomes of the previous experiment, we tackle the task of early fixing variables of the ONTS problem.
For this, we point out that it is possible to recover the optimal solution to the problem given any candidate solution \(\hat{x}\) along with its associated label \(y\). Therefore, we get a predicted optimal solution by using the predicted optimality of the candidate solution (the output of the SatGNN model from experiment 2). Additionally, as the predicted optimal solution can be generated from any random candidate solution, we use a set of random candidates and average the predicted optimals. Figure 10 illustrates how to use SatGNN for early fixing. Therefore, given a set \(\hat{X}\) of random candidate solutions for a given problem instance, we compute \[\hat{x}^{*}=\frac{1}{|\hat{X}|}\sum_{\hat{x}\in\hat{X}}\hat{x}\odot\hat{y}(\hat{x})+(1-\hat{x})\odot(1-\hat{y}(\hat{x})),\] where \(\odot\) is the element-wise product and \(\hat{y}(\hat{x})\) is the predicted optimality of candidate solution \(\hat{x}\in\hat{X}\) generated using the model from the previous experiment. Furthermore, we can say that the closer a given predicted optimal variable \(\hat{x}_{i}^{*}\) is to 1 (resp. 0), the more certain the model is that the variable should be fixed at 1 (resp. 0). Therefore, we use the model's certainty to select the variables to be fixed; that is, if we want to fix 50 binary variables, we will choose the 50 variables that the model is most certain of.

\begin{table} \begin{tabular}{l l l} \hline Hyperparameter & Value range & Final value \\ \hline \# of MP operations & 1..3 & 3 \\ Convolutions’ type & [Regular, SAGE] & Regular\(\rightarrow\) SAGE\(\rightarrow\) Regular \\ SAGE aggregation & [lstm, pool] & pool \\ SAGE feature drop & 0.0..0.5 & 0.09 \\ \# of hidden features & 2..20 & 19 \\ Share weights between \(MP_{\text{var}}\) and \(MP_{\text{con}}\) & yes/no & no \\ \# layers & 1..3 & 1 \\ \# random samples per instance & \(2^{6}..2^{10}\) & \(2^{9}\) \\ batch size & \(2^{2}..2^{7}\) & \(2^{2}\) \\ \hline \end{tabular} \end{table} Table 2: Hyperparameters of SatGNN selected for hyperparameter optimization using Optuna, along with the values of the best model found. For “Convolutions’ type”, _Regular_ represents the convolution using a linear operator to combine the neighbor features and a ReLU activation function, as in Eq. (7); _SAGE_ represents the convolution operator used in SAGE, as described in Section 3.1.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{Accuracy} \\ Task & Instance A & Instance B & Instance C & Instance D \\ \hline Optimality Classification & 83.4\% & 87.3\% & 80.8\% & 73.6\% \\ Early Fixing & 82.5\% & 87.9\% & 66.9\% & 70.9\% \\ \hline \hline \end{tabular} \end{table} Table 3: SatGNN’s performance on new instances (unseen during training) of the ONTS problem. For “Optimality Classification”, average accuracy over all samples is reported.

Figure 6: Importance of SatGNN hyperparameters tuned for the optimality classification. The score is calculated from the Gini importance of a random forest fitted on predicting the model’s performance from the hyperparameters’ values. The values were normalized to sum to 1.

Figure 8: (a) Histogram of the outputs given the samples from the test set. Each dimension (variable node) of the output is treated as a different occurrence; (b) Accuracy for each binary variable of the problem over all candidate solutions on the test set. Orange and blue (with transparency) were used to distinguish between the two instances in the test set.
Over 99% accuracy was observed for the same 1208 variables on both instances A and B (darker rows), while less than 1% accuracy was observed on both instances for 52 variables (white rows), out of 1746 variables in total.

Figure 7: Hyperparameter interconnections during the trials.

We evaluate the accuracy of SatGNN for early fixing as a function of the number of fixed variables on the two instances of the test set. These results can be seen in Figure 11. As expected, the accuracy decreases as we include variables for which the model is less certain, down to 82.5% and 87.9% accuracy on the two instances, which is the accuracy of the predicted optimal solution over all 1746 variables. A summary of the model's performance when fixing all variables can be seen in Table 3. In light of these results, we evaluate how early fixing using SatGNN impacts the optimization performance, both in terms of runtime and maximum objective value. Specifically, we solve the two instances of the ONTS problem on the test set using Gurobi under an increasing number of fixed variables. The results can be seen in Figure 12. As expected, correctly fixing the variables positively impacts the optimization, while wrongly fixing variables may decrease the runtime but often impacts the objective negatively. However, we see that, at the limit, a substantial runtime reduction is achieved (90% and 28% for instances 6 and 9, resp.) with a negligible objective cost (1.3% and 0.3%, resp.). Beyond that, fixing more than 500 variables for instance 6 and more than 200 for instance 9 rendered the problems infeasible within a 5-minute budget.

Figure 10: Early fixing with SatGNN.

Figure 9: SatGNN optimality classification output for larger instances. Over 99% accuracy was observed for 1556 and 1230 variables (solid rows), and less than 1% accuracy was observed for 227 and 178 variables (white rows) for instances C and D, respectively. Instance C has 2134 binary variables, while instance D has 2160.
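The averaging and certainty-based selection described above can be condensed into a short routine. The sketch below is ours (tensor shapes and names are illustrative assumptions), not the authors' implementation.

```
# Hedged sketch: SatGNN-based early fixing from random candidate solutions.
import torch

def early_fix(candidates, optimality, n_fix):
    # candidates: (N, n) binary candidate solutions x_hat
    # optimality: (N, n) predicted probabilities y_hat(x_hat) from the model
    agree = candidates * optimality + (1 - candidates) * (1 - optimality)
    x_star_hat = agree.mean(dim=0)              # averaged predicted optimal solution
    certainty = (x_star_hat - 0.5).abs()        # distance from 0.5 = model certainty
    idx = torch.topk(certainty, n_fix).indices  # the n_fix most certain variables
    values = (x_star_hat[idx] > 0.5).float()    # fix each one to the nearer bound
    return idx, values
```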
By leveraging on the optimality classification results, we used the SatGNN to generate candidate solutions to the binary variables of the problem. Through these candidate solutions, we were able to fix the variables, reducing the size of the problem and, consequently, the time the solver takes to converge. This approach for early fixing outperformed using the Gurobi solver alone, even when no tolerance is permitted for the optimal. Furthermore, generalization to larger instances is still a challenge for early fixing, even though promising results were observed. In summary, this work has shown how graph neural networks can be used to improve nanosatellite task scheduling. As we propose a supervised learning method, our approach is still limited to the availability of labeled data, which can be costly to obtain. Nonetheless, our results suggest that using graph neural networks for combinatorial optimization problems holds excellent promise and opens up new avenues for future research. Figure 11: Early fixing accuracy for the two instances of the ONTS problem in the test set. Figure 12: Optimization results of the two ONTS instances with SatGNN-based early fixing. The objective is plotted with respect to the maximum of the original problem (without any fixed variables). Accuracy is measured with respect to the optimal value of the fixed variables. ## Acknowledgments The authors acknowledge support from CNPq (Conselho Nacional de Desenvolvimento Cientifico e Tecnologico) under grant number 150281/2022-6, 404576/2021-4 as well as FAPESC under grant number 2021TR001851.
2302.04126
Predicting the performance of hybrid ventilation in buildings using a multivariate attention-based biLSTM Encoder-Decoder neural network
Hybrid ventilation is an energy-efficient solution to provide fresh air for most climates, given that it has a reliable control system. To operate such systems optimally, a high-fidelity control-oriented model is required. It should enable near-real-time forecasting of the indoor air temperature based on operational conditions such as window opening and HVAC operating schedules. However, physics-based control-oriented models (i.e., white-box models) are labour-intensive and computationally expensive. Alternatively, black-box models based on artificial neural networks can be trained to be good estimators of building dynamics. This paper investigates the capabilities of a deep neural network (DNN), namely a multivariate multi-head attention-based long short-term memory (LSTM) encoder-decoder neural network, to predict indoor air temperature when windows are opened or closed. Training and test data are generated from a detailed multi-zone office building model (EnergyPlus). Pseudo-random signals are used for the indoor air temperature setpoints and window opening instances. The results indicate that the DNN is able to accurately predict the indoor air temperature of five zones whenever windows are opened or closed. The prediction error plateaus after the 24th step-ahead prediction (6 hr ahead).
Gaurav Chaudhary, Hicham Johra, Laurent Georges, Bjørn Austbø
2023-02-08T15:24:17Z
http://arxiv.org/abs/2302.04126v2
Predicting the performance of hybrid ventilation in buildings using a multivariate attention-based biLSTM Encoder-Decoder

###### Abstract

Hybrid ventilation is an energy-efficient solution to provide fresh air for most climates, given that it has a reliable control system. To operate such systems optimally, a high-fidelity control-oriented model is required. It should enable near-real-time forecasting of the indoor air temperature based on operational conditions such as window opening and HVAC operating schedules. However, physics-based control-oriented models (i.e., white-box models) are labour-intensive and computationally expensive. Alternatively, black-box models based on artificial neural networks can be trained to be good estimators of building dynamics. This paper investigates the capabilities of a deep neural network (DNN), namely a multivariate multi-head attention-based long short-term memory (LSTM) encoder-decoder neural network, to predict indoor air temperature when windows are opened or closed. Training and test data are generated from a detailed multi-zone office building model (EnergyPlus). Pseudo-random signals are used for the indoor air temperature setpoints and window opening instances. The results indicate that the DNN is able to accurately predict the indoor air temperature of five zones whenever windows are opened or closed. The prediction error plateaus after the 24th step-ahead prediction (6 hr ahead).

## 1 Introduction

Buildings are responsible for over 40% of global energy use and 36% of greenhouse gas emissions, with heating, ventilation, and air conditioning (HVAC) operation accounting for almost half of it [1]. Reducing the overall building energy demand and footprint has thus become an urgent task to meet the current sustainability goals and tackle current energy crises. To that end, natural ventilation is seen as one of the most effective passive energy-saving measures for buildings [2]. HVAC systems in buildings coupled with natural ventilation (hybrid ventilation) can theoretically provide the most energy-efficient system for any climate, given that they have a fast and reliable control system. For such systems, a high-fidelity control-oriented prediction model is required. It should be able to forecast building dynamics in near-real time for given operational conditions such as window opening and HVAC operational schedules. This is, however, challenging due to the time-varying building dynamics, disturbances from occupants, lighting and plug-in loads, and external factors like outdoor weather. Developing efficient prediction models accounting for these building dynamics has been a bottleneck to implementing predictive control strategies [3]. Buildings with hybrid ventilation are particularly challenging for dynamic modelling. When natural ventilation occurs, e.g., when opening a window, the indoor temperature variation depends on many parameters, such as the indoor-outdoor temperature difference, the window opening configuration and effective opening area, the HVAC mode, and the internal loads. Such building dynamics can be fully modelled using well-established laws of physics (i.e., the white box approach) [3], or these laws can provide the model structure while measurement data is used to calibrate the model parameters (i.e., the grey box approach) [4]. However, a typical white box modelling tool like EnergyPlus or IDA-ICE requires expert effort to define, set and adjust the many model parameters.
A grey box model, such as a resistor-capacitor (RC) network, requires a robust estimation of its parameters. Reinforced by the massive amount of data generated by the deployment of metering and sensing technologies in buildings, the data-driven black box approach for building dynamics prediction has increased in popularity in recent years [5]. Black box models can have the advantage of low development costs and scalability. However, such models usually require a large amount of training data to perform adequately. This can be solved by using transfer learning methods that couple data generated from white box modelling tools [6] with system identification techniques. A black box model pre-trained on various operating conditions and scenarios simulated with the white box model of a generic building could, in theory, be suitable for real building applications after only tuning the former with a very small dataset [6, 7, 8]. Following that principle, it is hypothesized that deep neural networks (DNNs) can be employed as accurate black box models for the prediction of the indoor environment. DNNs are suitable for complex building dynamics as they can handle non-linear multivariable modelling situations. It was shown that a neural network with enough hidden layers can approximate arbitrary continuous functions defined on a closed and bounded set [9]. DNNs based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs) like long short-term memory (LSTM) [10] and gated recurrent units (GRUs) [11] have been widely used in applications like speech recognition [12], natural language processing [13] and computer vision [14]. Like building energy models, these applications use data in the form of time series. In RNNs such as LSTMs or GRUs, each input corresponds to an output for the same time step. However, in many real cases, there is a need to predict an output sequence given an input sequence of a different length, without correspondence between each input and output. This situation is called sequence-to-sequence mapping (also known as Encoder-Decoder modelling) and lies behind commonly used applications like machine translation, question answering, chat-bots, and text summarization. The most commonly used neural network unit in Encoder-Decoder models is the LSTM unit. However, LSTMs seem to suffer from short-term memory over long time series sequences. Advancements like the attention mechanism [15] and the transformer [16] used in conjunction with RNNs have improved their prediction performance [17]. The attention mechanism improves the model's accuracy by giving higher weights to relevant parts of the sequence and lower weights to irrelevant parts [18]. The multi-head attention (MHA) module introduced in the transformer model [16] runs through the attention mechanism several times in parallel, attending to different parts of the sequence differently. Compared to LSTMs, MHA retains direct connections to all previous timestamps in the sequence, allowing information to propagate over much longer sequences. To summarize, RNNs are excellent at capturing the local temporal characteristics of a sequence, while the transformer model can learn long-term dynamics. Standard DNNs are deterministic in nature, yet their predictions are, in theory, always subject to model uncertainty and data uncertainty. Model uncertainty accounts for uncertainty in the tunable parameters of a model, whereas data uncertainty accounts for noisy and out-of-distribution data.
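One common way to expose such uncertainty with a deterministic network, which the model described below adopts, is to predict several quantiles jointly and train them with a pinball (quantile) loss. A minimal sketch follows; the quantile set matches the percentiles named later in this paper, but the loss implementation itself is an illustrative assumption, not the authors' code.

```python
import tensorflow as tf

QUANTILES = [0.5, 0.9, 0.95, 0.99]  # percentiles predicted at each time step

def pinball_loss(y_true, y_pred):
    """Quantile regression loss; y_pred holds one output channel per quantile."""
    losses = []
    for i, q in enumerate(QUANTILES):
        err = y_true[..., 0] - y_pred[..., i]
        # Penalize under-prediction with weight q and over-prediction with 1 - q.
        losses.append(tf.reduce_mean(tf.maximum(q * err, (q - 1.0) * err)))
    return tf.add_n(losses)
```

Minimizing this loss drives each output channel towards the corresponding conditional quantile, from which prediction intervals can be assembled.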
Probabilistic DNNs account for such uncertainty in the final results by producing prediction intervals. In line with other time series forecasting models [19], the DNN model developed for this study also generates prediction intervals on top of point forecasts. This is done by the simultaneous prediction of various percentiles (50\({}^{\text{th}}\), 90\({}^{\text{th}}\), 95\({}^{\text{th}}\) and 99\({}^{\text{th}}\)) at each time step using quantile regression [20]. This paper investigates if and how a DNN can be trained to predict the effect of the window opening on the indoor air temperature dynamics in a conditioned building. The training data for this DNN was generated using the white box building energy simulation tool EnergyPlus. ## 2 Model architecture The encoder-decoder model developed for this study employs both RNNs, specifically bi-directional LSTMs (biLSTM) [21], and several components of the transformer model such as Self-MHA, Cross-MHA and Gated Residual Networks (GRNs) [22]. A biLSTM is a sequence processing model that consists of two LSTMs: one taking the input in a forward direction (past to future) and the other in a backwards direction (future to past). This effectively improves the contextual information of the data dynamics. The Self-MHA components in both the Input Encoder and the Input Decoder are used to determine long-term relationships within the input data, producing attention scores for the biLSTMs. The Cross-MHA takes the representations of both the encoder input sequence and the decoder input sequence coming from the biLSTMs and learns relationships over longer periods, producing an attention score for the Output Decoder biLSTM. This improves the context for short-term dependencies. A residual connection [23] in the form of a GRN [22] is applied over each module by first applying component gating layers based on Gated Linear Units (GLUs) [24], followed by layer normalization [25]. GLUs allow the model to control the extent to which the residual connection mechanism contributes to the original input. In this study, the model takes the past seven days of data as input and predicts 24 hours into the future. The data has a temporal granularity of 15 minutes, so the model takes 672 data points from the past and predicts 96 data points. The model takes two sets of inputs: "_Known past inputs_" for the input encoder and "_Known future inputs_" for the input decoder. "_Known past inputs_" are the past seven days of weather data, time information, and zone-specific information like occupancy, external loads, temperature setpoints in the zones, and actual indoor air temperatures (IATs) of these zones. The input data also includes the opening sensor signals of the windows. "_Known future inputs_" are the future 24 hours of weather forecast, the control variables, which are the heating setpoints of the HVAC systems, and the window opening signals. These inputs are similar to what a zone-level controller can access to condition a zone. Here, the weather forecast is created from the actual weather data, with added Gaussian noise of zero mean and a standard deviation of 0.01 \({}^{\circ}\)C. The model's output is the future 24 hours of IATs for all zones. A schematic of the model structure is shown in Figure 1. ## 3 Study case description and data generation Training data was generated from a small-size office building modelled with EnergyPlus v22.1.0. The building model uses a generic 5-zone EnergyPlus example file geometry.
Figure 1: Encoder-decoder model for predicting building dynamics using both recurrent networks and components of the transformer model.

The building is a single-floor rectangle of dimensions 30 m x 15 m, with a ceiling height of 2.4 m. It has four exterior zones and one interior zone (see Figure 2). There are windows on all four facades, and glass doors on the south-west and north-east facades. Overhangs shade the south-facing window and door. There is no internal opening between the zones. The U-values of the internal and external walls are 1.6 W/m\({}^{2}\)K and 2.8 W/m\({}^{2}\)K, respectively. All fenestrations are high-performance windows with a U-value of 0.7 W/m\({}^{2}\)K. To reduce overheating, an automatic window shading control lowers the interior shade when the outside temperature exceeds 23 \({}^{\circ}\)C. The building uses a variable refrigerant flow HVAC system for conditioning the zones, whereas the ventilated air for the building is delivered by a dedicated outdoor air system. The schedules for occupancy and miscellaneous electric loads are generated by an agent-based stochastic occupancy simulator [26]. The lighting schedule is based on standard working hours from 07:00 to 19:00. To "excite" the DNN for all possible changes in heating setpoints during training (i.e., create sufficient variability in key input variables of the training dataset), a multi-pseudo random sequence (m-PRS) input signal is applied to the temperature heating setpoints of the five zones. During occupied hours, the m-PRS signals change randomly between 18 and 22 \({}^{\circ}\)C (with 0.5 \({}^{\circ}\)C intervals) and stay at that value for a random amount of time. The cooling setpoint is 5 \({}^{\circ}\)C above the heating setpoint. For non-occupied hours, the heating and cooling setpoints drop to setbacks of 15 and 30 \({}^{\circ}\)C, respectively. This signal excitation method is a system identification technique aiming to excite one input with a signal that is not correlated with the other inputs; the DNN thus learns the underlying dynamics of the thermal setpoints [27]. The windows' opening/closing is modelled with the _ZoneVentilation:WindandStackOpenArea_ EnergyPlus object. It allows defining limits on the outdoor conditions (temperature, wind speed) that determine whether the window is open or closed. The equation used to calculate the wind-driven natural ventilation rate is based on the _"Wind and Stack with Open Area"_ model. Pseudo-random binary sequence (PRBS) signals are used to actuate the opening and closing of the windows to excite the DNN. In the PRBS signal, a value of 1 indicates that a window is opened for the next 30 minutes. Figure 3 shows the indoor air temperature of Space 1-1 along with the m-PRS input signal for heating and cooling setpoints, the outdoor air temperature and the PRBS window opening/closing signal. The effect of opening windows can be observed as a sharp decline followed by a gradual rise of the IAT.

Figure 2: 5-zone office building simulated in EnergyPlus for generating the dataset.

To predict the IAT, essential features that are commonly available in an office building management system are selected as inputs (see Table 1). Zone-specific features like equipment loads and occupancy, which are not always metered directly, can be deduced from CO\({}_{2}\) concentration monitoring [28]. The hour of the day, the day of the week, the month of the year, and the holiday schedule were also used as inputs, encoded as sine/cosine pairs where periodic (see Table 1).
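The cyclical sine/cosine encoding of these time features can be sketched as follows; the helper below is an illustrative assumption rather than the authors' preprocessing code.

```python
import numpy as np

def cyclical_encode(value, period):
    """Map a periodic quantity (e.g., hour of day with period 24) to a
    (sin, cos) pair so that, e.g., 23:45 and 00:00 end up close together."""
    angle = 2.0 * np.pi * np.asarray(value) / period
    return np.sin(angle), np.cos(angle)

hour_sin, hour_cos = cyclical_encode(np.arange(24), 24)  # hour of the day
dow_sin, dow_cos = cyclical_encode(np.arange(7), 7)      # day of the week
```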
The hour input lets the model distinguish between the temperature profiles of occupied and unoccupied periods and learn the daily dynamics of the building. The day and holiday inputs let it distinguish between business days and weekends or holidays. ## 4 Training procedure The 12 months of data are split into training (60 %), validation (20 %) and testing (20 %). Deep learning models perform better when numerical input variables are scaled to a standard range. For this study, all the input variables in the dataset were scaled to the range [-1.0, 1.0] using _MinMaxScaler_. The minimum and maximum values used for the scaling are given in the interval columns of Table 1. The DNN is implemented with _Tensorflow 2.11.0_ and the _Keras_ library in _Python 3.10.0_. The hyperparameters of the model and training are listed in Table 2.

**Table 2.** Hyperparameters for the deep neural network

\begin{table} \begin{tabular}{l l l l} \hline \hline \multicolumn{4}{l}{**Input variables (_Known past inputs_)**} \\ \hline \(T^{\rm out}\) & Outside dry-bulb temperature & \({}^{\circ}\)C & [-30.0, 40.0] \\ \(H^{\rm out}\) & Relative humidity of air & \% & [0, 100] \\ \(W^{\rm out}\) & Wind speed & m/s & [0, 25] \\ \(I_{Norm}^{\rm out}\) & Direct normal radiation & W/m\({}^{2}\) & [0, 1300.0] \\ \(I_{Hor}^{\rm out}\) & Diffuse radiation on horizontal surface & W/m\({}^{2}\) & [0, 1300.0] \\ \(h\) & Sine and cosine of the hour of the day & - & [-1.0, 1.0] \\ \(d\) & Sine and cosine of the day of the week & - & [-1.0, 1.0] \\ \(m\) & Sine and cosine of the month of the year & - & [-1.0, 1.0] \\ \(hol\) & Holiday & Boolean & [0, 1] \\ \(E_{i}\) & Equipment load of zone \(i\) (1-5) & W & [0.0, 1000.0] \\ \(Occu_{i}\) & Occupancy in zone \(i\) (1-5) & - & [0, 30] \\ \(WS_{i}\) & Window opening signal of zone window \(i\) (1-4) & Boolean & [0, 1] \\ \(SP_{i}\) & Heating setpoint of zone \(i\) (1-5) & \({}^{\circ}\)C & [15.0, 30.0] \\ \(T_{i}^{\rm in}\) & Indoor temperature of zone \(i\) (1-5) & \({}^{\circ}\)C & [10.0, 40.0] \\ \hline \multicolumn{4}{l}{**Input variables (_Known future inputs_)**} \\ \hline \multicolumn{4}{l}{Forecasted weather (\(T^{\rm out}\), \(H^{\rm out}\), \(W^{\rm out}\), \(I_{Norm}^{\rm out}\), \(I_{Hor}^{\rm out}\)) and time features (\(h\), \(d\), \(m\), \(hol\)), with the same units and intervals as above} \\ \(WS_{i}\) & Window opening signal of zone window \(i\) (1-4) & Boolean & [0, 1] \\ \(SP_{i}\) & Heating setpoint of zone \(i\) (1-5) & \({}^{\circ}\)C & [15.0, 30.0] \\ \hline \multicolumn{4}{l}{**Output variables (_Unknown future outputs_)**} \\ \hline \(T_{i}^{\rm in}\) & Indoor temperature of zone \(i\) (1-5) & \({}^{\circ}\)C & [10.0, 40.0] \\ \hline \hline \end{tabular} \end{table} Table 1: Multivariate input-output variables used for the study.

Figure 3: The indoor air temperature of Space 1-1 along with the m-PRS input signal for heating and cooling setpoints, the outdoor air temperature and the window opening/closing signal.
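The excitation signals shown in Figure 3 can be generated along the following lines; hold times, the occupancy window, and the random seed are illustrative assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps = 4 * 24 * 7  # one week at 15-minute resolution

# m-PRS heating setpoint: a random value on an 18-22 degC grid (0.5 degC
# steps), held for a random duration during occupied hours; 15 degC
# setback otherwise. The cooling setpoint is this value plus 5 degC.
setpoint = np.full(n_steps, 15.0)
t = 0
while t < n_steps:
    hold = int(rng.integers(4, 17))   # hold for 1-4 hours
    hour = (t % 96) // 4              # hour of day at step t
    if 7 <= hour < 19:                # occupied hours
        setpoint[t:t + hold] = rng.choice(np.arange(18.0, 22.5, 0.5))
    t += hold

# PRBS window signal: a 1 opens the window for the next 30 minutes (2 steps).
window_open = rng.integers(0, 2, n_steps // 2).repeat(2)
```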
The coefficient of variation of the root mean squared error (CVRMSE) is used as the accuracy evaluation metric to compare the predicted IATs with the actual ones. ## 5 Results For testing, 96-step-ahead (i.e., 24 hr ahead) predictions were made for 5112 instances. Figure 4 shows 3 selected instances for two of the zones (SPACE1-1 and SPACE2-1). Each subplot shows the actual IAT, the heating setpoint, the window opening signal (presented as a pink star marker) and the probabilistic forecast of the IAT for that instance. Each forecast has 90%, 95% and 99% confidence intervals presented in decreasing opacities of green, and the 50th quantile in red. The instances of prediction are selected to have a mix of both bad and good IAT forecasts, window opening signals both few and many steps ahead, and window opening signals with low and high influence on the IAT. While Figure 4 shows the instances of prediction for the whole 96 steps, Figure 5 focuses on the 24th step (six hours) ahead for all instances of the same two zones. The results for the other zones can be seen in the supplementary material.

Figure 4: The real vs predicted indoor air temperature of two zones at various instances of prediction. Extended high-resolution results: [https://github.com/gaurav306/NSB23-Predicting-the-performance-of-hybrid-ventilation-in-buildings-using-a-multivariate-attention-/blob/main/Figure-4.png](https://github.com/gaurav306/NSB23-Predicting-the-performance-of-hybrid-ventilation-in-buildings-using-a-multivariate-attention-/blob/main/Figure-4.png)

These qualitative results show that a DNN can predict the IAT well given a random heating setpoint signal, including the sharp decrease and gradual rise of temperature when a window is opened. In some instances where the error is high, like Instance:1300 of SPACE1-1 and Instance:2550 of SPACE2-1, the prediction is good when the control signal is closer to the t=0 step. As observed in Figure 4, window opening severely affected the IAT of SPACE2-1. This may be due to the wind predominantly flowing from southeast to northwest during the testing period, adding more randomness to the indoor temperature dynamics. This could potentially be solved by adding wind direction to the input data. The quantitative error in the predictions is given for all zones in Figure 6. It shows how the error evolves with the number of steps into the future across all instances. The CVRMSE (%) shown in the plot is the error between the actual IAT and the 50th quantile of the predicted IAT. The error for all the zones plateaus after the 24th step ahead, which can be a good sign of prediction stability. However, the error of all zones is higher at the initial steps, which can be due to sudden inaccuracies in the forecasted weather used as input data.

Figure 5: The real indoor air temperature vs the 24th-step prediction made at all instances of prediction for SPACE1-1 and SPACE2-1. Some of the bad predictions when the window is opened are shown in various zoomed subplots. The high-resolution version of this figure, along with wider figures for the 1st, 24th, 48th, 72nd and 96th step predictions over time, is available at: [https://github.com/gaurav306/NSB23-Predicting-the-performance-of-hybrid-ventilation-in-buildings-using-a-multivariate-attention-/blob/main/Figure-5.pdf](https://github.com/gaurav306/NSB23-Predicting-the-performance-of-hybrid-ventilation-in-buildings-using-a-multivariate-attention-/blob/main/Figure-5.pdf)
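The CVRMSE used throughout this section follows the standard definition, the RMSE of the predictions normalized by the mean of the measured values, in percent; a minimal sketch:

```python
import numpy as np

def cvrmse(y_true, y_pred):
    """Coefficient of variation of the RMSE, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / np.mean(y_true)
```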
## 6 Discussion and conclusions The DNN model presented in this paper is fairly advanced in comparison to other time series prediction models and sequence-to-sequence neural networks commonly found in the literature on building energy simulation. Other statistical time series models, such as ARX and ARMAX, are linear and time-invariant in nature and perform poorly when presented with nonlinearities and sudden uncertainties in the system. Through tests, it was seen that sequence-to-sequence neural networks using RNNs in their basic form, although nonlinear in nature, are not able to capture varying building dynamics. Neither of the above-mentioned model types can take inputs such as heating setpoints and window opening signals separately. The complexity of the final model structure presented in the current article is the result of many model iterations developed after analyzing the limitations of other models, and of the aim to improve control-oriented models for building applications. Using transformer model components enables capturing long-term building dynamics with high accuracy. This paper indicates that a deep learning-based neural network can be used as an estimator of the building IAT given a heating setpoint and a control signal for the window opening. It was observed that the prediction model can predict the incoming drop in IAT when a window is expected to be open. This can be reversed: window opening can be actuated based on the model's predictions to accurately regulate indoor thermal comfort with minimum energy use. The training of this DNN can be extended to predict other indoor comfort criteria-related features like the relative humidity or the CO\({}_{2}\) concentration, as well as energy demand. Such models can also be employed in model predictive control [29] or in a reinforcement learning architecture where all controllable building system parameters are optimized to maximize indoor comfort and minimize costs based on a penalty signal (e.g., emissions, energy use). This paper also showed that the transfer learning approach can be a very effective method to train DNN models for various building energy and environment applications. The prediction model was tested on boundary conditions, occupancy schedules, heating setpoints and window opening signals that were not seen during the training phase. This transfer learning approach to system identification can be extended to cases where merely a simple representative energy model of a building is available. ## Acknowledgements The authors acknowledge the support from the strategic research program ENERSENSE at the Norwegian University of Science and Technology (NTNU).
2310.12555
Probing Three-Dimensional Magnetic Fields: II -- An Interpretable Convolutional Neural Network
Observing 3D magnetic fields, including orientation and strength, within the interstellar medium is vital but notoriously difficult. However, recent advances in our understanding of anisotropic magnetohydrodynamic (MHD) turbulence demonstrate that MHD turbulence and 3D magnetic fields leave their imprints on the intensity features of spectroscopic observations. Leveraging these theoretical frameworks, we propose a novel Convolutional Neural Network (CNN) model to extract this embedded information, enabling the probe of 3D magnetic fields. This model examines not only the plane-of-the-sky magnetic field orientation ($\phi$), but also the magnetic field's inclination angle ($\gamma$) relative to the line-of-sight, and the total magnetization level (M$_A^{-1}$) of the cloud. We train the model using synthetic emission lines of $^{13}$CO (J = 1 - 0) and C$^{18}$O (J = 1 - 0), generated from 3D MHD simulations that span conditions from sub-Alfv\'enic to super-Alfv\'enic molecular clouds. Our tests confirm that the CNN model effectively reconstructs the 3D magnetic field topology and magnetization. The median uncertainties are under $5^\circ$ for both $\phi$ and $\gamma$, and less than 0.2 for M$_A$ in sub-Alfv\'enic conditions (M$_A\approx0.5$). In super-Alfv\'enic scenarios (M$_A\approx2.0$), they are under $15^\circ$ for $\phi$ and $\gamma$, and 1.5 for M$_A$. We applied this trained CNN model to the L1478 molecular cloud. Results show a strong agreement between the CNN-predicted magnetic field orientation and that derived from Planck 353 GHz polarization data. The CNN approach enabled us to construct the 3D magnetic field map for L1478, revealing a global inclination angle of $\approx76^\circ$ and a global M$_A$ of $\approx1.07$.
Yue Hu, A. Lazarian, Yan Wu, Chengcheng Fu
2023-10-19T08:03:38Z
http://arxiv.org/abs/2310.12555v2
# Probing Three-Dimensional Magnetic Fields: II - An Interpretable Convolutional Neural Network ###### Abstract Observing 3D magnetic fields, including orientation and strength, within the interstellar medium is vital but notoriously difficult. However, recent advances in our understanding of anisotropic magnetohydrodynamic (MHD) turbulence demonstrate that MHD turbulence and 3D magnetic fields leave their imprints on the intensity features of spectroscopic observations. Leveraging these theoretical frameworks, we propose a novel Convolutional Neural Network (CNN) model to extract this embedded information, enabling the probing of 3D magnetic fields. This model examines the plane-of-the-sky magnetic field orientation (\(\phi\)), the magnetic field's inclination angle (\(\gamma\)) relative to the line-of-sight, and the total magnetization level (M\({}_{A}^{-1}\)) of the cloud. We train the model using synthetic emission lines of \({}^{13}\)CO (J = 1 - 0) and C\({}^{18}\)O (J = 1 - 0), generated from 3D MHD simulations that span conditions from sub-Alfvenic to super-Alfvenic molecular clouds. Our tests confirm that the CNN model effectively reconstructs the 3D magnetic field topology and magnetization. The median uncertainties are under 5\({}^{\circ}\) for both \(\phi\) and \(\gamma\), and less than 0.2 for M\({}_{A}\) in sub-Alfvenic conditions (M\({}_{A}\approx\) 0.5). In super-Alfvenic scenarios (M\({}_{A}\approx\) 2.0), they are under 15\({}^{\circ}\) for \(\phi\) and \(\gamma\), and 1.5 for M\({}_{A}\). We applied this trained CNN model to the L1478 molecular cloud. Results show a strong agreement between the CNN-predicted magnetic field orientation and that derived from Planck 353 GHz polarization. The CNN approach enabled us to construct the 3D magnetic field map for L1478, revealing a global inclination angle of \(\approx\) 76\({}^{\circ}\) and a global M\({}_{A}\) of \(\approx\) 1.07. keywords: ISM:general--ISM:structure--ISM:magnetic field--turbulence ## 1 Introduction In the vast interstellar medium (ISM), magnetic fields are pervasive powers that significantly influence various astrophysical phenomena. These fields serve as invisible balancers against gravitational forces within the ISM, intricately maintaining its equilibrium (Wurster & Li, 2018; Abbate et al., 2020). They are instrumental in directing gas flows towards galactic nuclei, playing a crucial role in their fueling and the dynamic processes unfolding therein (Kim & Stone, 2012; Roche et al., 2018; Busquet, 2020; Whittingham et al., 2021; Hu et al., 2022). Magnetic fields also govern the trajectories of cosmic rays, affecting the energy distribution and overall dynamics of the ISM (Fermi, 1949; Jokipii, 1966; Yan & Lazarian, 2002, 2004; Ferrand & Marcowith, 2010; Xu & Yan, 2013; Xu & Lazarian, 2020; Hopkins et al., 2021; Hu et al., 2022). Furthermore, they are deeply involved in the star formation processes within molecular clouds, influencing both the rate and nature of star births (Mestel, 1965; Mac Low & Klessen, 2004; McKee & Ostriker, 2007; Lazarian et al., 2012; Federrath & Klessen, 2012; Hu et al., 2021). Despite their pivotal roles, our understanding of these magnetic fields remains far from complete. The primary challenge lies in the formidable task of probing the magnetic field in full three-dimensional (3D) space.
Current approaches, such as polarized dust emission (Lazarian, 2007; Andersson et al., 2015; Planck Collaboration et al., 2015, 2020; Fissel et al., 2016; Li et al., 2021; Liu et al., 2023) and polarized synchrotron emission (Xiao et al., 2008; Planck Collaboration et al., 2016; Guan et al., 2021), provide 2D measurements of the plane-of-sky (POS) magnetic field direction, while Zeeman splitting (Crutcher, 2004, 2012) and Faraday rotation (Haverkorn, 2007; Taylor et al., 2009; Oppermann et al., 2012; Xu & Zhang, 2016; Tahani et al., 2019) provide the line-of-sight (LOS) component of the magnetic field. While yielding valuable insights, these techniques probe distinct and typically different regions of the multiphase ISM. Thus, despite their individual strengths, merging these insights into a coherent, full 3D magnetic field vector, which includes both the 3D orientation and the total strength, presents a non-trivial task. A significant advance in probing the 3D magnetic fields in molecular clouds has been made by leveraging polarized dust emission, drawing on the depolarization effect induced by different magnetic field orientations (see Chen et al., 2019) and by accounting for the properties of turbulent magnetic fields (Hu & Lazarian 2023a,c). As a separate development, Tahani et al. (2019, 2022) have succeeded in employing the synergy of Faraday rotation and dust polarization to infer a helical 3D magnetic field topology across the Orion A, Orion B, Perseus, and California clouds. Subsequently, Hu et al. (2021) and Hu et al. (2021c) proposed the use of the anisotropic properties of magnetohydrodynamic (MHD) turbulence, inherited by young stellar objects (Ha et al., 2022) and spectroscopic lines (Lazarian & Pogosyan, 2000; Kandel et al., 2016; Hu et al., 2023), to obtain the LOS and POS components of the magnetic field's orientation and the total magnetization simultaneously. Importantly, the theory underlying the approach of Hu et al. (2021c) demonstrates that spectroscopic observations embody the anisotropy of MHD turbulence (Lazarian & Pogosyan, 2000; Kandel et al., 2016; Hu et al., 2023), i.e., turbulent eddies elongate along the 3D direction of the magnetic field (Goldreich & Sridhar, 1995; Lazarian & Vishniac, 1999). The spatial features present in these observations imprint the anisotropy and thus carry detailed information about the magnetic fields. This implies that, given an extensive amount of training data, machine learning algorithms have the potential to capture these features and produce accurate measurements. This strategy has been employed to map the 2D POS magnetic field orientation using velocity channel maps from spectroscopic observations (Xu et al., 2023). The theoretical basis remains the anisotropy of MHD turbulence, a principle previously utilized to trace magnetic fields via velocity gradients (Lazarian & Yuen, 2018; Hu et al., 2018, 2022c; Alina et al., 2022; Liu et al., 2022a; Schmaltz et al., 2023). However, Hu et al. (2021c) made the crucial discovery that the anisotropy in velocity channel maps harbors not only information about the POS magnetic field orientation, but also the total magnetization and the magnetic field's inclination angle with respect to the LOS. This additional information paves the way for constructing the full 3D magnetic field vector from spectroscopic observations.
By leveraging the capabilities of Convolutional Neural Networks (CNNs; LeCun et al., 1998)--a type of deep learning model excelling in image and signal processing--we aspire to develop a novel method that can probe the 3D magnetic field. Earlier CNN studies explored the possibility of distinguishing sub-Alfvenic and super-Alfvenic turbulence (Peek & Burkhart, 2019) and of predicting the POS magnetic field orientation (Xu et al., 2023). Our study, however, targets the simultaneous extraction of the LOS and POS magnetic field orientations and the total magnetization. The foundation of our CNN model is the anisotropy of MHD turbulence exhibited in spectroscopic observations (Lazarian & Pogosyan, 2000; Kandel et al., 2016; Hu et al., 2023), a theoretical underpinning that allows us to interpret the CNN model accurately. In other words, it enables us to discern the specific features that convey information about the magnetic field, the reasons why they are informative, and their underlying physical meanings. The effectiveness of training a CNN is highly dependent on the availability of comprehensive numerical simulations that accurately represent the realistic ISM. In this research, we employ 3D supersonic MHD simulations that portray a range of ISM environments, spanning sub-Alfvenic (i.e., strong magnetic field), trans-Alfvenic, and super-Alfvenic (i.e., weak magnetic field) conditions. We further post-process these simulations by incorporating the radiative transfer effect, which enables us to generate mock emission lines of \({}^{13}\)CO and C\({}^{18}\)O from diffuse molecular clouds. Through this trained CNN model, we present the 3D magnetic field map of the molecular cloud L1478. This paper is organized as follows. In § 2, we briefly review the basic concepts of MHD turbulence anisotropy in spectroscopic observations and their correlation with the 3D magnetic field orientation and total magnetization. In § 3, we give details of the 3D MHD simulations and mock observations used in this work, as well as our CNN model. We use mock observations to train the CNN model and present the results of numerical testing in § 4. We further apply the trained CNN model to predict the 3D magnetic field in the molecular cloud L1478. In § 5, we discuss the uncertainty and prospects of the machine learning approach, as well as implications for various astrophysical problems. We summarize our results in § 6. ## 2 Theoretical consideration ### Anisotropy of MHD turbulence: revealing magnetic field orientation and magnetization The earliest model of MHD turbulence was proposed to be isotropic (Iroshnikov, 1963; Kraichnan, 1965). However, this model underwent subsequent revisions through a series of theoretical and numerical studies, revealing that MHD turbulence exhibits anisotropy under sub-Alfvenic conditions and isotropy at large scales under super-Alfvenic conditions (Montgomery & Turner, 1981; Shebalin et al., 1983; Higdon, 1984; Montgomery & Matthaeus, 1995). A significant advance in this field was the introduction of the "critical balance" condition, i.e., equating the cascading time \((k_{\perp}v_{l})^{-1}\) and the wave period \((k_{\parallel}v_{A})^{-1}\), proposed by Goldreich & Sridhar (1995), hereafter GS95. Here \(k_{\parallel}\) and \(k_{\perp}\) represent the components of the wavevector parallel and perpendicular to the magnetic field, respectively, while \(v_{l}\) denotes the turbulent velocity at scale \(l\), and \(v_{A}=B/\sqrt{4\pi\rho}\) represents the Alfven speed.
Here \(B\) is the magnetic field strength and \(\rho\) is the gas mass density. Taking into account Kolmogorov-type turbulence, i.e. \(v_{l}\propto l^{1/3}\), the GS95 anisotropy scaling can be straightforwardly derived: \[k_{\parallel}\propto k_{\perp}^{2/3}, \tag{1}\] which reveals the anisotropic nature of turbulent eddies, implying that the eddies are elongated along the magnetic fields. However, it should be noted that the considerations of GS95 are based on a global reference frame, where the direction of the wavevectors is defined relative to the mean magnetic field. Scale-dependent anisotropy was later introduced via the study of fast turbulent reconnection by Lazarian & Vishniac 1999 (hereafter LV99), which proposed a local reference frame. This frame is defined relative to the magnetic field passing through an eddy at scale \(l\). According to LV99, the motion of eddies perpendicular to the direction of the local magnetic field adheres to the Kolmogorov law (i.e. \(v_{L,\perp}\propto l_{\perp}^{1/3}\)), since this is the direction in which the magnetic field offers minimal resistance. Applying the "critical balance" condition in the local reference frame, \(v_{L,\perp}l_{\perp}^{-1}\approx v_{A}l_{\parallel}^{-1}\), the scale-dependent anisotropy scaling is then given by: \[l_{\parallel}=L_{\rm inj}(\frac{l_{\perp}}{L_{\rm inj}})^{\frac{2}{3}}{\rm M}_{A}^{-4/3},\ \ \ {\rm M}_{A}\leq 1, \tag{2}\] where \(l_{\perp}\) and \(l_{\parallel}\) represent the perpendicular and parallel scales of eddies with respect to the local magnetic field, respectively. \(L_{\rm inj}\) denotes the turbulence injection scale and \({\rm M}_{A}=v_{\rm inj}/v_{A}\) is the Alfven Mach number; \({\rm M}_{A}^{-1}\) gives the magnetization level of the medium. Eq. 2 provides two critical insights: (1) **turbulent eddies stretch along the local magnetic field (i.e., \(l_{\parallel}\gg l_{\perp}\))**, and (2) **the degree of anisotropy, defined as \(l_{\parallel}/l_{\perp}\), depends on the magnetization \({\rm M}_{A}^{-1}\)**. As illustrated in Fig. 1, this indicates that eddies become increasingly anisotropic in a strongly magnetized medium. For the case where \({\rm M}_{A}\gg 1\), turbulence is essentially isotropic due to the predominance of hydrodynamic turbulence. However, the essence of turbulence lies in the cascading of energy from larger injection scales to smaller ones, which leads to a decrease in turbulent velocity. Eventually, at the transition scale \(l_{a}=L_{\rm inj}{\rm M}_{A}^{-3}\), the strength of the magnetic field becomes comparable to that of the turbulence (i.e., the Alfven Mach number at \(l_{a}\) is unity, see Lazarian 2006), and anisotropy starts to manifest. Furthermore, **(3) changes in \(\mathrm{M}_{A}\) are distinctly reflected in the magnetic field topology.** Within a strongly magnetized medium, the magnetic field lines exhibit minimal variation due to the presence of weaker fluctuations, resulting in more straightened field lines. In contrast, in the context of a weaker magnetic field, which corresponds to a larger value of \(\mathrm{M}_{A}\), fluctuations in the magnetic field direction intensify significantly. This leads to the field lines adopting a more curved configuration (Yuen & Lazarian, 2020). As turbulent eddies extend along the local magnetic field, the topological changes induced by \(\mathrm{M}_{A}\) become evidently imprinted within these eddies.
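As a quick numerical illustration of Eq. 2 (a sketch with arbitrary scale units; the numbers are illustrative, not from the paper):

```python
import numpy as np

def anisotropy_degree(l_perp, L_inj, M_A):
    """Eddy aspect ratio l_par / l_perp implied by Eq. 2 (valid for M_A <= 1)."""
    l_par = L_inj * (l_perp / L_inj) ** (2.0 / 3.0) * M_A ** (-4.0 / 3.0)
    return l_par / l_perp

# Eddies at one tenth of the injection scale:
for M_A in (0.2, 0.5, 1.0):
    print(M_A, round(anisotropy_degree(1.0, 10.0, M_A), 2))
# Smaller M_A (stronger magnetization) -> more elongated eddies.
```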
### Obtaining velocity information from spectroscopic observation The anisotropy outlined in Eq. 2 pertains to turbulent velocity fluctuations, and the turbulent eddy refers to a velocity fluctuation contour. This suggests that anisotropy manifests in turbulent velocity fields. Such an anisotropic velocity field can be accessed through the velocity channel maps of spectroscopic observations, due to the velocity caustics effect (Lazarian & Pogosyan, 2000). We briefly review this concept. In position-position-velocity (PPV) space, the observed intensity distribution of a given spectral line is determined by both the density of emitters and their velocity distribution along the LOS. If coherent velocity shear -- for instance, from galactic rotation -- can be disregarded\({}^{1}\), the LOS velocity component, \(v\), becomes the sum of the turbulent velocity, \(v_{\rm turb}(x,y,z)\), and the residual component attributable to thermal motions. This residual thermal velocity, \(v-v_{\rm turb}(x,y,z)\), has a Maxwellian distribution, \(\phi(v,x,y,z)\). For emissivity proportional to density, this provides the PPV emission density \(\rho_{s}(x,y,v)\) as (Lazarian & Pogosyan, 2004): Footnote 1: The impact of galactic rotation on velocity caustics was explored by Lazarian & Pogosyan (2000), which demonstrated that its effects are insignificant (Hu et al., 2023). \[\rho_{s}(x,y,v) =\kappa\int\rho(x,y,z)\phi(v,x,y,z)dz, \tag{3}\] \[\phi(v,x,y,z) \equiv\frac{1}{\sqrt{2\pi c_{s}^{2}}}\exp[-\frac{[v-v_{\rm turb}(x,y,z)]^{2}}{2c_{s}^{2}}], \tag{4}\] where \(\kappa\) is a constant that relates the number of emitters to the observed intensities. \(c_{s}=\sqrt{\gamma_{a}k_{\mathrm{B}}T/m}\) is the sound speed, with \(m\) being the mass of the atoms or molecules, \(\gamma_{a}\) the adiabatic index, \(k_{\mathrm{B}}\) the Boltzmann constant, and \(T\) the temperature, which can vary from point to point if the emitter is not isothermal. However, the variation of temperature has only a marginal contribution to the distribution of \(\rho_{s}(x,y,v)\) (see Hu et al., 2023). By integrating \(\rho_{s}(x,y,v)\) over a defined velocity range or channel width \(\Delta v\), we obtain a velocity channel: \[p(x,y,v)=\int_{v-\Delta v/2}^{v+\Delta v/2}\rho_{s}(x,y,v^{\prime})dv^{\prime}. \tag{5}\]

Figure 1: Illustration of how the observed intensity structures in a channel map are regulated by \(\mathrm{M}_{A}\) and \(\gamma\). Within all three panels, the intensity structures elongate along the POS magnetic field direction, with \(l_{\parallel}>l_{\perp}\). Structures 1 and 2, depicted in panels (a) and (b), are projected onto the POS with identical inclination angles \(\gamma_{1}=\gamma_{2}\), yet exhibit different magnetizations with \(\mathrm{M}_{A,1}^{-1}>\mathrm{M}_{A,2}^{-1}\). Notably, the observed anisotropy, represented as \(l_{\parallel}/l_{\perp}\), in the weakly magnetized Structure 2 is less pronounced than in Structure 1. Structure 2 is also less straightened because the weak magnetic field has more fluctuations. The curvature of the observed magnetic structures is suggested for magnetization studies by Yuen & Lazarian (2020). Comparatively, Structures 1 and 3, showcased in panels (a) and (c), possess equivalent magnetizations \(\mathrm{M}_{A,1}^{-1}=\mathrm{M}_{A,3}^{-1}\), but divergent inclination angles with \(\gamma_{1}>\gamma_{3}\).

By separating the 3D density into the mean density and zero-mean fluctuations, \(\rho(x,y,z)=\bar{\rho}+\bar{\rho}\,\delta(x,y,z)\), the channel intensity can be represented as the sum of two terms, \(p(x,y,v)=p_{vc}(x,y,v)+p_{dc}(x,y,v)\) (Hu et al., 2023):
The observed anisotropy decreases with smaller \(\gamma\), though the straightness of Structure 3 remains unaffected by this projection. It should be noted that here the projection effect is simplified: the intensity structures are predominantly created by the velocity caustics effect due to MHD turbulence, and the projection effect applies to the velocity field and then to the subsequent intensity structures in the velocity channels. \[p_{vc} \equiv\int_{v-\Delta v/2}^{v+\Delta v/2}\!dv^{\prime}\int\bar{\rho}\phi(v^{\prime},x,y,z)dz, \tag{6}\] \[p_{dc} \equiv\int_{v-\Delta v/2}^{v+\Delta v/2}\!dv^{\prime}\int\bar{\rho}\delta(x,y,z)\phi(v^{\prime},x,y,z)dz. \tag{7}\] The first term, \(p_{vc}\), encompasses the mean intensity in the channel and carries fluctuations exclusively produced by velocity; this is called the velocity caustics effect (Lazarian & Pogosyan, 2000). The second term, \(p_{dc}\), reflects the inhomogeneities in the real 3D density. The relative importance of \(p_{vc}\) and \(p_{dc}\) depends on the channel width (Lazarian & Pogosyan, 2000; Kandel et al., 2016; Hu et al., 2023). The narrower the channel width, the greater the contribution from \(p_{vc}\). When the channel width \(\Delta v\) is less than the velocity dispersion \(\sqrt{\delta(v^{2})}\) of the turbulent eddies under investigation, that is, \(\Delta v<\sqrt{\delta(v^{2})}\), the intensity fluctuation in such a thin channel is predominantly due to velocity fluctuations. Consequently, \(p(x,y,v)\) inherits the anisotropy of MHD turbulence. The intensity structures within \(p(x,y,v)\) elongate along the POS magnetic fields, and their corresponding anisotropy degree, as well as their topology, is correlated with the magnetization and the inclination angle. On the other hand, the dominance of \(p_{vc}\) ensures that the morphology of the intensity fluctuations within \(p(x,y,v)\) is less sensitive to M\({}_{s}\), because the anisotropy in the velocity field of MHD turbulence is not affected by M\({}_{s}\) (Kowal & Lazarian, 2010). It is important to note that Clark et al. (2019) questioned the validity of velocity caustics in the presence of thermal broadening in multiphase HI gas and suggested that the thin velocity channel is dominated by density fluctuations from cold filaments. The nature of the striations in channel maps was tested in Hu et al. (2023) by explicitly evaluating the velocity and density contributions in velocity channels obtained from multi-phase HI simulations and GALFA-HI observations. This study confirmed that the velocity caustics were responsible for the observed striations. ### Anisotropy in thin velocity channels: dependence on the inclination angle of magnetic fields The anisotropy of the observed intensity in a PPV channel, represented by \(p(x,y,v)\), is also affected by the inclination angle \(\gamma\) of the magnetic field with respect to the LOS, due to the projection effect (Hu et al., 2021). For example, as illustrated in Fig. 1, we consider two magnetized structures (or eddies), \(s_{1}\) and \(s_{3}\), both having identical magnetization. Although these unprojected structures have the same anisotropy degree, their projections differ. Specifically, a projection with a smaller inclination angle results in a lower anisotropy degree by reducing the scale parallel to the magnetic fields. When \(\gamma=0\), the parallel scale of the eddy aligns with the LOS, making the anisotropy unobservable on the POS.
However, as previously mentioned, the degree of anisotropy is also controlled by the magnetization. As shown in Fig. 1, although two magnetized structures (\(s_{1}\) and \(s_{2}\)) share identical inclination angles, the projection of the weakly magnetized \(s_{2}\) shows less anisotropy. Importantly, the topology of \(s_{2}\) is further changed, becoming less straightened. This is because a weak magnetic field has more deviations and exhibits significant curvature in terms of its POS orientation (Yuen & Lazarian, 2020). Consequently, the observed structure, as well as the structure's topology, in \(p(x,y,v)\) is governed by both M\({}_{\text{A}}\) and \(\gamma\) (Hu et al., 2021). To summarize succinctly, the thin channel maps \(p(x,y,v)\) from spectroscopic observations capture the anisotropy of MHD turbulence. This leads to the following important implications: 1. The intensity structures in \(p(x,y,v)\) align with the POS magnetic field. 2. The degree of anisotropy observed in these intensity structures is influenced by two distinct factors, M\({}_{\text{A}}\) and \(\gamma\): (a) \(\gamma\) introduces a projection effect that decreases the anisotropy; (b) M\({}_{\text{A}}\) defines the magnetization level of the medium, and a larger M\({}_{\text{A}}\) represents a weaker magnetic field, resulting in less pronounced anisotropy. 3. Additionally, changes in M\({}_{\text{A}}\) alter the topology of the magnetic field lines, as well as the observed intensity structures, manifesting as significant curvature. The interconnection between the magnetic field topology and M\({}_{\text{A}}\) is vital for extracting accurate 3D magnetic fields. The degree of anisotropy responds sensitively to variations in both M\({}_{\text{A}}\) and \(\gamma\), leading to a degeneracy. This degeneracy necessitates the introduction of an additional feature that is sensitive to M\({}_{\text{A}}\) or \(\gamma\) to solve for these parameters, and the topology of the magnetic field conveniently provides this required information. Additionally, it is crucial to acknowledge that relying solely on anisotropy does not offer a clear distinction regarding the magnetic field's orientation along the LOS, specifically whether the field is directed towards or away from our observation point. Consequently, the value of \(\gamma\) is inherently restricted to a limited range between 0 and 90\({}^{\circ}\). ## 3 Numerical method ### Convolutional neural network (CNN) To construct a deep neural network for the purpose of tracing the 3D magnetic field from a spectroscopic map, we adopt a CNN-based (LeCun et al., 1998) architecture. CNNs have demonstrated significant success in processing multidimensional data. The typical CNN architecture, as illustrated in Fig. 2, consists of initial layers comprising a stack of convolutional layers followed by pooling layers. To facilitate faster convergence during network training using backpropagation of the loss and to enhance the stability of learning, we introduce a batch normalization layer following each convolutional layer. After several iterations of convolution and pooling layers, we extract a compressed image feature, which is then processed by the fully connected layers to predict the desired properties. In the following, we introduce the core modules of the CNN architecture as well as the training procedure for the network.
**Convolutional Layer:** Serving as the fundamental component of a CNN, the convolutional layer processes input data to produce feature maps (LeCun et al., 1989). In this layer, each neuron connects to a local region of the input feature map. This connection is achieved by applying a 2D convolutional kernel \(w_{I}\) to the input feature map. This process can be mathematically described as follows: \[a_{I}=\sigma(w_{I}*h_{I-1}+b_{I}), \tag{8}\] where \(h_{I-1}\) and \(a_{I}\) are the input and output feature maps of the \(I\)-th convolutional layer, respectively, \(w_{I}\) is the learnable convolution kernel, and \(*\) indicates the convolution operation. In addition, a learnable bias \(b_{I}\) is applied to the input feature map. To be more concrete, \[a_{I}(x,y)=\sigma(\sum_{i=-k}^{k}\sum_{j=-k}^{k}w_{I}(i,j)h_{I-1}(x-i,y-j)+b_{I}(x,y)). \tag{9}\] By applying the 2D convolution kernel \(w_{I}\in\mathbb{R}^{(2k+1)\times(2k+1)}\) to the input feature map \(h_{I-1}\in\mathbb{R}^{d^{\text{in}}\times d^{\text{in}}}\), we obtain an output feature map of size \((d^{\text{in}}-2k)\times(d^{\text{in}}-2k)\). Here, \(d^{\text{in}}\) denotes the size of the input feature map and \(2k+1\) is the size of the convolution kernel. The resulting locally-weighted sum, once added to the learned bias, undergoes a non-linear transformation via the ReLU activation function \(\sigma(\cdot)\). To constrain the number of parameters that need to be learned in our network, we generally use small kernel sizes. While each layer has a limited receptive field focusing on local features through the utilization of small convolutional kernels, stacking multiple layers allows for the gradual expansion of this receptive field. Consequently, the network becomes capable of capturing global features within the image as the depth increases. **Batch Normalization Layer:** This is a technique frequently utilized in neural networks, playing a pivotal role in stabilizing them and hastening the convergence of the training loss during the backpropagation process (Ioffe and Szegedy, 2015). During each training iteration, it operates on a mini-batch of data. The layer normalizes each feature within the input data by centering its values around the mean and scaling based on the feature's standard deviation within the given batch. This normalization process is instrumental in mitigating the internal covariate shift -- a phenomenon where the distribution of inputs at each layer undergoes changes during training -- facilitating a more stable and efficient training process. Following the normalization, batch normalization introduces two learnable parameters per feature: a scaling parameter and a shifting parameter. These parameters allow the network to learn the optimal scale and shift for the normalized values autonomously, providing the model with the flexibility to modify the normalization if it learns that such an adjustment is beneficial for its predictive performance. These dynamic adjustments imbue the network with a degree of adaptability, allowing it to fine-tune the transformations applied to the features as needed during training. **Pooling Layer:** Following the detection of local features in the input feature maps by the convolutional layer, a pooling layer is typically employed to merge similar local features into a singular feature (Sermanet et al., 2013). One common variant of the pooling layer is the _Max Pooling Layer_.
This layer works by calculating the maximum value within a local patch of neurons and then outputting this maximum value as a single neuron. Importantly, the patches of input neurons for adjacent pooling units are shifted by more than one row or column, which effectively reduces the dimensionality of the feature representation. This process endows the network with a degree of invariance to minor shifts and distortions in the input data, as it condenses the information in the feature maps while retaining the most salient features. This reduction not only helps in making the detection of features invariant to scale and orientation changes but also enhances computational efficiency by reducing the number of parameters and computations in the network. **Fully Connected Layer:** After sequential operations that involve multiple convolutional layers and aggregation, the network derives a lower-dimensional compressed image feature map. Subsequently, this 2D feature map undergoes a transformation, being flattened into a 1D vector. The fully connected layer then processes this vector (Goodfellow et al., 2016). The role of the layer is critical, as it integrates the high-level reasoning over the features extracted and flattened previously. The mechanism involves applying learned weights and biases to this flattened vector to predict the final output. Mathematically, this operation can be represented as: \[\mathbf{y}=\sigma(\mathbf{W}\mathbf{h}+\mathbf{b}), \tag{10}\] In this equation, \(\mathbf{h}\in\mathbb{R}^{d_{\text{in}}}\) represents the flattened, compressed image feature vector, and \(\mathbf{y}\in\mathbb{R}^{d_{\text{out}}}\) symbolizes the predicted result. Here, \(\mathbf{W}\in\mathbb{R}^{d_{\text{out}}\times d_{\text{in}}}\) and \(\mathbf{b}\in\mathbb{R}^{d_{\text{out}}}\) denote the learnable weights and biases of the fully connected layer, respectively, and \(d^{\text{out}}\) represents the size of the output feature map. These weights and biases are integral to the layer's functionality, providing the means for it to learn and adapt during the training phase, ultimately allowing for the accurate prediction of the desired output from the input images. **Network Training:** The trainable parameters within the CNN are optimized by adhering to a conventional neural network training methodology, where the mean-squared error of the 3D magnetic field prediction serves as the training loss for backpropagation, as outlined in the seminal work by Rumelhart et al. (1986). During the training process, we implement a strategy designed to enrich the diversity of the training dataset and consequently enhance the generalization capabilities of the deep neural network. Specifically, this involves augmenting the input images by subjecting them to random cropping operations, resulting in smaller patches of size \(22\times 22\) cells. Such augmentation introduces variability and randomness into the training data, which is instrumental in refining the network's ability to generalize from the training data to unseen data, thereby bolstering its predictive accuracy and robustness. In total, we generated \(\approx 1.7\times 10^{7}\) input \(22\times 22\)-cell maps for each molecular species, with 20% of them serving as a validation set.

Figure 2: Architecture of the CNN model. The input image is a \(22\times 22\) pixel map cropped from the thin velocity channel map. The network outputs the prediction of \(\phi\), \(\gamma\), or M\({}_{A}\).
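A minimal Keras sketch of a network of this type is given below. The number of blocks, filter counts, and dense width are illustrative assumptions; only the 22 x 22 input size, the Conv + BatchNorm + MaxPooling layout, and the MSE loss follow the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn():
    """CNN mapping a 22x22 thin-channel patch to one scalar target
    (phi, gamma, or M_A); one network is trained per target."""
    inputs = layers.Input(shape=(22, 22, 1))
    x = inputs
    for filters in (32, 64):                     # two conv blocks (assumed)
        x = layers.Conv2D(filters, 3, padding="same")(x)  # Eq. 8/9 convolution
        x = layers.BatchNormalization()(x)       # stabilizes training
        x = layers.Activation("relu")(x)
        x = layers.MaxPooling2D(2)(x)            # max pooling
    x = layers.Flatten()(x)                      # 2D features -> 1D vector
    x = layers.Dense(128, activation="relu")(x)  # fully connected layer, Eq. 10
    outputs = layers.Dense(1)(x)                 # regression output
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")  # MSE loss, as in the paper
    return model
```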
### MHD simulations The numerical simulations used in this study were executed with the ZEUS-MP/3D code (Hayes et al., 2006). We performed an isothermal simulation of a 10 pc cloud by solving the ideal MHD equations in an Eulerian frame under periodic boundary conditions: \[\begin{split}&\partial\rho/\partial t+\nabla\cdot(\rho\mathbf{v})=0,\\ &\partial(\rho\mathbf{v})/\partial t+\nabla\cdot\left[\rho\mathbf{v}\mathbf{v}^{T}+(c_{s}^{2}\rho+\frac{B^{2}}{8\pi})\mathbf{I}-\frac{\mathbf{B}\mathbf{B}^{T}}{4\pi}\right]=\mathbf{f},\\ &\partial\mathbf{B}/\partial t-\nabla\times(\mathbf{v}\times\mathbf{B})=0,\\ &\nabla\cdot\mathbf{B}=0,\end{split} \tag{11}\] where \(\mathbf{f}\) represents the stochastic forcing term used to drive turbulence, and \(\rho\), \(\mathbf{v}\), and \(\mathbf{B}\) are the mass density, velocity, and magnetic field, respectively. Given the isothermal equation of state, the sound speed \(c_{s}\) was held constant at approximately 187 m/s, corresponding to a gas temperature of 10 K. Only purely turbulent scenarios were considered, excluding the impact of self-gravity. Kinetic energy was injected solenoidally (i.e., the forcing term is divergence-free) at the wavenumber \(k=2\pi/l\approx 2\) (in units of \(2\pi/L_{\rm box}\), where \(L_{\rm box}\) is the length of the simulation box) in Fourier space, where \(l\) is the length scale in real space, producing a Kolmogorov-like power spectrum. Turbulence was continuously driven until it reached a state of statistical saturation. The simulation was solved on a regular grid of \(792^{3}\) cells, and the turbulence was numerically dissipated at scales of approximately 10 - 20 cells. The simulations were initialized with a uniform density field and a uniform magnetic field, with the initial mean magnetic field oriented along the y-axis. Furthermore, we rotated the simulation cubes so that the mean inclination angle with respect to the LOS (the z-axis) reached \(90^{\circ}\), \(60^{\circ}\), and \(30^{\circ}\). The sonic Mach number, \(\mathrm{M}_{\mathrm{s}}=v_{\rm inj}/c_{s}\), and the Alfvenic Mach number, \(\mathrm{M}_{\mathrm{A}}=v_{\rm inj}/v_{A}\), characterize the MHD turbulence simulations. To model different ISM conditions, we used a typical mean number density of 300 cm\({}^{-3}\) and varied the initial uniform magnetic field and the injected kinetic energy to obtain a range of \(\mathrm{M}_{\mathrm{A}}\) and \(\mathrm{M}_{\mathrm{s}}\) values. In this paper, we refer to the simulations in Tab. 1 by their model name or key parameters. ### Emission lines of \({}^{13}\)CO and C\({}^{18}\)O We generate synthetic emission lines for two CO isotopologues, \({}^{13}\)CO (1-0) and C\({}^{18}\)O (1-0), following the procedures used in Hu & Lazarian (2021). This was achieved using the SPARX radiative transfer code (Hsieh et al., 2019). SPARX solves the radiative transfer equation for finite cells, which means that it considers the emission from a homogeneous finite element. The equation of statistical equilibrium for the molecular levels takes into account molecular self-emission, stimulated emission, and collisions with gas particles. Information on the distribution of molecular gas density (with mean density \(\sim 300\) cm\({}^{-3}\)) and LOS velocity was extracted from the MHD simulations mentioned above. The fractional abundances of the CO isotopologues \({}^{13}\)CO and C\({}^{18}\)O were set at \(2\times 10^{-6}\) and \(1.7\times 10^{-7}\), respectively.
We derive a \({}^{12}\)CO-to-H\({}_{2}\) ratio of \(1\times 10^{-4}\) from the cosmic value of C/H = \(3\times 10^{-4}\) and the assumption that 15% of C is in molecular form. The abundance of \({}^{13}\)CO is determined using a \({}^{13}\)CO/\({}^{12}\)CO ratio of 1/69, as indicated by Wilson (1999), giving a \({}^{13}\)CO/H\({}_{2}\) ratio of approximately \(2\times 10^{-6}\). Using a \({}^{12}\)CO/C\({}^{18}\)O ratio of 500, as given by Wilson et al. (2013), we obtained a C\({}^{18}\)O-to-H\({}_{2}\) ratio of \(1.7\times 10^{-7}\). When generating these synthetic emission lines, we specifically focused on the lowest transition, J = 1-0, of the CO isotopologues, with Local Thermodynamic Equilibrium (LTE) assumed to be satisfied. ### Training images Our training input is a thin velocity channel map, \(p(x,y,v_{0})\), derived from either the \({}^{13}\)CO (1-0) or C\({}^{18}\)O (1-0) line, calculated from: \[p(x,y,v_{0})=\int_{v_{0}-\Delta v/2}^{v_{0}+\Delta v/2}T_{\mathrm{e}}(x,y,v)dv, \tag{12}\] where \(v_{0}\) is the velocity associated with the line's central peak, \(T_{\mathrm{e}}\) is the emission line's intensity, and \(\Delta v=\sqrt{\delta(v^{2})}\). Here \(\sqrt{\delta(v^{2})}\) is the velocity dispersion derived from the moment-1 map (velocity centroid map). The \({}^{12}\)CO line, a common diffuse cloud tracer, is not used in this work due to numerical limitations related to the saturation of the \({}^{12}\)CO intensity in the channel centered at \(v_{0}\), which obliterates the spatial features of that channel (Hsieh et al., 2019). However, the CNN method could be extended to include wing channels centered at \(|v|<v_{0}\) to bypass this numerical saturation, a possibility we might explore in future work.2 Footnote 2: The use of wing channels has its own advantages through increasing the ratio of velocity to density fluctuations (Yuen et al., 2021; Hu et al., 2023). We generate \(p(x,y,v_{0})\) for the full cloud, a region of \(792\times 792\) cells, and then randomly segment \(p(x,y,v_{0})\) into \(22\times 22\)-cell subfields for input into the CNN model. The choice of \(22\times 22\) cells prevents the features from falling into the numerical dissipation range, in which the anisotropy of MHD turbulence is distorted by numerical diffusivity.

\begin{table} \begin{tabular}{c c c c c c c} \hline Run & \(\mathrm{M}_{\mathrm{s}}\) & \(\mathrm{M}_{\mathrm{A}}\) & \(\min[\mathrm{M}_{\mathrm{A}}^{\mathrm{sub}}]\) & \(\max[\mathrm{M}_{\mathrm{A}}^{\mathrm{sub}}]\) & \(\min[\mathrm{M}_{\mathrm{s}}^{\mathrm{sub}}]\) & \(\max[\mathrm{M}_{\mathrm{s}}^{\mathrm{sub}}]\) \\ \hline \hline A0 & 5.33 & 0.20 & 0.03 & 0.28 & 2.97 & 7.84 \\ A1 & 5.38 & 0.41 & 0.10 & 0.81 & 2.90 & 7.24 \\ A2 & 5.40 & 0.61 & 0.21 & 1.00 & 3.15 & 7.33 \\ A3 & 5.20 & 0.79 & 0.29 & 1.37 & 3.10 & 6.55 \\ A4 & 5.23 & 0.95 & 0.30 & 1.99 & 3.00 & 7.18 \\ A5 & 5.12 & 1.13 & 0.32 & 2.49 & 3.17 & 6.80 \\ A6 & 5.38 & 1.09 & 0.41 & 3.37 & 3.13 & 6.96 \\ A7 & 5.23 & 1.39 & 0.40 & 4.13 & 3.19 & 7.41 \\ A8 & 5.16 & 1.46 & 0.39 & 4.94 & 3.21 & 6.76 \\ A9 & 5.08 & 1.43 & 0.48 & 6.06 & 2.87 & 7.10 \\ \hline \end{tabular} \end{table} Table 1: \(\mathrm{M}_{\mathrm{s}}\) and \(\mathrm{M}_{\mathrm{A}}\) are the sonic Mach number and the Alfvénic Mach number calculated from the global injection velocity, respectively. \(\mathrm{M}_{\mathrm{A}}^{\mathrm{sub}}\) and \(\mathrm{M}_{\mathrm{s}}^{\mathrm{sub}}\) are determined using the local velocity dispersion calculated along each LOS in a \(22\times 22\)-cell sub-field; \(\min[...]\) and \(\max[...]\) denote the minimum and maximum values averaged over each \(22\times 22\)-cell sub-field within the corresponding simulation.
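A NumPy sketch of the thin-channel construction of Eq. 12 is given below; the cube layout (x, y, v) and the variable names are assumptions for illustration.

```python
import numpy as np

def thin_channel(ppv, v_axis, v0, dv):
    """Integrate a PPV intensity cube over the channel [v0 - dv/2, v0 + dv/2].

    ppv    : intensity cube with axes (x, y, v)
    v_axis : 1D array of channel velocities (assumed uniformly spaced)
    v0     : velocity of the line's central peak
    dv     : channel width, set to the velocity dispersion from the
             moment-1 map so that the channel is "thin"
    """
    mask = np.abs(v_axis - v0) <= dv / 2.0
    dv_cell = v_axis[1] - v_axis[0]
    return ppv[:, :, mask].sum(axis=2) * dv_cell
```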
In observations, the inertial range of MHD turbulence is much longer and the velocity channel map is not affected by this dissipation; the size of the sub-field could thus be smaller, achieving higher resolution.

For each subfield, we also generate corresponding projected maps of \(\phi^{\rm sub}\), \(\gamma^{\rm sub}\), \(M_{A}^{\rm sub}\), and \(M_{s}^{\rm sub}\) as per the following:

\[\begin{split}\phi^{\rm sub}(x,y)&=\arctan\left(\frac{\int B_{y}(x,y,z)dz}{\int B_{x}(x,y,z)dz}\right),\\ \gamma^{\rm sub}(x,y)&=\arccos\left(\frac{\int B_{z}(x,y,z)dz}{\int B(x,y,z)dz}\right),\\ \mathrm{M}_{A}^{\rm sub}&=\frac{v_{\rm inj}^{\rm los}\sqrt{4\pi\left\langle\rho\right\rangle_{\rm los}}}{\left\langle B\right\rangle_{\rm los}},\\ \mathrm{M}_{\rm s}^{\rm sub}&=\frac{v_{\rm inj}^{\rm los}}{c_{s}},\end{split} \tag{13}\]

where \(B=\sqrt{B_{x}^{2}+B_{y}^{2}+B_{z}^{2}}\) is the total magnetic field strength, and \(B_{x}\), \(B_{y}\), and \(B_{z}\) are its \(x\), \(y\), and \(z\) components. \(\left\langle\rho\right\rangle_{\rm los}\) and \(\left\langle B\right\rangle_{\rm los}\) are the gas mass density and magnetic field strength averaged along the LOS. \(\mathrm{M}_{A}^{\rm sub}\) and \(\mathrm{M}_{\rm s}^{\rm sub}\) are defined using the local velocity dispersion for each LOS (i.e., \(v_{\rm inj}^{\rm los}\)), rather than the global turbulent injection velocity \(v_{\rm inj}\) used to characterize the full simulation. The ranges of \(\mathrm{M}_{A}^{\rm sub}\) and \(\mathrm{M}_{\rm s}^{\rm sub}\) averaged over the subfields in each simulation with different \(\gamma\) are listed in Tab. 1, while \(\gamma^{\rm sub}\) spans from 0 to 90\({}^{\circ}\). These values of \(\mathrm{M}_{A}^{\rm sub}\), \(\mathrm{M}_{\rm s}^{\rm sub}\), and \(\gamma^{\rm sub}\) cover the typical physical conditions of diffuse molecular clouds (Hu & Lazarian, 2023c).
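In practice, the projected label maps of Eq. (13) reduce to sums along the LOS axis of the simulation cubes. The sketch below is a minimal version of that bookkeeping; the axis convention (LOS along \(z\)), the CGS units, and the two-argument arctangent (which resolves the quadrant ambiguity of the plain ratio) are assumptions.

```python
import numpy as np

def label_maps(Bx, By, Bz, rho, v_inj_los, c_s=1.87e4):
    """Projected phi, gamma, M_A and M_s maps of Eq. (13).

    Bx, By, Bz, rho : (N, N, N) cubes; the LOS is axis=2 (the z-axis).
    v_inj_los       : (N, N) map of the local LOS velocity dispersion.
    """
    phi = np.arctan2(By.sum(axis=2), Bx.sum(axis=2))     # POS position angle
    B = np.sqrt(Bx**2 + By**2 + Bz**2)
    gamma = np.arccos(Bz.sum(axis=2) / B.sum(axis=2))    # inclination angle
    M_A = v_inj_los * np.sqrt(4 * np.pi * rho.mean(axis=2)) / B.mean(axis=2)
    M_s = v_inj_los / c_s
    return phi, gamma, M_A, M_s
```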
## 4 Results

### Numerical training and tests

Fig. 3 provides a visualization detailing the influence of \(\mathrm{M}_{A}\) and \(\gamma\) on the anisotropy of intensity structures within thin velocity channels. In scenarios where both \(\mathrm{M}_{A}\) and \(\gamma\) values are small, the intensity structures distinctly manifest as slender strips extending in alignment with the POS magnetic fields. These structures are produced predominantly by the turbulent velocity (Lazarian & Pogosyan, 2000), as demonstrated in Hu et al. (2023). As \(\mathrm{M}_{A}\) increases, representing a weakening of the magnetic field, the MHD turbulence begins to more closely resemble isotropic hydrodynamical turbulence. This shift brings about a marked change in the topology of the intensity structures, making them less anisotropic. Alternatively, for smaller values of \(\gamma\), which imply that the magnetic fields are oriented closer to the LOS, the inherent anisotropy is subdued by the projection effect. Comparing \({}^{13}\)CO and C\({}^{18}\)O, C\({}^{18}\)O is more sensitive to denser gas, so its associated intensity structures exhibit distinct characteristics. Despite these differences, the underlying physical principle of anisotropic MHD turbulence remains the same, suggesting that \(\mathrm{M}_{A}\) and \(\gamma\) continue to shape the observed structural formations.

Fig. 4 provides a comparative visualization between the actual 3D magnetic fields and those predicted by the trained CNN model using \({}^{13}\)CO. This comparison is framed within two distinct conditions: sub-Alfvénic (simulation with \(\langle\mathrm{M}_{A}\rangle\approx 0.5\) and \(\langle\gamma\rangle\approx 90^{\circ}\)) and super-Alfvénic (simulation with \(\langle\mathrm{M}_{A}\rangle\approx 2.0\) and \(\langle\gamma\rangle\approx 30^{\circ}\)). Within these settings, the mean projected total Alfvén Mach number on the POS is \(\langle\mathrm{M}_{A}\rangle\approx 0.5\) for the sub-Alfvénic condition and \(\langle\mathrm{M}_{A}\rangle\approx 2.0\) for the super-Alfvénic one. Each magnetic field segment displayed in Fig. 4 is constructed from the POS magnetic field's position angle, \(\phi\), and the inclination angle, \(\gamma\), with a superimposed color representation signifying the projected \(\mathrm{M}_{A}\). Upon comparison with the intrinsic magnetic field embedded within the simulation, a noteworthy observation is the alignment between the orientations of the CNN-predicted 3D magnetic field and the actual field, evident under both sub-Alfvénic and super-Alfvénic conditions. In the sub-Alfvénic case, the CNN-predicted \(\mathrm{M}_{A}\) is slightly larger (by \(\approx 0.1-0.2\)) than the actual values. Conversely, in the super-Alfvénic scenario, the predicted value is somewhat smaller, with a deviation ranging from \(\approx 0.5-1.0\). Another example, with \(\langle\mathrm{M}_{A}\rangle\approx 0.15\) and \(\langle\gamma\rangle\approx 60^{\circ}\), is presented in Appendix A. Although this simulation shows an anisotropy degree similar to the case with \(\langle\mathrm{M}_{A}\rangle\approx 0.5\) and \(\langle\gamma\rangle\approx 90^{\circ}\), the CNN model effectively resolves the degeneracy in the correlation of the anisotropy degree with \(\gamma\) and \(\mathrm{M}_{A}\) (see § 2), successfully recovering the 3D magnetic field (see Fig. A1). It should be noted that the predicted \(\mathrm{M}_{A}\) is still overestimated by approximately 0.1 - 0.2.

Fig. 5 offers a similar visual comparison but focuses on the C\({}^{18}\)O line, which is generally recognized as a denser tracer compared to \({}^{13}\)CO. Despite this difference in tracer density, the CNN predictions for the C\({}^{18}\)O lines maintain a general alignment with the actual 3D magnetic fields observed within the simulations. Moreover, the overestimation and underestimation in the CNN-predicted \(\mathrm{M}_{A}\) are less significant.

Figs. 6 and 7 present 2D histograms illustrating the correspondence between the CNN predictions (\(\phi^{\rm CNN}\), \(\gamma^{\rm CNN}\), and \(\mathrm{M}_{A}^{\rm CNN}\)) and the actual values obtained from two test simulations, A2 and A6. In sub-Alfvénic cases for both \({}^{13}\)CO and C\({}^{18}\)O molecules, we observe a close alignment between the CNN predictions and the real values. The scatter of the predictions, including \(\phi^{\rm CNN}\), \(\gamma^{\rm CNN}\), and \(\mathrm{M}_{A}^{\rm CNN}\), demonstrates a small deviation from the actual values, tightly congregating near the one-to-one reference line. This minimal deviation suggests that the CNN model offers a high degree of accuracy and reliability when operating under sub-Alfvénic conditions. However, the scenario is different in super-Alfvénic cases. Here, the scatter is noticeably more widespread, indicating that deviations from the real values increase in these conditions.
The \(\phi^{\rm CNN}\) predictions, in particular, show a tendency for both overestimation and underestimation. In contrast, the \(\gamma^{\rm CNN}\) predictions are primarily characterized by overestimation, a trend that is especially prominent in cases involving C\({}^{18}\)O molecules. Meanwhile, the scatter related to the \(\mathrm{M}_{A}^{\rm CNN}\) predictions is distributed more uniformly around the reference line. This suggests that predicting the 3D magnetic field under super-Alfvénic conditions is more challenging and carries higher uncertainty.

Figure 4: A comparison of the CNN-predicted 3D magnetic fields using \({}^{13}\)CO in sub-Alfvénic (top, \(\langle\mathrm{M}_{A}\rangle\approx 0.5\) and \(\langle\gamma\rangle\approx 90^{\circ}\)) and super-Alfvénic (bottom, \(\langle\mathrm{M}_{A}\rangle\approx 2.0\) and \(\langle\gamma\rangle\approx 30^{\circ}\)) conditions. Each magnetic field segment is constructed from the POS magnetic field's position angle (i.e., \(\phi\)) and the inclination angle \(\gamma\). Note that the magnetic field obtained is the projection along the LOS, averaged over 132\(\times\)132 pixels for visualization purposes. The third axis (the LOS) is for 3D visualization purposes and does not provide distance information here. The total intensity map \(I\) is placed on the POS, i.e., the \(x-y\) plane.

In these environments, the magnetic field exerts a weaker influence, and the observed turbulence more closely resembles hydrodynamic turbulence, thereby complicating the prediction process. Enhancing prediction accuracy is feasible through two strategies. First, it is possible to further refine and optimize the CNN model to improve its adaptability and responsiveness to the unique features of super-Alfvénic MHD turbulence. For instance, Peek & Burkhart (2019) put forth a CNN model designed specifically to differentiate between sub-Alfvénic and super-Alfvénic turbulence. This model, with its specialized focus, offers a promising avenue for enhancing the accuracy of predictions in super-Alfvénic environments. Second, the training data set can be enriched: by incorporating a broader and more diverse range of images, the model can be exposed to a wider array of scenarios and conditions, thereby reducing uncertainty and improving its ability to make accurate predictions across different environments.

Figs. 8 and 9 plot the histograms of the deviations between the CNN-predicted and the actual 3D magnetic field. We calculate the absolute difference between \(\phi^{\text{CNN}}\) and \(\phi\), between \(\gamma^{\text{CNN}}\) and \(\gamma\), and between \(\text{M}^{\text{CNN}}_{\text{A}}\) and \(\text{M}_{\text{A}}\), respectively. These differences are denoted as \(\sigma_{\phi}\), \(\sigma_{\gamma}\), and \(\sigma_{\text{M}_{\text{A}}}\). In the sub-Alfvénic scenarios, we observed that the distributions of \(\sigma_{\phi}\) and \(\sigma_{\gamma}\) are relatively condensed, primarily falling within the 0 to 20\({}^{\circ}\) range. This concentration indicates a close alignment between the CNN predictions and the actual values in sub-Alfvénic environments, suggesting that the CNN model performs with high precision in these conditions. However, as \(\langle\text{M}_{\text{A}}\rangle\) increases, the distributions of \(\sigma_{\phi}\) and \(\sigma_{\gamma}\) broaden, spanning a more extensive range from 0 to 60\({}^{\circ}\).
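Since \(\phi\) is a position angle defined only modulo \(180^{\circ}\), the absolute difference should be folded accordingly; \(\gamma\) lies in \([0,90^{\circ}]\), so a plain absolute difference suffices there. A minimal sketch (the folding convention is our assumption):

```python
import numpy as np

def angle_deviation(a_pred, a_true, period=180.0):
    """Absolute angular difference in degrees, folded into [0, period/2].

    Use period=180 for the position angle phi; for gamma (in [0, 90] deg)
    np.abs(g_pred - g_true) is already sufficient.
    """
    d = np.abs(a_pred - a_true) % period
    return np.minimum(d, period - d)
```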
This dispersion is indicative of larger deviations between predicted and actual values under these conditions, implying that the CNN model may face challenges in accurately capturing the magnetic field dynamics as \(\langle\text{M}_{\text{A}}\rangle\) increases. Examining specific molecules, for \({}^{13}\)CO under sub-Alfvénic conditions, the median deviation values are relatively low: \(\sigma_{\phi}=3.26^{\circ}\), \(\sigma_{\gamma}=2.98^{\circ}\), and \(\sigma_{\text{M}_{\text{A}}}=0.16\). In contrast, under super-Alfvénic conditions, these values increase to \(12.32^{\circ}\), \(9.08^{\circ}\), and \(1.1\), respectively, highlighting an increase in prediction deviation as the environment transitions from sub- to super-Alfvénic. Similarly, for C\({}^{18}\)O, the median deviation values are \(2.22^{\circ}\), \(3.20^{\circ}\), and \(0.16\) under sub-Alfvénic conditions and \(12.08^{\circ}\), \(13.60^{\circ}\), and \(1.36\) under super-Alfvénic scenarios, underlining a consistent trend of increased deviation in super-Alfvénic environments across different molecules.

### Observational prediction

For the observational tests, our target is the nearby L1478 cloud. We utilized the \({}^{13}\)CO spectral line data from a previous study (Lewis et al., 2021). The data have a beam resolution of \(38^{\prime\prime}\) and were regridded to a pixel resolution of \(10^{\prime\prime}\), with a velocity resolution of 0.3 km s\({}^{-1}\). The 1D velocity dispersion \(\sigma_{v}\) of the \({}^{13}\)CO line was reported within the range of 0.40 - 0.70 km s\({}^{-1}\) (Lewis et al., 2021). Assuming an isotropic velocity dispersion in 3D and a uniform temperature of 10 K (corresponding to an isothermal sound speed of \(c_{s}\sim 0.187\) km s\({}^{-1}\); see Hu et al. 2021b), we find that the sonic Mach number \(\mathrm{M}_{s}=\sqrt{3}\sigma_{v}/c_{s}\) ranges from 3.69 to 6.45, falling into the parameter regimes of our numerical simulations.

Figure 5: Same as Fig. 4, but for C\({}^{18}\)O.

With these data, we applied our trained CNN model to the \({}^{13}\)CO channel map, aiming to predict the key 3D magnetic field parameters, denoted as \(\phi^{\mathrm{CNN}}\), \(\gamma^{\mathrm{CNN}}\), and \(\mathrm{M}_{A}^{\mathrm{CNN}}\). To validate the results yielded by our CNN application, we performed a comparative analysis with the POS magnetic field orientations inferred from Planck 353 GHz polarization data, drawn from the third Public Data Release (DR3) of Planck's High-Frequency Instrument (Planck Collaboration et al., 2020).

Figure 6: 2D histogram of the \({}^{13}\)CO CNN predictions, i.e., \(\phi^{\mathrm{CNN}}\) (left), \(\gamma^{\mathrm{CNN}}\) (middle), and \(\mathrm{M}_{A}^{\mathrm{CNN}}\) (right), and the corresponding actual values in simulation (Top: sub-Alfvénic, \(\langle\mathrm{M}_{A}\rangle\approx 0.5\) and \(\langle\gamma\rangle\approx 90^{\circ}\). Bottom: super-Alfvénic, \(\langle\mathrm{M}_{A}\rangle\approx 2.0\) and \(\langle\gamma\rangle\approx 30^{\circ}\)). The dashed reference line represents the ideal scenario, where the predicted and actual values match perfectly.

Figure 7: Same as Fig. 6, but for \(\mathrm{C}^{18}\mathrm{O}\).

The POS magnetic field orientation was inferred from the Stokes parameters \(Q\) and \(U\), converted to the IAU convention from HEALPix, using the equation \(\phi^{\rm Planck}=\frac{1}{2}\tan^{-1}(-U,Q)+\pi/2\).
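In code, this convention conversion is a one-liner; the sketch below uses the two-argument arctangent, which is what the notation \(\tan^{-1}(-U,Q)\) denotes:

```python
import numpy as np

def planck_pos_angle(Q, U):
    """POS magnetic field position angle from Planck Stokes maps (radians).

    The minus sign on U converts from the HEALPix/COSMO to the IAU
    polarization convention; the added pi/2 rotates from the polarization
    direction to the magnetic field orientation.
    """
    return 0.5 * np.arctan2(-U, Q) + np.pi / 2
```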
To enhance the signal-to-noise ratio, we smoothed the Stokes parameter maps from an angular resolution of \(5^{\prime}\) to \(10^{\prime}\) using a Gaussian kernel. As presented in Fig. 10, a remarkable alignment between the magnetic field orientations predicted by the CNN model and those from the Planck polarization data is observed, although a noticeable difference appears in the northeast clump (see the zoom-in plot in Fig. 10). To quantify the agreement between the CNN prediction and polarization, we utilize the Alignment Measure (AM; Gonzalez-Casanova & Lazarian, 2017), expressed as:

\[\mathrm{AM}=\langle\cos(2\theta_{t})\rangle, \tag{14}\]

where \(\theta_{t}\) is the relative angle between the two measurements. An AM value of \(\approx 0.94\) confirms that the CNN prediction is in excellent agreement with the Planck polarization3, corresponding to an overall deviation of \(\approx 10^{\circ}\).

Footnote 3: AM = 1 implies a perfect parallel alignment, while -1 indicates perpendicularity.

Figure 8: Histograms of the difference between the CNN-predicted \(\phi^{\rm CNN}\) (left), \(\gamma^{\rm CNN}\) (middle), and \(\rm M_{A}^{\rm CNN}\) (right) and the actual values in simulations using \({}^{13}\)CO.

Figure 9: Same as Fig. 8, but for \(\rm C^{18}\)O.

Figure 10: Comparison of the POS magnetic fields predicted by CNN-\({}^{13}\)CO (red segments) for the L1478 cloud and inferred from Planck polarization (blue segments). The background image is the integrated \({}^{13}\)CO intensity map.

A noteworthy advantage of our CNN model over traditional polarization methodologies is its ability to trace the 3D magnetic fields. This is achieved through the model's predictions of \(\gamma\) and \(\mathrm{M}_{A}\), which are summarized in the histograms of Fig. 11. According to the histograms, the median \(\gamma\) and \(\mathrm{M}_{A}\) of the L1478 cloud are estimated at \(\approx 76^{\circ}\) and \(\approx 1.07\), respectively. These measurements suggest that L1478 is a trans-Alfvénic cloud, in which there is an approximate equilibrium between the magnetic and turbulent kinetic energies. The parameters derived from the CNN application have been instrumental in creating the first-ever 3D magnetic field map for L1478, which can be viewed in Fig. 12.
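A minimal sketch of the AM statistic of Eq. (14); folding the relative angle modulo \(180^{\circ}\), appropriate for position angles, is our assumption:

```python
import numpy as np

def alignment_measure(phi_a_deg, phi_b_deg):
    """AM = <cos(2 theta_t)>, with theta_t the relative angle between two
    position-angle maps (each defined modulo 180 degrees).

    AM = 1 means perfect parallel alignment; AM = -1, perpendicularity.
    """
    d = np.abs(phi_a_deg - phi_b_deg) % 180.0
    theta_t = np.deg2rad(np.minimum(d, 180.0 - d))
    return np.mean(np.cos(2.0 * theta_t))
```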
## 5 Discussion

### Comparison with earlier studies

The exploration of magnetic fields within the ISM through CNNs is advancing swiftly. In a pilot study presented by Xu et al. (2023), the Convolutional Approach to Structure Identification-3D (CASI-3D) model was employed to map the 2D POS magnetic field orientation. This is achieved similarly by using the velocity channel maps obtained from spectroscopic observations, and the underlying physical principle is likewise founded on anisotropic MHD turbulence. The training process underpinning this approach uses the emission lines of \({}^{12}\)CO and \({}^{13}\)CO (J = 1 - 0), generated through the RADMC-3D code (Dullemond et al., 2012). In this study, we introduce a new CNN model designed to predict not merely the orientation \(\phi\) of the POS magnetic field but also the angle of field inclination, \(\gamma\), and the total Alfvén Mach number \(\mathrm{M}_{A}\). This approach allows the construction of 3D magnetic field vectors. For training the CNN model, we have utilized emission lines from \({}^{13}\)CO and C\({}^{18}\)O (J = 1 - 0), with data generated from the SPARX code (Hsieh et al., 2019).

We quantify the uncertainty of our CNN-predicted \(\phi\) and \(\gamma\). We found that the median uncertainties for C\({}^{18}\)O are approximately \(2.22^{\circ}\) (for \(\phi\)) and \(3.20^{\circ}\) (for \(\gamma\)) under sub-Alfvénic conditions (\(\langle\mathrm{M_{A}}\rangle\approx 0.5\)). These values shift to \(\sim 12.08^{\circ}\) and \(\sim 13.60^{\circ}\) under super-Alfvénic conditions (\(\langle\mathrm{M_{A}}\rangle\approx 2.0\)). When compared to the CASI-3D model, our CNN model demonstrates higher accuracy, as CASI-3D exhibits a median uncertainty of \(\sim 6.2^{\circ}\) and \(\sim 18.4^{\circ}\) under comparable sub-Alfvénic and super-Alfvénic conditions, respectively. Through the application of our CNN model to the L1478 molecular cloud, we successfully constructed its first 3D magnetic field map. The corresponding CNN-predicted POS magnetic field orientation shows remarkable alignment with that inferred from Planck 353 GHz polarization data.

Figure 11: Histograms of the CNN-predicted (as well as Planck-measured) \(\phi^{\mathrm{CNN}}\) (left, defined east from the north), \(\gamma^{\mathrm{CNN}}\) (middle), and \(\mathrm{M}_{A}^{\mathrm{CNN}}\) (right).

Figure 12: A visualization of the CNN-predicted 3D magnetic fields using \({}^{13}\)CO for the L1478 cloud. Each magnetic field segment is constructed from the position angle of the POS magnetic field (i.e., \(\phi\)) and the inclination angle \(\gamma\). Note that the magnetic field obtained is the projection along the LOS, averaged over 12\(\times\)12 pixels for visualization purposes. The third axis (the LOS) is for 3D visualization purposes and does not provide distance information here. The total intensity map \(I\) is placed on the POS, i.e., the \(l-b\) plane.

It is crucial to acknowledge that, despite the differences between the CNN models used by Xu et al. (2023) and in this study, the fundamental concept of utilizing spectroscopic channel maps for magnetic field investigation remains the same: (1) the intensity distribution observable in thin channel maps is predominantly influenced by turbulent velocity statistics, as outlined in Lazarian & Pogosyan (2000); Kandel et al. (2016); Hu et al. (2023); and (2) these channel maps capture the anisotropy intrinsic to MHD turbulence, thereby revealing the orientation of the POS magnetic field (Lazarian & Yuen, 2018; Hu et al., 2023). A crucial insight was provided by Hu et al. (2021c), highlighting that the degree of anisotropy in channel maps, as well as the magnetic field topology, is regulated by both \(\gamma\) and \(\mathrm{M}_{A}\). These are parameters that can be extracted efficiently using the CNN approach4. Thus, drawing upon these foundational theoretical studies, we propose the use of the CNN model as an efficient tool for tracing 3D magnetic fields, providing convincing physical reasons for interpreting its feasibility.

Footnote 4: Note that while the anisotropy and the magnetic field topology, which are sensitive to \(\gamma\) and \(\mathrm{M}_{A}\), are the most apparent features in channel maps, it is also possible that the CNN extracts additional features to facilitate the prediction.

### Synergy with other methods

Our newly proposed CNN model stands as a powerful complement to existing methodologies in the field.
One notable technique, which involves utilizing polarized dust emission, has proven effective in tracing the 3D magnetic field orientation within diffuse clouds, where dust grains are perfectly aligned with magnetic fields (Chen et al., 2019; Hu and Lazarian, 2023a,c). However, this technique may encounter limitations within dense cloud environments - for example, those traced by C\({}^{18}\)O - where dust grains might not maintain perfect alignment (Lazarian, 2007; Andersson et al., 2015). This loss of alignment, resulting in a phenomenon known as the polarization hole (Seifried et al., 2019; Pattle et al., 2019; Hoang et al., 2021), introduces uncertainties when tracing 3D magnetic fields through polarized dust emission. Unlike these traditional approaches, the CNN approach remains immune to the effects of the polarization hole. When supplied with emission lines from dense tracers like C\({}^{18}\)O, HNC, and NH\({}_{3}\), the CNN model can effectively probe the 3D magnetic fields present within dense clouds. Nonetheless, it is important to consider that within these dense cloud environments, self-gravity can become a significant factor. This gravitational influence might induce alterations in the anisotropy observed within channel maps (Hu et al., 2020b). Therefore, it becomes imperative to train the CNN model with carefully selected numerical simulations before applying it to observational data, to ensure accurate and reliable results.

Furthermore, it should be noted that the inclination angle predicted by the CNN model is inherently limited to the range [0, 90\({}^{\circ}\)]. This limitation arises because the anisotropy within channel maps alone cannot definitively discern whether the magnetic field is oriented towards or away from the observer. However, recent advancements in the field, particularly in Faraday rotation measurements within molecular clouds (Tahani et al., 2019, 2022), offer promising avenues to resolve this degeneracy.

Another relevant method worth discussing is the Velocity Gradient Technique (VGT; Gonzalez-Casanova and Lazarian, 2017; Lazarian and Yuen, 2018; Hu et al., 2018). Like our proposed CNN approach, the VGT traces magnetic fields using spectroscopic observations. Importantly, both the CNN approach and the VGT share a foundational physical principle: they rely on the anisotropy of MHD turbulence observed within thin channel maps. With the VGT having undergone extensive and rigorous testing (Hu et al., 2019; Lu et al., 2020; Hu et al., 2021; Liu et al., 2022; Alina et al., 2022; Liu et al., 2023; Hu and Lazarian, 2023b; Tram et al., 2023; Schmaltz et al., 2023), it is established as an excellent benchmark for evaluating the accuracy of CNN models, especially in situations where polarization measurements are not readily available. This benchmarking is crucial when CNNs are deployed for tracing 3D Galactic magnetic fields, highlighting the important comparative and complementary roles these techniques play in advancing our understanding of magnetic fields in various astrophysical contexts.

### Prospects of the CNN method

In the present study, we introduced a CNN model adept at predicting 3D magnetic fields within molecular clouds, utilizing spectroscopic observations of molecular gas.
However, the potential applications of this CNN method extend far beyond, encompassing various astrophysical environments and contexts, including neutral hydrogen (HI) regions, ionized gas, the Central Molecular Zone (CMZ), external galaxies, and supernova remnants. In the following sections, we outline several promising applications of this methodology.

#### 5.3.1 3D Galactic Magnetic Fields

A deep and comprehensive understanding of the 3D Galactic Magnetic Field (GMF; Jansson and Farrar, 2012) is paramount for addressing a host of astrophysical inquiries. These include identifying the origins of ultra-high-energy cosmic rays (Farrar, 2014; Farrar and Sutherland, 2019) and refining models of the Galactic foreground polarization (Kovetz and Kamionkowski, 2015; Planck Collaboration et al., 2016). Recent research indicates that thin channel maps of HI successfully capture the anisotropy inherent in MHD turbulence (Lazarian and Pogosyan, 2000; Lazarian and Yuen, 2018; Lu et al., 2020; Hu et al., 2023). Consequently, applying the CNN to HI channel maps constitutes a viable strategy for mapping 3D GMFs. Past efforts aimed at modeling the foreground polarization with HI primarily focused on mapping the POS magnetic field orientation (Clark and Hensley, 2019; Lu et al., 2020; Hu et al., 2020), largely neglecting the crucial depolarization factor, the inclination angle. However, the advent of sophisticated multi-phase HI simulations (Ho et al., 2021) has made it possible to train the CNN model for accurate predictions of 3D GMFs, yielding more realistic models of the foreground polarization.

Our primary goal in this paper is to explore the magnetic fields of molecular clouds, for which the isothermal approximation is applicable. Multi-phase HI requires separate training of the neural network. For multi-phase HI, where cooling and heating play a significant role, our general approach remains valid: intensity features/striations within channel maps continue to elongate along the POS magnetic field orientation. This is supported by several studies (Lazarian and Yuen, 2018; Clark and Hensley, 2019; Lu et al., 2020; Hu et al., 2020, 2023). These intensity features/striations are also regulated by the Alfvén Mach number (\(\mathrm{M}_{A}\)) and the projection effect associated with the inclination angle. However, additional physics, such as thermal instability, could modify the observed anisotropy, potentially leading, for instance, to a smaller aspect ratio (Ho et al., 2023). The corresponding study employing our approach for multi-phase HI will be provided elsewhere.

#### 5.3.2 3D Magnetic Fields in the CMZ and external galaxies

Understanding the magnetic fields within cold molecular gas is essential for deciphering the processes of formation and fueling of Seyfert nuclei. Recent measurements of magnetic fields within the CMZ and in other Seyfert galaxies have been conducted using various techniques. These include far-infrared polarization observations from instruments like SOFIA/HAWC+ (Lopez-Rodriguez et al., 2021), JCMT (Pattle et al., 2021), and ALMA (Lopez-Rodriguez et al., 2020), as well as the VGT (Hu et al., 2022, 2023). However, these approaches primarily yield the POS magnetic field orientation, falling short of providing a comprehensive 3D perspective. Nevertheless, the successful application of the VGT confirms the viability of using the anisotropy in molecular emission channel maps as a tracer of magnetic fields in these environments.
For instance, Hu et al. (2022) derived a POS magnetic field map surrounding Sgr A* using the [Ne II] emission line and a Paschen-\(\alpha\) image observed with the Hubble Space Telescope (HST). Given these advances, extending the CNN methodology to incorporate optical/near-infrared observations from instruments like the HST and the James Webb Space Telescope is a feasible and promising approach for predicting 3D magnetic fields in both the Galactic center and external galaxies.

### Obtaining the full 3D magnetic field vector

3D magnetic fields, encompassing both orientation and strength, play a pivotal role in comprehending key astrophysical phenomena. These include processes such as star formation (Mestel, 1965; Mac Low and Klessen, 2004; McKee and Ostriker, 2007; Lazarian et al., 2012; Federrath and Klessen, 2012; Hu et al., 2021), the effects of stellar feedback (Pattle et al., 2022; Liu et al., 2023), and the acceleration and propagation of cosmic rays (Fermi, 1949; Jokipii, 1966; Yan and Lazarian, 2002; Xu and Yan, 2013; Xu and Lazarian, 2020; Hu et al., 2022; Beattie et al., 2022; Lazarian and Xu, 2023). Traditionally, the strength of these fields is obtained with the Davis-Chandrasekhar-Fermi (DCF) method, which typically combines dust polarimetry with spectroscopic observations (see Davis, 1951; Chandrasekhar and Fermi, 1953). However, this often proves insufficient: the DCF method gives only the POS magnetic field strength, while the component along the LOS is missing. Other limitations of the DCF method have also been thoroughly dissected in the literature (Skalidis et al., 2021; Lazarian et al., 2022; Chen et al., 2022; Liu et al., 2022). In light of this, an alternative approach has been proposed: the use of the Alfvén Mach number \(\mathrm{M}_{A}\) together with the sonic Mach number \(\mathrm{M}_{s}\) to derive the magnetic field strength (Lazarian et al., 2020). This method, aptly termed MM2, can be used to obtain the total strength, particularly since the vital term \(\mathrm{M}_{A}\) is readily available with the CNN approach proposed in this study. The sonic Mach number \(\mathrm{M}_{s}\) can be procured either directly via spectroscopic line broadening or by leveraging a CNN approach similar to the one in our current study. Coupled with the 3D magnetic field orientation, this equips us with the necessary tools to construct a 3D magnetic field vector.

## 6 Summary

In this study, a CNN model was designed for the intricate task of probing 3D magnetic fields within molecular clouds. This model is not confined to determining the POS magnetic field orientation but extends its capabilities to accurately ascertain the field's inclination angle and the total Alfvén Mach number, offering a more comprehensive understanding of the magnetic field in the observed regions. We summarize our major results below:

1. We developed a CNN model for probing 3D magnetic fields, including the POS magnetic field orientation, the inclination angle, and the total Alfvén Mach number.

2. The CNN model was trained using synthetic \({}^{13}\)CO and C\({}^{18}\)O (J = 1 - 0) emission lines, encompassing a range of conditions from sub-Alfvénic to super-Alfvénic. We quantified the uncertainties associated with the trained CNN model's predictions. Our findings revealed that the uncertainties are less than 5\({}^{\circ}\) for both \(\phi\) and \(\gamma\), and smaller than 0.2 for \(\mathrm{M}_{A}\), under sub-Alfvénic conditions (with \(\mathrm{M}_{A}\approx 0.5\)).
Under super-Alfvénic conditions (with \(\mathrm{M}_{A}\approx 2.0\)), the uncertainties increased slightly but remained below 15\({}^{\circ}\) for \(\phi\) and \(\gamma\), and were around 1.5 for \(\mathrm{M}_{A}\).

3. We applied our trained CNN model to the molecular cloud L1478. The CNN-predicted POS magnetic field orientation exhibited remarkable agreement with the orientations inferred from Planck 353 GHz polarization data, with a marginal global difference of approximately 10\({}^{\circ}\).

4. This study facilitated the construction of the first 3D magnetic field map for the L1478 cloud. Through our analysis, we found that the cloud's global inclination angle is approximately 76\({}^{\circ}\), while its global total Alfvén Mach number is close to 1.07.

5. We discussed the potential applications and future prospects of the CNN approach. In particular, we discussed the feasibility and potential of utilizing the CNN model for predicting 3D Galactic magnetic fields, and we considered its application for understanding 3D magnetic fields in the CMZ and in external galaxies.

## Acknowledgements

Y.H. and A.L. acknowledge the support of NASA ATP AAH7546, NSF grant AST 2307840, and ALMA SOSPADA-016. Financial support for this work was provided by NASA through award 09_0231 issued by the Universities Space Research Association, Inc. (USRA). This work used the Expanse CPU cluster at SDSC through allocations PHY230032, PHY230033, PHY230091, and PHY230105 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296. Y.H. acknowledges the very kind computational and technical support of Bowen Cao.

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2301.09634
Inference of the optical depth to reionization $\tau$ from $\textit{Planck}$ CMB maps with convolutional neural networks
The optical depth to reionization, $\tau$, is the least constrained parameter of the cosmological $\Lambda$CDM model. To date, its most precise value is inferred from large-scale polarized CMB power spectra from the ${\it Planck}$ High-Frequency Instrument (HFI). These maps are known to contain significant contamination by residual non-Gaussian systematic effects, which are hard to model analytically. Therefore, robust constraints on $\tau$ are currently obtained through an empirical cross-spectrum likelihood built from simulations. In this paper, we present a likelihood-free inference of $\tau$ from polarized ${\it Planck}$ HFI maps which, for the first time, is fully based on neural networks (NNs). NNs have the advantage of not requiring an analytical description of the data and can be trained on state-of-the-art simulations, combining information from multiple channels. By using Gaussian sky simulations and ${\it Planck}$ ${\tt SRoll2}$ simulations, including CMB, noise, and residual instrumental systematic effects, we train, test and validate NN models considering different setups. We infer the value of $\tau$ directly from $Q$ and $U$ maps at $\sim 4^\circ$ pixel resolution, without computing power spectra. On ${\it Planck}$ data, we obtain $\tau_{NN}=0.058 \pm 0.008$, compatible with current EE cross-spectrum results but with a $\sim30\%$ larger uncertainty, which can be assigned to the inherent non-optimality of our estimator and to the retraining procedure applied to avoid biases. While this paper does not improve on current cosmological constraints, our analysis represents a first robust application of NN-based inference on real data and highlights its potential as a promising tool for complementary analysis of near-future CMB experiments, also in view of the ongoing challenge to achieve a detection of primordial gravitational waves.
Kevin Wolz, Nicoletta Krachmalnicoff, Luca Pagano
2023-01-23T18:59:52Z
http://arxiv.org/abs/2301.09634v2
Inference of the optical depth to reionization \(\tau\) from _Planck_ CMB maps with convolutional neural networks

###### Abstract

The optical depth to reionization, \(\tau\), is the least constrained parameter of the cosmological \(\Lambda\)CDM model. To date, its most precise value is inferred from large-scale polarized CMB power spectra from the high-frequency instrument (HFI) aboard the _Planck_ satellite. These maps are known to contain significant contamination by residual non-Gaussian systematic effects, which are hard to model analytically. Therefore, robust constraints on \(\tau\) are currently obtained through an empirical cross-spectrum likelihood built from simulations. In this paper, we present a likelihood-free inference of \(\tau\) from polarized _Planck_ HFI maps which, for the first time, is fully based on neural networks (NNs). NNs have the advantage of not requiring an analytical description of the data and can be trained on state-of-the-art simulations, combining the information from multiple channels. By using Gaussian sky simulations and _Planck_ SRoll2 simulations, including CMB, noise, and residual instrumental systematic effects, we train, test, and validate NN models considering different setups. We infer the value of \(\tau\) directly from Stokes \(Q\) and \(U\) maps at \(\sim 4^{\circ}\) pixel resolution, without computing angular power spectra. On _Planck_ data, we obtain \(\tau_{\rm NN}=0.0579\pm 0.0082\), compatible with current \(EE\) cross-spectrum results but with a \(\sim 30\%\) larger uncertainty, which can be assigned to the inherent non-optimality of our estimator and to the retraining procedure applied to avoid biases. While this paper does not improve on current cosmological constraints on \(\tau\), our analysis represents a first robust application of NN-based inference on real data, and highlights its potential as a promising tool for complementary analysis of near-future CMB experiments, also in view of the ongoing challenge to achieve the first detection of primordial gravitational waves.

## 1 Introduction

Cosmic reionization, the period in cosmic history that accompanies the ignition of the first stars, is of great interest to both astrophysics and cosmology. At recombination, about 380,000 years after the Big Bang, free electrons were bound in hydrogen atoms, causing the decoupling of matter from the photon field that we observe today as the Cosmic Microwave Background (CMB). This is when the Universe entered an electrically neutral phase, called the cosmic "dark ages". It is presumed that, about 200 million years later, cold hydrogen gas had collapsed gravitationally in dark matter halos, forming the first stars. These earliest compact sources of UV radiation heated up the surrounding hydrogen gas, progressively ionizing the whole Universe via bubbles of expanding HII regions.

The first experimental evidence for reionization was the discovery of the so-called _Gunn-Peterson trough_ in the absorption spectra of high-redshift quasars (Gunn & Peterson, 1965; Scheuer, 1965), suggesting that the Universe must have undergone an electrically neutral phase. Modern quasar measurements predict reionization to have ended by \(z\sim 6\) (Fan et al., 2006; Schroeder et al., 2012; Becker et al., 2015; Villasenor et al., 2022; Dayal & Ferrara, 2018).

Reionization plays a crucial role for cosmology, too. Photons emitted during recombination have a finite probability of Compton scattering off free electrons released during reionization.
For us as observers, this has two effects: firstly, fewer CMB photons will reach us on Earth, having to traverse an optically thick intergalactic medium. Secondly, a statistically relevant fraction of CMB photons will scatter _into_ our line of sight, carrying a nonzero net polarization. The first effect reduces the overall intensity of the CMB emission (in both the unpolarized and the polarized component) by a factor \(\exp(-2\tau)\), where \(\tau\) is the optical depth to reionization, defined as

\[\tau=\int_{t(z_{\rm CMB})}^{t_{0}}n_{\rm e}\sigma_{\rm T}\,c\,{\rm d}t^{\prime}\,. \tag{1}\]

Here, \(z_{\rm CMB}\approx 1100\) is the redshift of last scattering between photons and baryons, \(n_{\rm e}\) is the free electron number density, and \(\sigma_{\rm T}\) is the Thomson scattering cross section. The second effect gives rise to secondary anisotropies in the CMB polarization, adding power at very large angular scales \(\gtrsim 20^{\circ}\) (or multipoles \(\ell\lesssim 10\)). Only full-sky space missions have been able to measure this "reionization bump", through pixel-based and power-spectrum-based analysis methods. The \(EE\) and \(TE\) power spectra from the _WMAP_ 9-year release yield \(\tau=0.089\pm 0.014\) (Hinshaw et al., 2013), a value that later turned out to be biased high due to Galactic dust (Planck Collaboration XI, 2016; Natale et al., 2020). The _Planck_ low-frequency instrument (LFI) polarization data at 70 GHz contain a lower level of large-scale systematics than the HFI data at 100 GHz and 143 GHz, motivating the _Planck_ Collaboration to perform a map-based analysis on LFI and an \(EE\) cross-spectrum analysis on HFI. The 2018 legacy release data constrain reionization to \(\tau=0.063\pm 0.020\) for LFI and \(0.051\pm 0.009\) for HFI (Planck Collaboration V, 2020). The cross-spectrum analysis of the _Planck_ HFI data at 143 GHz and 100 GHz obtains the tightest constraint on \(\tau\), while avoiding the bias arising from uncorrelated noise in the individual channels.

The _Planck_ 2018 legacy polarization data products at large scales are known to be affected by residual contamination from instrumental systematic effects. At 143 GHz and 100 GHz, these are (Planck Collaboration VI, 2014; Delouis et al., 2019):

* detector-related temperature-to-polarization (\(T\)-to-\(P\)) leakage due to the analog-to-digital converter nonlinearity (ADCNL),
* uncertainties on the bolometers' polarization efficiency and detector orientation,
* foreground-related \(T\)-to-\(P\) leakage due to bandpass mismatch and inaccurate foreground modelling,
* the time transfer function associated with heat transfer to the bolometers.

In general, these systematic effects possess non-Gaussian statistics and are expected to correlate among different channels, mainly because they are partially sourced by the temperature signal. Several updated map-making codes have been published that improve on the systematics cleaning, like SRoll2 (Delouis et al., 2019) and NPIPE (Planck Collaboration Int. LVII, 2020). The SRoll2 algorithm, an upgraded version of the _Planck_ Collaboration's SRoll algorithm (Planck Collaboration Int. XLVI, 2016), iteratively cleans systematics from _Planck_'s time-ordered data products. Major improvements in SRoll2 encompass a new gain calibration model that accounts for second-order ADCNL, updated foreground templates, and an internal marginalization over the polarization angles and efficiencies for each bolometer.
The SRoll2 data products contain a significantly lower level of spurious systematic effects and a dipole residual power reduced by 50% with respect to the _Planck_ 2018 legacy data, falling below the noise level. The SRoll2 \(EE\) cross-spectrum is dominated by the cosmological signal at all scales considered in the analysis (\(2<\ell<30\)). In spite of the improved cleaning, a small residual contamination remains (mainly due to the second-order ADCNL effect), which may bias cosmological analyses. For their \(100\times 143\) GHz \(EE\) cross-spectrum analysis of the SRoll2 data products, Pagano et al. (2020) use an empirical likelihood built from realistic simulations (Planck Collaboration V, 2020; Gerbino et al., 2020), motivating their choice by the expected non-Gaussianity of the maps and by the difficulty of modelling residual systematic effects analytically. They obtain \(\tau=0.0566^{+0.0053}_{-0.0062}\) from \(EE\) only and \(\tau=0.059\pm 0.006\) when combining with \(TT\) data. Compared with the \(EE\) results from the _Planck_ 2018 legacy release (\(\tau=0.051\pm 0.009\)), this reduces the uncertainty by \(\sim 40\%\) and increases the best-fit \(\tau\) value by up to \(0.9\sigma\). More recently, de Belsunce et al. (2021) applied various likelihood approximation schemes to \(EE\) cross-spectrum data from SRoll2 maps, finding results compatible with Pagano et al. (2020), though slightly larger by \(0.3\sigma\).

In recent years, neural network (NN)-based approaches for likelihood-free inference have undergone rapid development in cosmology, showing potential as an alternative tool for parameter estimation that does not require an analytical description of the data, but relies only on numerical simulations to train a regression model. In the general context of cosmology, a variety of machine learning (ML) techniques have been exploited and tested in recent years. Promising tools are being developed for many applications: from cosmic large-scale structure (LSS) simulations (Villaescusa-Navarro et al., 2022), to CMB lensing reconstruction (Caldeira et al., 2019), kinetic SZ detection (Tanimura et al., 2022), and foreground cleaning and modelling (Jeffrey et al., 2022; Wang et al., 2022; Casas et al., 2022; Krachmalnicoff and Puglisi, 2021). NN-based inference of cosmological parameters has seen significant progress in the context of observations of the LSS, where the complexity of the cosmological and astrophysical signals, together with the difficulty in the definition of optimal summary statistics, challenges analytical methods. Up to now, this approach has mostly been tested on simulations (see _e.g._, Villaescusa-Navarro et al., 2022), with applications to real data still limited in number, although leading to promising results (_e.g._, Fluri et al., 2019). In this context, CMB data analysis could also benefit from the application of NN-based inference, helping overcome the limitations of traditional methods. This is relevant, for example, for the estimation of parameters affecting the large angular scales, like the optical depth to reionization, critically harmed by the presence of spurious non-Gaussian signals, as outlined above.

In this paper, we show the first map-level cosmological inference on CMB data that is entirely based on convolutional neural networks (CNNs).
We use CNNs to infer the optical depth to reionization \(\tau\) and its statistical uncertainty from _Planck_ multi-frequency maps in the 100 and 143 GHz channels at scales \(\gtrsim 4^{\circ}\), having trained and validated our models on the SRoll2 simulations. Using moment networks (Jeffrey and Wandelt, 2020), we infer \(\tau\) and its statistical uncertainty \(\sigma(\tau)\) from a single data set. In particular, we demonstrate:

1. When training the CNN on simulations with realistic, correlated Gaussian noise, we achieve unbiased estimates of \(\tau\) from maps.

2. Our CNN models can effectively combine multi-frequency information, recognizing common features across channels, not only to reduce statistical uncertainties but also to diminish the impact of noise and systematic effects.

3. Training on non-Gaussian data is necessary to obtain unbiased results on the SRoll2 test simulations and on _Planck_ data. Limited by the low number of simulations that contain _Planck_ systematics, we are forced to build a retrained model, which increases the error bars on \(\tau\) by \(\sim 30\%\) in exchange for unbiased results.

We structure this paper as follows. The simulations and data used in this work are presented in Section 2, followed by the neural network inference method, which we describe in Section 3. In order to validate this method, we apply it to a series of simulations and present the results in Section 4. The final results on the _Planck_ SRoll2 maps are shown and discussed in Section 5. We conclude the paper in Section 6.

## 2 Simulations and data

The goal of our analysis is to build a NN model able to infer the value of the \(\tau\) cosmological parameter from _Planck_ low-resolution polarization maps. In particular, in this work we use the SRoll2 maps at 100 and 143 GHz. In order to achieve this goal we need a large number of simulations to perform NN training, testing, and validation. We generate simulated maps that include the CMB emission, noise, and instrumental systematic effects, as well as possible spurious signals coming from our Galaxy. In this section, we describe the simulations, the data, and the sky masks needed to avoid the highly contaminated Galactic plane region.

### Simulated CMB maps

Polarized CMB anisotropies, observed at the _Planck_ noise levels, can be sufficiently well represented by a spin-2 field with Gaussian statistics (Planck Collaboration IX, 2020). The \(TT\), \(TE\), and \(EE\) power spectra characterize the probability distribution of the CMB temperature and polarization anisotropies and can be described by the six parameters of the \(\Lambda\)CDM model. Analyses of small-scale temperature data from the _Planck_ 2018 legacy release place a 0.5% constraint on the parameter combination \(10^{9}\,A_{s}\,e^{-2\tau}=1.88\pm 0.01\) (Planck Collaboration VI, 2020). Varying the two parameters (\(A_{s}\), \(\tau\)) simultaneously, conditioned on \(10^{9}\,A_{s}\,e^{-2\tau}=1.884\), we use the Boltzmann solver CAMB1 (Lewis et al., 2000) to generate a lookup table of \(EE\) power spectra computed within the \(\Lambda\)CDM model. To build the simulated CMB maps used to train and validate our NN models, we discretize \(\tau\in[0.01,\,0.13]\) with step size \(\Delta\tau=5\times 10^{-4}\), while the other five \(\Lambda\)CDM parameters are fixed to the _Planck_ 2018 legacy best-fit values \(H_{0}=67.32\) km/s/Mpc, \(\Omega_{b}h^{2}=0.02237\), \(\Omega_{c}h^{2}=0.1201\), \(n_{s}=0.9651\), and \(m_{\nu}=0.06\) eV.
From the tabulated power spectra we uniformly draw 200,000 samples, based on which we generate 200,000 pairs of full-sky Stokes \(Q\) and \(U\) maps using the HEALPix package (Gorski et al., 2005). We fix the \(Q\) and \(U\) maps' angular pixel resolution by choosing \(N_{\rm side}=16\) (a pixel size of \(\sim 4^{\circ}\)) and smooth each map with a cosine beam window function (Benabed et al., 2009), in analogy with the procedure used to generate the _Planck_ SRoll2 maps (see Section 2.4). The large scales retained in our maps correspond to multipoles \(\ell\lesssim 50\), where the reionization peak leaves an observable imprint in the CMB \(EE\) spectrum.

Footnote 1: [http://camb.info](http://camb.info)

### Simulated Gaussian noise

_Planck_ maps contain Gaussian instrumental noise which, in pixel space, is well described by the FFP8 covariance matrices (Planck Collaboration XII, 2016). We draw samples from them for the _Planck_ 100 and 143 GHz polarization channels (Planck Collaboration VI, 2014; Planck Collaboration XIII, 2016), obtaining 200,000 Gaussian noise maps at pixel resolution \(N_{\rm side}=16\) for each of the two channels. We coadd the training maps of CMB and noise to obtain 200,000 _Planck_-like simulations on the full sky, out of which we select 190,000 for training and 10,000 for validation. For the testing phase, we draw new noise samples in the same fashion as before, but coadd them with CMB simulations with fixed input values \(\tau=0.05,\,0.06\), and 0.07, generated with different seeds than the ones used for training and validation. In this way, we obtain 3 sets of 10,000 independent Gaussian test simulations with fixed input cosmologies.

### SRoll2 simulations

The SRoll2 simulations (Delouis et al., 2019) improve on the SRoll simulations published along with _Planck_'s third data release (Planck Collaboration Int. LVII, 2020). They are the result of applying the SRoll2 cleaning algorithm to a set of 500 _Planck_-like realistic sky simulations containing modeled noise, foregrounds, and instrument systematics. We choose the SRoll2 simulations as our reference for the systematic effects present in the SRoll2 _Planck_ data. All simulated maps are cleaned from Galactic foregrounds through a template fitting procedure, as described in Pagano et al. (2020). In order to produce our training set, we start with 400 of the 500 original SRoll2 simulations, containing pairs of \(Q\) and \(U\) full-sky maps at pixel resolution \(N_{\rm side}=16\) and two channels corresponding to 100 GHz and 143 GHz. To augment our original SRoll2 simulation set, we randomly draw SRoll2 100 GHz and 143 GHz maps from the original 400 maps (with repetition), keeping corresponding \(Q\) and \(U\) maps together. This allows us to assemble a total of 200,000 SRoll2 simulations. After coadding them with CMB simulations, we obtain a set of 200,000 polarized full-sky simulations, used for training and validation. For the testing phase, we make \(3\times 100\) copies of the 100 unseen SRoll2 maps and coadd them with \(10,000\) CMB maps with fixed input \(\tau=0.05,\,0.06\), and 0.07, respectively. In this way we obtain a set of \(3\times 10,000\) full-sky SRoll2 test simulations.
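A minimal sketch of how one such training simulation could be drawn is given below, assuming the \(EE\)-spectrum lookup table and the Cholesky factor of the pixel noise covariance have already been computed; the cosine beam smoothing and the per-channel bookkeeping of the actual pipeline are omitted.

```python
import numpy as np
import healpy as hp

NSIDE = 16

def simulate_qu(cl_ee, noise_cov_chol, rng):
    """One CMB + correlated-Gaussian-noise simulation at N_side = 16.

    cl_ee          : EE power spectrum for a sampled tau (TT, BB, TE zero)
    noise_cov_chol : Cholesky factor of the (Q, U) pixel noise covariance,
                     shape (2 npix, 2 npix)
    """
    npix = hp.nside2npix(NSIDE)
    zeros = np.zeros_like(cl_ee)
    # synfast with new=True expects (TT, EE, BB, TE); pol=True returns (I, Q, U)
    _, Q, U = hp.synfast([zeros, cl_ee, zeros, zeros], NSIDE, pol=True, new=True)
    noise = noise_cov_chol @ rng.standard_normal(2 * npix)
    return Q + noise[:npix], U + noise[npix:]
```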
### Planck maps

The goal of this work is the analysis of the SRoll2 _Planck_ polarization data products (Delouis et al., 2019). They consist of Stokes \(Q\) and \(U\) maps in the 100 GHz and 143 GHz HFI frequency channels, stored at pixel resolution \(N_{\rm side}=16\). The _Planck_ maps are first smoothed with cosine beam window functions, and then cleaned from foreground contamination through a template fitting procedure (Pagano et al., 2020). Figure 1 shows the map products in Galactic coordinates. We note that, close to the Galactic plane, \(Q\) and \(U\) in both channels are visibly contaminated by residual systematic effects, which we mask prior to the analysis in order to avoid biases. The arc-shaped features in the northern and southern Galactic hemispheres likely indicate residual gain variations caused by the ADCNL systematic effect. As shown by Delouis et al. (2019), these features show lower residual power than the CMB in the 100\(\times\)143 GHz \(EE\) cross-spectrum, but may still amount to a non-negligible bias in cosmological analyses.

### Masks

At low Galactic latitudes, the Milky Way emits polarized foreground radiation which dominates over the CMB signal in intensity and polarization. Even after component separation, residuals of this emission need to be excluded from the analysis to avoid biasing cosmological results. We therefore apply masks to all maps described in the previous subsections. We consider two of the binary polarization masks published in Pagano et al. (2020), retaining sky fractions of \(f_{\rm sky}=\{50\%,60\%\}\). We smooth them with Gaussian beams of corresponding FWHM of \(\{15^{\circ},16^{\circ}\}\), and apply a binary threshold, setting all pixels with a value larger than 0.5 to one and all others to zero. This procedure allows us to avoid fuzzy borders and to mitigate groups of isolated masked pixels. The smoothed masks are shown in Figure 2. Our baseline mask in this paper is the \(f_{\rm sky}=0.5\) smoothed mask, as it retains enough large-scale information to constrain \(\tau\), but avoids excessive levels of foregrounds in the Galactic plane.

## 3 NN inference

In this work, we use CNNs to build simulation-based empirical models to perform cosmological inference. In the following, we describe our CNN implementation and give details on the procedures applied to train and test our model on simulations.

### CNN architecture for \(\tau\) estimation

CNNs are the industry standard of pattern recognition in 2-dimensional images, performing both classification (_e.g._, identifying families of objects) and regression tasks (_e.g._, estimating continuous parameters from maps). The success of CNNs in extracting low-dimensional information from visual input is due to a multi-layer image filtering algorithm. This typically involves searching for distinct sets of local features in the original image (through convolution) and compressing the data (through so-called pooling layers), going to lower and lower resolution, before inferring the desired summary statistic. In our case, we want to retrieve information from data projected on the sphere, requiring convolutions on the spherical domain. To this end, we make use of the NNhealpix2 algorithm, which allows us to build deep spherical CNNs taking advantage of the HEALPix tessellation. In particular, NNhealpix performs convolution by looking at the first neighbors of each pixel in the map, and average pooling by downgrading the map resolution (_i.e._, by going to a lower \(N_{\rm side}\) parameter). We refer to Krachmalnicoff and Tomasi (2019) for additional details on how the algorithm works, as well as its advantages and disadvantages.
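The two NNhealpix building blocks just described can be sketched with standard healpy calls; this is a conceptual illustration of the operations, not the library's actual implementation:

```python
import numpy as np
import healpy as hp

def neighbour_conv(m, w, nside):
    """Spherical convolution: weighted sum over each pixel and its first
    neighbours (9 weights; neighbours missing at pixel boundaries are
    returned as -1 by healpy and skipped here)."""
    npix = hp.nside2npix(nside)
    neigh = hp.get_all_neighbours(nside, np.arange(npix), nest=True)  # (8, npix)
    out = w[0] * m
    for j in range(8):
        valid = neigh[j] >= 0
        out[valid] += w[j + 1] * m[neigh[j][valid]]
    return out

def average_pool(m, nside):
    """Average pooling: each N_side/2 parent pixel becomes the mean of its
    four children at resolution N_side (NESTED ordering assumed)."""
    return hp.ud_grade(m, nside // 2, order_in="NESTED", order_out="NESTED")
```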
In this work, we use NNhealpix in combination with the public keras python package3 to build our deep CNN architecture, and to perform training, validation, and testing.

Footnote 2: [https://github.com/ai4cmb/NNhealpix](https://github.com/ai4cmb/NNhealpix)

Footnote 3: [https://keras.io](https://keras.io)

The first part of our CNN, performing image filtering, consists of four CNN building blocks, as illustrated in Figure 3. We accept \(N_{\rm map}\) input maps, which in our case represent one or two frequency channels with Stokes \(Q\) and \(U\) maps, hence \(N_{\rm map}=2\) or \(4\). Each convolutional layer introduces 32 filters with 9 trainable pixel weights each, and is followed by a Rectified Linear Unit (ReLU) activation layer. Mathematically, this means that each image pixel \(p_{i}\) undergoes a linear transformation \(f\) followed by a nonlinear transformation \(g\),

\[p_{i}\mapsto p_{i}^{\prime}=(g\circ f)(p_{i})\,, \tag{2}\]
\[f(p_{i})=p_{i}w_{0}+\sum_{j=1}^{N_{\rm neigh}(i)}p_{k_{j}(i)}w_{j}\,, \tag{3}\]
\[g(x)\equiv\max(0,x)\,, \tag{4}\]

where \(k_{j}(i)\), \(j=1,\dots,\,N_{\rm neigh}(i)\) runs over the indices of all neighboring pixels in the HEALPix map (which can be either 7 or 8, depending on the pixel location). Then, an "average pooling" degradation layer reduces the map resolution from \(N_{\rm side}\) to \(N_{\rm side}/2\), assigning to every low-resolution pixel the average of its four children at the next higher resolution.

Figure 1: SRoll2 data products of the _Planck_ \(Q\) and \(U\) maps at frequencies 100 GHz and 143 GHz, post component separation, used in this work, displayed in Galactic coordinates.

Figure 2: Smoothed version of the SRoll2 sky masks at sky fractions 50% and 60% used in this paper, displayed in Galactic coordinates.

Figure 3: Schematic of the convolutional layers of the neural network used in this paper. This represents the first part of the architecture, performing image filtering.

Up to this point, the application of the four CNN building blocks transforms the array of input maps at \(N_{\rm side}=16\) (or \(N_{\rm pix}=3072\) pixels) into an array of 32 filtered maps at \(N_{\rm side}=1\) (or \(N_{\rm pix}=12\) pixels). This represents the image filtering part, _i.e._, the transformation of the original inputs into 32 maximally compressed feature maps that, ideally, retain all the desired (cosmological) information. We still need to "learn" the mapping from these feature maps to the output numbers \(\tau_{\rm NN}\) and \(\sigma_{\rm NN}(\tau)\), described in the following section.

Compression is done by two fully connected (or dense) layers. A fully connected layer is a linear map from an \(M\)-dimensional input feature space to an \(N\)-dimensional output feature space, and is commonly used for data compression (in which case \(N<M\)). A fully connected layer of output dimension \(N\) is said to contain \(N\) neurons, associated with a vector of trainable weights that parameterize the layer. In each of its \(N\) neurons, a fully connected layer linearly contracts the input vector \(x\) of length \(M\) to a number by means of a weights vector \(v^{(i^{\prime})}\),

\[x_{i}\mapsto x^{\prime}_{i^{\prime}}=\sum_{j=1}^{M}v_{j}^{(i^{\prime})}x_{j}\,. \tag{5}\]
The second part of our CNN, the data compression block, is shown in Figure 4 and contains a dropout and flattening layer, a fully connected layer with 48 neurons, a ReLU nonlinear activation layer, concluded by a final fully connected layer with 2 neurons that outputs \(\tau_{\rm NN}\) and \(\sigma_{\rm NN}(\tau)\) as described in the following section. The dropout layer acts as a selective off switch for parts of the following fully connected layer, deactivating at random 20% of its 48 neurons at a time, thus mitigating the overfitting problem common for neural networks (Srivastava et al., 2014). With the described architecture, the total number of weights that need to be optimized during training is \(N_{\rm w}\approx 4.7\times 10^{4}\). ### Training When we train a neural network, we effectively tune its many free parameters until the task at hand, _e.g._ estimating parameters from maps, is optimally performed on the training data. In the following, we describe this procedure in detail. At each training step we pass one batch of \(N_{\rm batch}=32\) training simulations through the network, meaning we simultaneously consider the results from all simulations that belong to a single batch. Input maps need to be masked with the same mask that is used in the analysis. The output values of the two neurons of the final layer, representing the estimated parameters \(\tau_{{\rm NN}j}\), \(\sigma_{\rm NN}(\tau)_{j}\) (\(j=1,\,\dots,\,N_{\rm batch}\)), as well as the truth values \(\tau_{j}\), are then inserted into the loss function (Jeffrey & Wandelt, 2020) \[\mathcal{L}\left[\tau,\,(\tau_{{\rm NN}j},\,\sigma_{\rm NN}(\tau)_{j})\right]=\sum_{j=1}^{N_{\rm batch}}\left[(\tau_{j}-\tau_{{\rm NN}j})^{2}+\left((\tau_{j}-\tau_{{\rm NN}j})^{2}-\sigma_{\rm NN}(\tau)_{j}^{2}\right)^{2}\right]\;. \tag{6}\] We then update all \(N_{\rm w}\) network parameters subject to minimizing this loss function. To do so, we use the Adam optimizer, a widely used stochastic gradient descent algorithm implemented in keras, for which we find an initial learning rate of \(LR=10^{-3}\) and first- and second-moment exponential decay rates \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\) to be appropriate. Repeating the described procedure for the entire training set of size \(N_{\rm train}=190,000\) makes up one training epoch\({}^{4}\). We train for a maximum of 45 epochs, using the keras callback function ReduceLROnPlateau to reduce the learning rate by a factor of 0.1 if the loss of the validation set has not improved over the course of 5 epochs. Moreover, the callback function EarlyStopping allows for training to stop after a minimum of 20 epochs without improvement in the validation loss. Using both of these callback functions allows for faster convergence and suppresses unwanted oscillations in the loss function during the training phase. Training on a 32-core Intel Xeon CPU node takes about 3 hours, while training on 8 NVIDIA Tesla V100 GPU cores takes about 30 minutes. Footnote 4: Among the total 200,000 simulations generated as described in Section 2, 190,000 are actually used to optimize the NN's parameters, while the remaining 10,000 are used as a validation set.
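For concreteness, the moments loss of Equation (6) and the callbacks described above can be sketched in keras as follows; this is a schematic re-implementation, not the authors' code, and all variable names are illustrative:

```python
# Schematic keras/TensorFlow version of the moments loss (Eq. 6) and the
# training configuration described in the text. Names are illustrative.
import tensorflow as tf
from tensorflow import keras

def moments_loss(tau_true, y_pred):
    # y_pred has two columns: the learnt mean tau_NN and the learnt sigma_NN(tau)
    tau_nn, sigma_nn = y_pred[:, 0], y_pred[:, 1]
    tau_true = tf.squeeze(tau_true)
    sq_err = tf.square(tau_true - tau_nn)
    # Sum over the batch, as in Eq. (6)
    return tf.reduce_sum(sq_err + tf.square(sq_err - tf.square(sigma_nn)))

callbacks = [
    keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.1, patience=5),
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=20),
]

# model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3,
#                                               beta_1=0.9, beta_2=0.999),
#               loss=moments_loss)
# model.fit(x_train, tau_train, batch_size=32, epochs=45,
#           validation_data=(x_val, tau_val), callbacks=callbacks)
```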
### Testing After training, the neural network parameters are fixed and the model building is in principle completed. However, trained NNs may not perform well for two main reasons: firstly, the loss function may not have converged to its global minimum, causing model predictions to be biased. Secondly, the model may overfit the input, meaning that the network learns the training set's features with excellent accuracy, but fails to make correct predictions on similar, independent test sets. Both problems are addressed by testing the model's predictions on simulations that have not been fed into the network before. We use 2\(\times\)3 test sets of 10,000 sky simulations with fixed input \(\tau=\{0.05,\,0.06,\,0.07\}\), described in detail in Sections 2.2 and 2.3. Figure 4: Schematic of the fully connected layers of the neural network used in this paper. This represents the second part of the architecture, performing data compression. ## 4 Results on simulations Before arriving at the estimation of \(\tau\) from the _Planck_ SRoll2 data, we considered several setups to train our CNN model, increasing the complexity of the training simulations. This allowed us to get valuable insight into the learning process. In particular, we start by training the CNN on a set of simulations including CMB+Gaussian noise (see Section 2.2), either on a single frequency channel, or on two channels. We then move to simulations including non-Gaussian systematic effects (_i.e._ SRoll2 simulations), trying different possible strategies to obtain unbiased \(\tau\) estimates in the presence of complex residuals. Only once we achieve this do we apply our trained model to real _Planck_ data. In all the cases presented in this Section, the CNNs are trained and tested considering the \(f_{\rm sky}=0.5\) mask as our reference (see Figure 2). A summary of all the analysis cases, along with their corresponding results tables and figures, can be found in Table 1. ### Gaussian training As mentioned above, we first test the ability of our CNN to estimate the value of \(\tau\) considering only Gaussian noise. These simulations have noise amplitudes and pixel-pixel correlations directly estimated from _Planck_ maps, and therefore serve as a good description of the Gaussian noise present in real data. At the same time, they lack realism, since they do not include the non-Gaussian residual systematic effects and contamination due to Galactic foregrounds, both known to be present in the _Planck_ SRoll2 maps. We therefore expect these models (which we refer to as "Gaussian models") to induce a bias on \(\tau\) when applied to the more realistic SRoll2 simulations, or to real _Planck_ data. #### 4.1.1 Single channel We begin by training our CNN on Stokes \(Q\) and \(U\) maps with Gaussian _Planck_-like noise and CMB at 143 GHz only, thus feeding \(N_{\rm map}=2\) maps to the network. On the left-hand side of Table 2, we show the results of testing \(N_{\rm sims}=10,000\) Gaussian simulations of CMB and noise generated with fiducial \(\tau=0.05\), \(0.06\) and \(0.07\), respectively. The average learnt mean posterior values \(\overline{\tau_{\rm NN}}\) are close to unbiased and deviate at the \(0.2\sigma\) level. The average learnt posterior standard deviations \(\sigma_{\rm NN}(\tau)\) are within 5% agreement with the sample scatter across simulations, \(\sigma(\tau_{\rm NN})\). To assess the performance of the Gaussian model also on non-Gaussian _Planck_-like maps, we tested this model on 10,000 SRoll2 simulations generated with fiducial \(\tau=0.06\) (see Section 2.3). As illustrated in the upper right panel of Figure 5, this leads to a \(>1\sigma\) bias on \(\tau_{\rm NN}\).
These tests on a single frequency channel leave us with two conclusions: on the one hand, CNNs are able to correctly retrieve \(\tau\) and its statistical uncertainty from a single _Planck_-like simulation of the 143 GHz channel containing correlated Gaussian noise. On the other hand, systematic effects present in the _Planck_ SRoll2 simulations bias the single-channel CNN inference, as expected. To improve our results, we add another frequency channel to the inference pipeline, seeking to mitigate this bias. We expect that combining two channels should lead to a lower error bar and a lower bias on the SRoll2 simulations, in a similar way as cross-spectra achieve a lower noise bias than auto-spectra. #### 4.1.2 Two channels As a second test, we add the HFI channel at 100 GHz to the training and testing procedures, simulated as CMB plus the corresponding Gaussian correlated noise, so that \(N_{\rm map}=4\) maps are fed into the neural network. The results from testing on Gaussian noise are shown in Table 2. We note two positive effects: firstly, the small bias observed for Gaussian noise on a single channel is further reduced to below 1% of a standard deviation. Secondly, the learnt \(\sigma_{\rm NN}(\tau)\) decreases by more than 10% when training on two frequency channels. Meanwhile, the prediction of the posterior standard deviation stays within 5% of the sample standard deviation of the inferred \(\tau_{\rm NN}\). The same results are visualized in Figure 5 for fiducial \(\tau=0.06\), showing significant improvement of the two-channel CNN inference in the lower panels with respect to the one-channel results (upper panels). We proceed to test this two-channel Gaussian model on the SRoll2 simulations. As shown in the right panel of Figure 5, for fiducial \(\tau=0.06\), the addition of a second channel allows for a significant reduction of the systematics-related bias in \(\tau_{\rm NN}\) and leads to a better statistical constraint. This leads us to conclude that CNNs are able to recognize common features across channels, combining the information to reduce the statistical uncertainty and to efficiently ignore uncorrelated systematic effects. The corresponding quantitative results, for all three \(\tau\) values used during testing, are found in Table 3: adding a second channel in the Gaussian training model leads to improved results on the SRoll2 test simulations for all considered values of \(\tau\). However, a residual bias is still present, especially when the CMB signal is smallest, _i.e._ for \(\tau=0.05\). Moreover, we notice that, when applied to the SRoll2 test maps, the models trained on Gaussian simulations return values of \(\sigma_{\rm NN}(\tau)\) not in agreement with the actual spread of estimates \(\sigma(\tau_{\rm NN})\), with the latter being up to \(\sim 25\%\) larger. This implies that the learnt error is not accurate in this case, and therefore cannot be used to describe the uncertainties of our inferred \(\tau\) values on SRoll2 maps. We will address error bars in Section 4.4. ### Comparison with Bayesian inference from cross-QML power spectrum estimates In this section we compare NN inference results with those coming from a standard Bayesian approach applied to \(E\)-mode power spectra.
In particular, we consider quadratic Maximum Likelihood (QML) estimates (_e.g._, Tegmark & de Oliveira-Costa 2001) of the 100\(\times\)143 GHz \(EE\) cross-spectrum, and take posterior samples using the well-known power spectrum likelihood approximation introduced by Hamimeche & Lewis (2008) (in the following, HL likelihood). The HL likelihood provides a good approximation to the non-Gaussian distribution of the exact power spectrum likelihood, which markedly differs from Gaussianity at the low multipoles \(2\leq\ell\lesssim 30\) most relevant for constraining \(\tau\). Evaluating the HL likelihood requires a power spectrum covariance matrix, which we obtain directly from simulations of Gaussian noise and CMB realized with the same \(\tau\) values used for generating the test simulations (Section 2). For the HL likelihood we assume a theoretical model of the CMB \(E\)-modes, computed with CAMB, considering the multipole range \(2\leq\ell\leq 30\), and sampling only for the \(\tau\) parameter, keeping \(10^{9}A_{s}e^{-2\tau}=1.884\) fixed. Our final results are the best-fit value \(\tau_{\rm HL}\), the standard deviation \(\sigma_{\rm HL}(\tau)\) of the posterior, and the scatter \(\sigma(\tau_{\rm HL})\) computed from the set of test simulations. We run the HL likelihood on \(3\times 10,000\) Gaussian sky simulations with input \(\tau=0.05\), \(0.06\) and \(0.07\). As shown in the last three columns of Table 2, we find unbiased best-fit results with average posterior standard deviation \(\overline{\sigma_{\rm HL}(\tau)}\) and best-fit parameter scatter \(\sigma(\tau_{\rm HL})\) of \(\sim 0.0048\). We notice that the uncertainties derived from sampling the HL likelihood are \(\sim 20\%\) smaller than the ones from NN estimates. Part of the scatter of \(\tau_{\rm NN}\) comes from the intrinsic stochastic nature of the training process, and could be reduced by taking the average over multiple NN models (as discussed in Section 4.4). Nevertheless, these results reveal that although we are able to retrieve unbiased \(\tau\) values with NNs from Gaussian simulations, our estimator does not achieve minimum variance. Further development of the method, including an optimization of the convolution algorithm on the sphere, the NN architecture and the training procedure, is required and will be explored with the aim of improving the estimator's variance. \begin{table} \begin{tabular}{l c c c} \hline \hline & Gaussian test simulations & SRoll2 test simulations & _Planck_ data \\ \hline Gaussian NN (1 channel) & Table 2; Figure 5 & Table 3; Figure 5 & \\ Gaussian NN (2 channels) & Table 2; Figures 5, 8 & Table 3; Figures 5, 8 & Table 5; Figures 9, 10 \\ HL likelihood & Table 2 & Table 4 & Table 5; Figures 9, 10 \\ SRoll2 training & & Table 4; Figure 8 & Table 5; Figure 10 \\ Empirical likelihood & & & Table 5; Figure 10 \\ \hline \hline \end{tabular} \end{table} Table 1: References to results tables and figures in this paper.
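To make the HL construction concrete, a schematic single-spectrum version can be sketched in a few lines; this is our illustrative re-implementation (not the pipeline used in this work), assuming `cl_hat`, `cl_fid`, the simulation-based inverse covariance `Minv`, and a theory function `cl_model` are provided:

```python
# Schematic single-spectrum HL likelihood (Hamimeche & Lewis 2008) for an
# EE spectrum over 2 <= ell <= 30. cl_hat: measured C_ell; cl_fid: fiducial
# C_ell; Minv: inverse of the ell-ell covariance estimated from simulations.
import numpy as np

def g(x):
    # HL transformation of the spectrum ratio; assumes x > 0 (in practice,
    # cross-spectra can fluctuate negative, which requires extra care/offsets)
    return np.sign(x - 1.0) * np.sqrt(2.0 * (x - np.log(x) - 1.0))

def minus_two_lnL(tau, cl_hat, cl_fid, Minv, cl_model):
    """cl_model(tau) returns the theory EE spectrum for a given tau."""
    x = cl_hat / cl_model(tau)
    X = g(x) * cl_fid               # transformed data vector
    return X @ Minv @ X

# The tau posterior can then be mapped over a grid, e.g.:
# taus = np.linspace(0.02, 0.12, 200)
# chi2 = np.array([minus_two_lnL(t, cl_hat, cl_fid, Minv, cl_model) for t in taus])
# post = np.exp(-0.5 * (chi2 - chi2.min()))
```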
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{9}{c}{Test on Gaussian simulations} \\ \hline & \multicolumn{3}{c}{143 GHz} & \multicolumn{3}{c}{143+100 GHz} & \multicolumn{3}{c}{143\(\times\)100 GHz} \\ & \multicolumn{3}{c}{Gaussian training} & \multicolumn{3}{c}{Gaussian training} & \multicolumn{3}{c}{HL likelihood} \\ \hline fiducial \(\tau\) & \(\overline{\tau_{\rm NN}}\) & \(\overline{\sigma_{\rm NN}(\tau)}\) & \(\sigma(\tau_{\rm NN})\) & \(\overline{\tau_{\rm NN}}\) & \(\overline{\sigma_{\rm NN}(\tau)}\) & \(\sigma(\tau_{\rm NN})\) & \(\overline{\tau_{\rm HL}}\) & \(\overline{\sigma_{\rm HL}(\tau)}\) & \(\sigma(\tau_{\rm HL})\) \\ 0.05 & 0.0508 & 0.0059 & 0.0066 & 0.0503 & 0.0054 & 0.0057 & 0.0496 & 0.0046 & 0.0047 \\ 0.06 & 0.0608 & 0.0065 & 0.0067 & 0.0600 & 0.0056 & 0.0059 & 0.0596 & 0.0048 & 0.0048 \\ 0.07 & 0.0712 & 0.0067 & 0.0070 & 0.0702 & 0.0057 & 0.0063 & 0.0697 & 0.0048 & 0.0049 \\ \hline \hline \end{tabular} \end{table} Table 2: \(\tau\) predictions from \(10,000\) Gaussian CMB + noise simulations generated with three different, fixed fiducial \(\tau\) values. We show Gaussian training results on one and two channels, respectively, as well as power spectrum likelihood results. Shown are the posterior mean \(\tau_{\rm NN/HL}\) and standard deviation \(\sigma_{\rm NN/HL}(\tau)\) averaged over all simulations, as well as the scatter of \(\tau_{\rm NN/HL}\) over all simulations. Figure 5: Predictions of \(\tau_{\rm NN}\) from \(10,000\) simulations with input \(\tau=0.06\), containing either CMB with Gaussian noise (_left panels_) or CMB with SRoll2 noise + systematics (_right panels_). The two rows denote different CNN models trained on CMB with Gaussian noise on a single frequency channel (_top_), on two frequency channels (_bottom_). \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{9}{c}{Test on SRoll2 simulations} \\ \hline & \multicolumn{3}{c}{143 GHz} & \multicolumn{3}{c}{143+100 GHz} & \multicolumn{3}{c}{143\(\times\)100 GHz} \\ & \multicolumn{3}{c}{Gaussian training} & \multicolumn{3}{c}{Gaussian training} & \multicolumn{3}{c}{HL likelihood} \\ \hline fiducial \(\tau\) & \(\overline{\tau_{\rm NN}}\) & \(\overline{\sigma_{\rm NN}(\tau)}\) & \(\sigma(\tau_{\rm NN})\) & \(\overline{\tau_{\rm NN}}\) & \(\overline{\sigma_{\rm NN}(\tau)}\) & \(\sigma(\tau_{\rm NN})\) & \(\overline{\tau_{\rm HL}}\) & \(\overline{\sigma_{\rm HL}(\tau)}\) & \(\sigma(\tau_{\rm HL})\) \\ 0.05 & 0.0669 & 0.0065 & 0.0074 & 0.0536 & 0.0055 & 0.0067 & 0.0478 & 0.0050 & 0.0079 \\ 0.06 & 0.0738 & 0.0067 & 0.0076 & 0.0609 & 0.0056 & 0.0070 & 0.0585 & 0.0050 & 0.0073 \\ 0.07 & 0.0813 & 0.0069 & 0.0074 & 0.0690 & 0.0057 & 0.0071 & 0.0688 & 0.0049 & 0.0069 \\ \hline \hline \end{tabular} \end{table} Table 3: Same as Table 2, but testing on CMB and SRoll2 simulations instead of CMB and Gaussian noise simulations. In addition to Gaussian simulations, we apply the cross-spectrum inference pipeline to \(3\times 10,000\) SRoll2 simulations and show the corresponding results in the last three columns of Table 3. We stress that the HL likelihood contains the same covariance matrix as before, calculated from Gaussian simulations. This is done in analogy with the case of Gaussian NN training applied to SRoll2 simulations, therefore neglecting the presence of systematic effects.
We retrieve biased estimates of \(\tau\), confirming our expectation that the power spectrum model implemented in the likelihood is an inaccurate representation of the SRoll2 simulations, which include spurious non-Gaussian signals. Interestingly, this affects the NN and HL estimates in different ways, leading to biases in opposite directions for \(\tau=0.05\) and \(0.06\). To study the relative behavior of the two estimators, it is instructive to look at a one-by-one comparison of the NN and HL results on the same 10,000 test simulations, as presented in Figure 6 for \(\tau=0.06\). Scatter plots of the estimated \(\tau_{\rm NN}\) versus \(\tau_{\rm HL}\) on Gaussian simulations and on SRoll2 simulations are shown in bright red and dark green, respectively. In the Gaussian case the correlation of the estimated \(\tau\) values is at a level of \(\sim 76\%\), while for SRoll2 it is at \(63\%\). Therefore systematic effects, present on the maps and partially unaccounted for in the estimates, decrease the correlation and increase the differences between \(\tau_{\rm HL}\) and \(\tau_{\rm NN}\) when going from Gaussian to SRoll2 test simulations, indicating that the two estimators are impacted differently by spurious non-Gaussian signals. ### Training including systematic effects As previously seen, the two-channel Gaussian training allows us to improve our \(\tau\) estimates on SRoll2 simulations. However, the persistence of residual bias motivates us to move forward in the training setup and include systematic effects in the training simulations. Our goal is to achieve fully unbiased results as a necessary condition to apply our NN models to real _Planck_ maps. In this section we explore two possible ways of including systematics in our NN-based models: training on SRoll2 simulations from the very beginning, and performing a SRoll2 retraining update on previously trained Gaussian networks. #### 4.3.1 Training on SRoll2 simulations The SRoll2 simulations (Delouis et al., 2019) are designed to accurately describe _Planck_'s Gaussian noise component and non-Gaussian polarization systematics. Motivated by this, we train a CNN from the start on the 200,000 SRoll2 training simulations described in Section 2.3. As usual, we use 190,000 simulations to perform weight optimization, and 10,000 for validation. We train on _Planck_'s 143 GHz and 100 GHz channels simultaneously and use the same hyper-parameter values as for the Gaussian training, described in Section 3.2. We stress that even though artificially augmented by forming new channel pair combinations, the SRoll2 training set is essentially built from 400 sampled skies only. The testing is performed on \(3\times 10,000\) SRoll2 simulations with fixed \(\tau=0.05\), \(0.06\) and \(0.07\), generated from the remaining 100 independent realizations that were not seen by the CNN during training. Results obtained with this approach are displayed in Table 4. For the three input \(\tau\) values we find a positive bias of \(\sim 0.4\sigma\). For \(\tau=0.06\), the average learnt error is \(\overline{\sigma_{\rm NN}(\tau)}=0.0062\), slightly larger than for the two-channel Gaussian training but smaller than the scatter \(\sigma(\tau_{\rm NN})=0.0070\), similar to what we see for both the Gaussian CNN and the HL likelihood inference (see Table 3). As in the case of Gaussian NN training, the learnt error does not agree with the SRoll2 simulation scatter, and therefore cannot be used to infer the statistical uncertainty on \(\tau_{\rm NN}\).
We ascribe the main reason for the bias on \(\tau\) to overfitting. Figure 7 illustrates the problem: we compare the \(\tau\) predictions on a set of 10,000 test simulations with the ones coming from 10,000 training simulations. The results show a bias and standard deviation of \(\Delta\tau=0.0023\pm 0.0069\) for the test set, while the training set is unbiased, with \(\Delta\tau=0.0001\pm 0.0068\). This is clear evidence for overfitting: while the model performs well on the 400 SRoll2 simulations that the training set is built from, these are not enough to generalize to the remaining 100 SRoll2 simulations used to build the test set, leading to the observed bias on \(\tau\) in the latter case. #### 4.3.2 Retraining update with SRoll2 simulations We recognize the bias described above as a critical problem that needs to be addressed. The obvious option, training on a considerably higher number of simulations, is unavailable due to the limited number of published SRoll2 realizations. Therefore, we apply a transfer learning technique to inform our previously trained Gaussian networks about SRoll2 systematics. Figure 6: Per-simulation comparison between the HL likelihood estimate \(\tau_{\rm HL}\) and the NN estimate \(\tau_{\rm NN}\) for a test set of 10,000 simulations realized with \(\tau=0.06\). Gaussian simulations are shown in bright red, SRoll2 simulations in dark green. The correlation coefficients between the two estimators are 76% (Gaussian) and 63% (SRoll2). In the previous sections we demonstrated that our Gaussian CNN model is not affected by overfitting issues and, if trained on two channels, performs reasonably well even on SRoll2 simulations. This motivates us to leverage the existing results on Gaussian networks to solve the overfitting issue with as few changes as possible. To this end, we choose the approach of retraining the two-channel Gaussian model on the full set of SRoll2 training simulations, while targeting two specific goals: 1. The retrained CNN should learn to extract information on the systematic effects present in the SRoll2 simulations, and update its CNN weights just enough to achieve fully unbiased results on the SRoll2 training set. 2. At the same time, we want to ensure that the information already learnt is not destroyed during the new training phase (an issue sometimes referred to as catastrophic forgetting, see _e.g._ Kirkpatrick et al. (2017); Ramasesh et al. (2021)), thus avoiding going back to the overfitting situation described in the previous section. We achieve this by performing what we call "minimal retraining": we choose the hyper-parameters of the NN such that we obtain unbiased results on the SRoll2 test simulations while making minimal changes to the original network. We find an optimal setup by setting the number of retraining epochs to 5 while choosing a small learning rate of \(LR=10^{-7}\), without making any additional changes to the original network architecture; a schematic version of this step is sketched below.
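A minimal keras sketch of this retraining step, assuming a previously trained `gaussian_model`, the `moments_loss` defined earlier, and SRoll2 training arrays (all names illustrative), might read:

```python
# Schematic "minimal retraining" step: reuse the Gaussian-trained network
# and continue training on SRoll2 simulations for a few epochs with a
# very small learning rate, leaving the architecture untouched.
from tensorflow import keras

gaussian_model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-7),  # tiny LR: minimal weight updates
    loss=moments_loss,                                    # same moments loss as before
)
gaussian_model.fit(
    x_sroll2_train, tau_sroll2_train,
    batch_size=32, epochs=5,                              # only 5 retraining epochs
    validation_data=(x_sroll2_val, tau_sroll2_val),
)
```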
The right panel of Figure 7, in complete analogy to the left panel, compares the distribution of \(\Delta\tau\) from the SRoll2-retrained model on training simulations (black contours) and test simulations (green filled histogram). We find both histograms in good agreement, indicating that unlike the SRoll2-trained model, the _retrained_ model does not suffer from overfitting, thus achieving goal (ii) defined above. Table 4 on the right-hand side lists the results of the SRoll2-retrained model on SRoll2 test simulations. We find \(\overline{\tau_{\rm NN}}=0.0508\), \(0.0606\) and \(0.0707\) for the respective input values of \(\tau=0.05\), \(0.06\) and \(0.07\). This amounts to a bias below \(\Delta\tau=8\times 10^{-4}\), or \(\lesssim 0.1\sigma\). In Figure 8 we show a comparison of the results on SRoll2 test sets obtained by Gaussian versus SRoll2-retrained CNNs. The reduction of the bias is evident, in particular for \(\tau=0.05\). Therefore, we choose the retrained approach as our baseline model to estimate \(\tau\) on real _Planck_ data. At the same time, this approach brings an increase in \(\sigma(\tau_{\rm NN})\), an effect not seen with the SRoll2 training procedure described in Section 4.3.1. This could be the consequence of the typical variance-bias trade-off observed between statistical estimators: with minimal retraining we are able to achieve unbiased estimates (goal (i) above) at the cost of a larger \(\sigma(\tau_{\rm NN})\). In addition to that, we are still unable to retrieve values of the learnt \(\sigma_{\rm NN}(\tau)\) that agree with \(\sigma(\tau_{\rm NN})\) for SRoll2 simulations (and therefore also for _Planck_ data). We conclude that, differently from what happens in the Gaussian model tested on Gaussian simulations, we cannot use the learnt error as an estimate of the uncertainty of the inferred \(\tau_{\rm NN}\). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multicolumn{7}{c}{Test on SRoll2 simulations} \\ \hline & \multicolumn{3}{c}{143+100 GHz} & \multicolumn{3}{c}{143+100 GHz} \\ & \multicolumn{3}{c}{SRoll2 training} & \multicolumn{3}{c}{SRoll2 retraining} \\ \hline fiducial \(\tau\) & \(\overline{\tau_{\rm NN}}\) & \(\overline{\sigma_{\rm NN}(\tau)}\) & \(\sigma(\tau_{\rm NN})\) & \(\overline{\tau_{\rm NN}}\) & \(\overline{\sigma_{\rm NN}(\tau)}\) & \(\sigma(\tau_{\rm NN})\) \\ 0.05 & 0.0526 & 0.0059 & 0.0066 & 0.0508 & 0.0077 & 0.0091 \\ 0.06 & 0.0622 & 0.0062 & 0.0070 & 0.0606 & 0.0079 & 0.0088 \\ 0.07 & 0.0722 & 0.0064 & 0.0070 & 0.0707 & 0.0081 & 0.0087 \\ \hline \hline \end{tabular} \end{table} Table 4: \(\tau\) predictions from \(10,000\) CMB + SRoll2 test simulations generated with three different fiducial \(\tau\) values. We show results using two frequency channels, either training on SRoll2 from the start, or retraining on SRoll2 maps. Displayed are the average posterior mean \(\overline{\tau_{\rm NN}}\), the average predicted standard deviation \(\overline{\sigma_{\rm NN}(\tau)}\) and the scatter \(\sigma(\tau_{\rm NN})\) calculated across the test simulations. Figure 8: Predictions of \(\tau_{\rm NN}\) on \(10,000\) SRoll2 simulations with input \(\tau=0.05\), \(0.06\) and \(0.07\) (first, second, third row, respectively). The two columns display two different NN models trained on two channels of Gaussian simulations (left panels), and retrained on two channels of SRoll2 simulations (right panels). All results are for \(f_{\rm sky}=0.5\). Figure 7: Neural network accuracy in predicting the true \(\tau\) input from \(10,000\) simulations. Step-filled histograms show the results on unseen test simulations, black outlines show the results on a subset of the actual training simulations. We compare a network exclusively trained on SRoll2 simulations (_left panel_) with a Gaussian network retrained on SRoll2 simulations (_right panel_). ### NN errors The loss function in Equation (6) provides an estimate for the posterior standard deviation \(\sigma_{\rm NN}(\tau)\).
However, as seen in the previous sections, the learnt \(\sigma_{\rm NN}(\tau)\) tends to underestimate the actual spread of the inferred values of \(\tau_{\rm NN}\) on test set maps, especially in the case of SRoll2 maps. We therefore proceed to empirically estimate our errors from simulations. In doing so, we need to make an additional consideration: training a NN is an intrinsically stochastic procedure that relies upon the use of a stochastic optimizer, randomly initialized NN weights and random realizations of the maps in the training set. As a result, each NN prediction can be described as the sum of two random variables: \(\tau_{\rm NN}=\tau+\Delta_{\rm NN}\), and therefore \[\sigma^{2}(\tau_{\rm NN})=\sigma^{2}(\tau)+\sigma^{2}(\Delta_{\rm NN})+2\,{\rm Cov}(\tau,\Delta_{\rm NN})\,, \tag{7}\] where the first source of uncertainty, \(\sigma(\tau)\), is due to noise and cosmic variance of test simulations or observed data, while the second, \(\sigma(\Delta_{\rm NN})\), represents the stochasticity of the NN estimator. These two terms are sometimes referred to as _aleatory_ and _epistemic_ error, respectively (Hullermeier and Waegeman, 2021). We can measure the uncertainty related to the NN stochasticity by training an ensemble of models, all based on the same architecture and hyper-parameters, but with different initial weights and training set realizations. Our estimate of \(\sigma(\Delta_{\rm NN})\) is given by the standard deviation of the models' \(\tau\) predictions when tested on a single test map. In practice, we define the "model ID" of a trained NN as the fixed random seed controlling the initialization of network weights. We generate a new training set of simulations whose specific realizations (of CMB, noise and potentially systematics) are fully determined by the model ID. Following this recipe, we create 100 independent Gaussian training sets and use them to train 100 Gaussian networks. Repeating this procedure with 100 SRoll2 training sets, we retrain the set of 100 Gaussian networks to obtain 100 SRoll2-retrained networks. Using a single test map with input \(\tau=0.06\), we find \(\sigma(\Delta_{\rm NN})\simeq 0.0024\) for Gaussian NN models tested on Gaussian maps, and \(\sigma(\Delta_{\rm NN})\simeq 0.0034\) for minimally retrained NN models tested on SRoll2. In both cases this represents about 40% of the corresponding value of \(\sigma(\tau_{\rm NN})\) reported in Tables 2 and 4, respectively. We can reduce the impact of the NN stochasticity by taking, for each test map, the ensemble average of the \(\tau\) estimates over the 100 trained NNs. By doing so, for the case with \(f_{\rm sky}=0.5\) and input \(\tau=0.06\), we find \(\sigma(\tau_{\rm NN})\simeq 0.0054\) for Gaussian models applied to Gaussian maps and \(\sigma(\tau_{\rm NN})\simeq 0.0083\) for retrained models applied to SRoll2 simulations. We also evaluate the correlation coefficient between the predictions of pairs of models \((j,k)\), tested on the same 10,000 simulations, for both Gaussian and SRoll2 training and testing, respectively. In both cases, we find \(\rho_{jk}\simeq 0.84\), in agreement with what is expected if Equation (7) holds and the models' epistemic errors are uncorrelated, \({\rm Cov}(\Delta_{\rm NN}^{j},\Delta_{\rm NN}^{k})=\delta_{jk}^{\mathcal{K}}\sigma^{2}(\Delta_{\rm NN})\).
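Schematically, this ensemble averaging can be written as follows (a sketch with illustrative names; `models` is the list of 100 trained networks and `test_maps` an array of masked input maps):

```python
# Sketch of the ensemble average over independently trained models: the
# per-map prediction is the mean of tau_NN over all models, which
# suppresses the epistemic scatter sigma(Delta_NN) of a single network.
import numpy as np

def ensemble_tau(models, test_maps):
    # Each model outputs [tau_NN, sigma_NN(tau)] per map; keep column 0
    preds = np.stack([m.predict(test_maps)[:, 0] for m in models])  # (n_models, n_maps)
    return preds.mean(axis=0)

# Epistemic scatter estimated from a single test map: the standard
# deviation of the per-model predictions on that one map, e.g.
# sigma_delta = np.stack([m.predict(one_map)[:, 0] for m in models]).std()
```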
In the following section we apply our CNN models to _Planck_ maps to infer the value of \(\tau\) from data, estimating its uncertainty from simulations and using the ensemble average over 100 trained models to reduce the impact of the NN stochasticity. ## 5 Results on _Planck_ data As shown in Sections 4.3.2 and 4.4, by retraining on the SRoll2 simulations, we are able to obtain a CNN-based model that yields unbiased results on unseen SRoll2 test simulations generated with fixed \(\tau\in\{0.05,\,0.06,\,0.07\}\). Having thus confirmed the robustness of our method, we now move to real _Planck_ data and proceed to predict \(\tau\) from the 100 and 143 GHz SRoll2 HFI maps. Our baseline \(\tau\) estimate is obtained by taking the average of the inferred values from the 100 minimally retrained NNs applied to _Planck_ data for a sky mask with \(f_{\rm sky}=0.5\), resulting in a mean estimate of \(\tau_{\rm NN}\simeq 0.058\). Figure 9 shows the obtained \(\tau\) values for each of these NN models. Following the conclusions of the previous sections, since the learnt \(\sigma_{\rm NN}(\tau)\) is inadequate as an error prediction, we estimate the uncertainty from simulations. In practice, we generate a set of 10,000 SRoll2 simulations realized with \(\tau=0.058\) and average the \(\tau_{\rm NN}\) estimates over 100 networks. Afterwards, we compute the standard deviation over the 10,000 simulations. Our final inference on _Planck_ maps in this baseline case results in: \[\tau_{\rm NN}=0.0579\pm 0.0082\quad(\textit{Planck}\,100+143\,{\rm GHz})\,. \tag{8}\] This value is in very good agreement with the \(\tau\) estimates obtained with an empirical likelihood based on cross-QML power spectra, presented in Pagano et al. (2020) (hereafter P2020), applied to the same _Planck_ maps and constructed from the same SRoll2 simulations that we use in this work. In particular, P2020 obtained \(\tau=0.0566\pm 0.0062\) on the \(f_{\rm sky}=0.5\) sky mask. We notice that the uncertainty from our NN method is \(\sim 30\%\) larger. As previously described, this is due to the fact that our NN estimator does not reach minimum variance and that we rely on the retraining strategy, which leads to larger errors. However, the fact that we obtain a \(\tau\) value in agreement with the one reported in the literature, considering that we are using an inherently different inference approach based, for the first time, on NNs, represents a remarkable result of this work. We also apply the Gaussian NN model to _Planck_ data, deriving the best-fit parameter value and error bars analogously. Note that, although the Gaussian model leads to results that are mildly biased by up to \(\sim 0.5\sigma\) when applied to SRoll2 maps with low CMB input signal (\(\tau=0.05\)), the bias is below \(0.15\sigma\) when \(\tau=0.06\), as displayed in the 5th column of Table 3. In this case, using the same \(f_{\rm sky}=0.5\) mask, we obtain \(\tau_{\rm NN}=0.0588\pm 0.0063\). The statistical uncertainty is lower for this second method, as we omit retraining on systematics, and similar to the one obtained from the empirical likelihood presented in P2020. Lastly, as a robustness test, we apply the same methods to a second sky mask, with a larger sky coverage of \(f_{\rm sky}=0.6\). Results on parameter estimation are stable for both the retrained and the Gaussian model, while uncertainties are reduced. The NN predictions of the single models on \(f_{\rm sky}=0.6\) are displayed in Figure 9. A summary of our results on _Planck_ maps is shown in Figure 10 and Table 5.
## 6 Conclusions In this paper, we present the first cosmological parameter inference on _Planck_'s CMB polarization maps that is performed entirely by neural networks. We estimate the optical depth to reionization, \(\tau\), from the SRoll2 low resolution polarization maps of _Planck_-HFI at 100 and 143 GHz. These maps are known to contain a significant level of residual systematic effects at large angular scales that, if ignored, would bias cosmological results. These spurious signals are non-Gaussian and hard to model in an analytical way. For this reason, in the literature (Pagano et al., 2020, P2020), the estimation of \(\tau\) from these maps is obtained by sampling an empirical \(EE\) cross-spectrum likelihood (Planck Collaboration V, 2020; Gerbino et al., 2020), built from a set of realistic SRoll2 simulations (Delouis et al., 2019). In this work, we approach this problem through NN-based inference applied directly on the map domain. One of the benefits of this method is that it does not require an analytical model of the data but, instead, relies solely on using simulations to train a regression model. In particular, we use the NNhealpix algorithm to build our NN models, allowing the application of convolutional layers on the sphere. We consider several setups to train and validate CNNs on multiple sets of simulations, before applying them to _Planck_ data. We adopt the moments loss function of Jeffrey and Wandelt (2020) to learn the mean and standard deviation of the marginal posterior on \(\tau\) inferred from Stokes \(Q\) and \(U\) maps pixelized on a grid at \(N_{\rm side}=16\) (\(\sim 4^{\circ}\)). To find the best training method, we start from simulations of a single frequency channel of CMB with coadded Gaussian correlated noise and, step by step, move to more complex setups that involve two frequency channels containing CMB, noise and systematic effects. We compare the results obtained with NNs with the ones from a standard Bayesian method that applies the HL likelihood to \(EE\) cross-spectra. Figure 10: Results on \(\tau\) obtained from _Planck_ SRoll2 data. The values in this plot are shown in Table 5. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multicolumn{7}{c}{Predictions on _Planck_ SRoll2 data} \\ \hline & \multicolumn{2}{c}{143+100 GHz} & \multicolumn{2}{c}{143+100 GHz} & \multicolumn{2}{c}{143\(\times\)100 GHz} \\ & \multicolumn{2}{c}{Gaussian training} & \multicolumn{2}{c}{SRoll2 retraining} & \multicolumn{2}{c}{\(C_{\ell}\) likelihood} \\ \hline \(f_{\rm sky}\) & \(\tau_{\rm NN}\) & \(\sigma(\tau_{\rm NN})\) & \(\tau_{\rm NN}\) & \(\sigma(\tau_{\rm NN})\) & \(\tau\) & \(\sigma(\tau)\) \\ \(50\%\) & 0.0588 & 0.0063 & 0.0579 & 0.0082 & 0.0566 & 0.0062 \\ \(60\%\) & 0.0593 & 0.0059 & 0.0583 & 0.0078 & 0.0577 & 0.0054 \\ \hline \hline \end{tabular} \end{table} Table 5: Results from _Planck_ data on two different sky masks, using Gaussian NNs, SRoll2-retrained NN models and the empirical \(C_{\ell}\)-based likelihood presented in Pagano et al. (2020). The NN results are averaged over 100 models, and \(\sigma(\tau_{\rm NN})\) is computed from 10,000 simulations with input \(\tau=0.058\). Figure 9: NN predictions of \(\tau\) from _Planck_ 100+143 GHz data, resulting from training 100 equivalent models with different random initial weights and random seeds for the training data, considering Gaussian two-channel training (blue tones) versus SRoll2 retraining (orange tones), and \(f_{\rm sky}=0.5\) (downward triangles) versus \(f_{\rm sky}=0.6\) (upward triangles).
Colored triangle markers show the best-fit values for the single models, and horizontal lines in the corresponding colors show the ensemble average of \(\tau\) (middle line) \(\pm\) the \(68\%\) confidence interval (upper and lower lines). Our main results and conclusions from the analysis applied to simulations are the following: 1. When trained and applied to Gaussian simulations, the NN models are able to retrieve unbiased values of \(\tau\) directly from maps. Additionally, by using the moments loss function reported in Equation (6), the models can also learn and return an error estimate that is consistent with the spread of the best-fit values on the test set. 2. When trained using maps from two frequency channels that share the same cosmological signal, the NNs are able to effectively combine the information from both maps. This leads to improved accuracy in the \(\tau\) estimates and smaller uncertainties. This ability to combine information from different channels is a key advantage of the NN approach as, in the future, it would allow for a straightforward combination of all available data sets without the need for a joint model, thus reducing the impact of noise and systematics. 3. A comparison of the NN estimates with the ones obtained from the HL cross-spectrum method applied to Gaussian simulations shows that the NN approach leads to uncertainties that are higher by about 20%. This implies that the NN estimator, although unbiased, does not reach minimum variance. In order to further improve the performance of the estimator, future work should focus on optimizing the spherical convolution algorithm, the model architecture, and the training procedure. This will help to minimize the uncertainties and reach the best possible performance. 4. The application of the Gaussian two-channel model to the SRoll2 simulations, which include systematic effects, leads to inaccurate estimates of \(\tau\), as does the use of the HL likelihood. Although expected, this observed bias is much smaller (nearly unbiased for \(\tau\sim 0.06\)) than that seen for the single-channel model, demonstrating that the neural network is able to identify common features in the maps, efficiently ignoring the uncorrelated signal between different channels. 5. To recover fully unbiased results on SRoll2 maps, as a prerequisite to applying our NN model to _Planck_ data, we need to train NNs on maps that incorporate instrumental systematic effects. Due to the limited number of available SRoll2 simulations, we adopt a minimal retraining approach, building on the good results already obtained with the Gaussian models. This approach helps to minimize overfitting issues, but it also leads to slightly larger errors in the recovered \(\tau\) values. 6. In more complex scenarios, when the NN models are applied to the SRoll2 maps, we find that the error estimate learned by the NN, \(\sigma_{\rm NN}(\tau)\), underestimates the spread evaluated on the empirical distribution of the test maps, \(\sigma(\tau_{\rm NN})\). This suggests that the NN model is not capturing the full range of uncertainty in the data. To overcome this issue, we proceed by evaluating the final error on \(\tau\) through simulations, by taking the ensemble average of 100 NN models. This helps to reduce the impact of the epistemic uncertainty caused by the intrinsic stochasticity of the NN estimator.
After evaluating and validating the performance of the NNs on simulations, we apply our trained models to _Planck_ SRoll2 maps at 100 and 143 GHz. For the minimally retrained model, which is the one that leads to fully unbiased results on the SRoll2 simulations, we obtain \(\tau_{\rm NN}=0.0579\pm 0.0082\) on our fiducial \(f_{\rm sky}=0.5\) mask. This value is in very good agreement with the one obtained from the empirical likelihood based on cross power spectra reported in P2020, which relies on the same set of simulations. We consider this a remarkable result of our work, given the fact that the two estimators are intrinsically different. However, we note that our final uncertainty on the \(\tau_{\rm NN}\) estimate, evaluated through simulations and involving the ensemble average of 100 NN models, is about 30% larger than the one obtained in P2020. This is because our NN estimator does not reach minimum variance and, moreover, we could rely only on a limited number of SRoll2 simulations to inform the NN about systematic effects. The minimal retraining approach allows us to achieve unbiased results, but at the cost of an increased variance. Given its good performance on SRoll2 simulations for \(\tau\sim 0.06\), we also apply the Gaussian model to the _Planck_ data. In this second case we obtain \(\tau_{\rm NN}=0.0588\pm 0.0063\), still in agreement with the estimate reported in the literature, and with a similar level of uncertainty. As a robustness test of the NN approach, we also consider a second mask that retains a larger sky fraction of \(f_{\rm sky}=0.6\), finding consistent results. The summary of our results is reported in Table 5 and Figure 10, showing full stability of the retrieved \(\tau_{\rm NN}\) estimates. In conclusion, what we have presented in this work is a first thorough application of NN-based inference to real CMB maps. It is important to stress that obtaining reliable results on real data has required a significant effort to validate and test our models on different setups and to develop training strategies that can effectively cope with systematic effects. This highlights the fact that NN models developed to perform well on simplified simulations cannot always be straightforwardly applied to real data, and require careful consideration of training and validation. Nonetheless, the consistent and robust results we have obtained demonstrate that NNs represent a promising tool that could complement standard statistical data analysis techniques for CMB observations, especially in cases where the Gaussian CMB signal is contaminated by spurious effects that cannot be analytically described in a likelihood model. This is particularly relevant for the ongoing search for primordial gravitational waves, constrained by large-scale \(B\)-modes which are targeted by a number of near-future experiments like the _Simons Observatory_ (Simons Observatory Collaboration, 2019), _LiteBIRD_ (LiteBIRD Collaboration, 2022) and _CMB-S4_ (Abazajian et al., 2019). However, additional optimization and validation of this approach will be required before tackling this challenge. ###### Acknowledgements. The authors acknowledge financial support from the COSMOS network (www.cosmosnet.it) through the ASI (Italian Space Agency) Grants 2016-24-H.0 and 2016-24-H.1-2018, as well as 2020-9-HH.0 (participation in LiteBIRD phase A). LP acknowledges financial support and computing resources at CINECA provided by the INFN InDark initiative.
We acknowledge the use of the CAMB (Lewis et al., 2000), healpy (Zonca et al., 2019), NNhealpix (Krachmalnicoff & Tomasi, 2019), numpy (Harris et al., 2020), matplotlib (Hunter, 2007), and keras (Chollet et al., 2015) software packages. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.
2307.03929
Fairness-Aware Graph Neural Networks: A Survey
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance on many fundamental learning tasks. Despite this success, GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism that lies at the heart of the large class of GNN models. In this article, we examine and categorize fairness techniques for improving the fairness of GNNs. Previous work on fair GNN models and techniques is discussed in terms of whether they focus on improving fairness during a preprocessing step, during training, or in a post-processing phase. Furthermore, we discuss how such techniques can be used together whenever appropriate, and highlight the advantages and intuition as well. We also introduce an intuitive taxonomy for fairness evaluation metrics including graph-level fairness, neighborhood-level fairness, embedding-level fairness, and prediction-level fairness metrics. In addition, graph datasets that are useful for benchmarking the fairness of GNN models are summarized succinctly. Finally, we highlight key open problems and challenges that remain to be addressed.
April Chen, Ryan A. Rossi, Namyong Park, Puja Trivedi, Yu Wang, Tong Yu, Sungchul Kim, Franck Dernoncourt, Nesreen K. Ahmed
2023-07-08T08:09:06Z
http://arxiv.org/abs/2307.03929v1
# Fairness-Aware Graph Neural Networks: A Survey ###### Abstract. Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance on many fundamental learning tasks. Despite this success, GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism that lies at the heart of the large class of GNN models. In this article, we examine and categorize fairness techniques for improving the fairness of GNNs. Previous work on fair GNN models and techniques is discussed in terms of whether they focus on improving fairness during a preprocessing step, during training, or in a post-processing phase. Furthermore, we discuss how such techniques can be used together whenever appropriate, and highlight the advantages and intuition as well. We also introduce an intuitive taxonomy for fairness evaluation metrics including graph-level fairness, neighborhood-level fairness, embedding-level fairness, and prediction-level fairness metrics. In addition, graph datasets that are useful for benchmarking the fairness of GNN models are summarized succinctly. Finally, we highlight key open problems and challenges that remain to be addressed. Keywords: GNNs, Graph Neural Networks neighborhoods used to iteratively train these models (Chen et al., 2022; Hussain et al., 2022; Zhang et al., 2023). Furthermore, it is often very difficult to design GNNs that mitigate the unfairness and bias issues arising from the graph structure, the input features, and, most importantly, the fundamental GNN assumptions and design; this makes the problem far more challenging and complex than traditional bias and unfairness mitigation for i.i.d. data (Salganik et al., 2022; Song et al., 2022; Xu et al., 2023). In fact, GNNs are largely designed to leverage such bias and unfairness in the data to achieve superior accuracy at the expense of fairness (Dong et al., 2023; Kose and Shen, 2022; Kose and Shen, 2022; Ba et al., 2023; Singer and Radinsky, 2022; Wang et al., 2023). As an example, a GNN-based recommender may suggest fewer job opportunities to individuals of a specific gender or ethnic group. This is due to the fact that most graph data is highly skewed towards one or more groups and often even shows a rough power-law relationship, as observed in the literature across a variety of domains in the last decade (Newman, 2003; Watts and Strogatz, 1998). Therefore, fairness in such models is both practically and theoretically important for developing better GNN models that are significantly more fair while also accurate (Liu et al., 2022) for downstream prediction tasks such as node classification (Agarwal et al., 2021; Loveland et al., 2022; Ma et al., 2021; Zhu et al., 2023), link prediction (Buyl and De Bie, 2020; Li et al., 2020; Patro et al., 2020; Rahman et al., 2019; Spinelli et al., 2021), and link classification (Chen et al., 2022).
In this work, we discuss the fairness issues that arise in GNNs and survey techniques for improving the fairness of GNNs. We highlight three fundamental facets that can lead to bias when training GNN models. First, the underlying graph structure \(G\) used for training is often biased, _e.g._, when considering an attribute of a node (representing an individual) such as political views, we often observe significant homophily among the neighbors of nodes. In fact, in such data, there are often very tightly-knit communities of individuals that all retweet or follow each other. Second, the features given as input to GNNs can also be biased and unfair in a variety of ways. Such features, when used independently, may essentially have all the unfairness issues of traditional i.i.d. data. Third, the underlying mechanism used for aggregation and training of GNNs is inherently biased, and this is a much more difficult issue to resolve compared to traditional fairness on i.i.d. data. Overall, fairness issues in GNNs arise due to various factors such as biased training data, including both the input features and the graph structure, as well as the training and aggregation mechanisms that lie at the heart of GNNs. Addressing these issues requires careful consideration of the data, model, and evaluation metrics to ensure fair and unbiased predictions. ### Summary of main contributions The key contributions of this work are as follows: 1. A comprehensive survey of existing work on bias and unfairness mitigation techniques for GNNs. We also survey graph fairness metrics and summarize existing graph datasets used in the literature by the domain the graph originates from (_e.g._, social networks), along with the task it can be used for and the dataset statistics and characteristics useful for various graph settings. 2. We introduce a few intuitive taxonomies for bias mitigation in GNNs and survey existing methods using these taxonomies. The taxonomy categorizes techniques based on whether the approach mitigates unfairness at the pre-processing stage, training stage, or at the post-processing stage by debiasing the learned embeddings directly. Methods are also categorized by the type of input graph data supported, such as whether the graph is homogeneous, bipartite, heterogeneous, or temporal, as well as by the underlying graph learning task for which the method was designed. 3. We identify key open problems and challenges that are important for future work to address in this rapidly emerging but critically important field. ### Scope of this article In this article, we focus on examining and categorizing various fairness techniques for graph neural networks. We do not attempt to survey the abundance of work on fairness in graph mining (Dong et al., 2022; Zhang et al., 2022) and graph machine learning in general (Choudhary et al., 2022). In contrast, we focus solely on fair GNN models as opposed to general graph fairness. In some cases, techniques we survey may have been used in a different context. Regardless of the context, we examine the general applicability and benefits of these techniques when used for improving fairness in GNN models. ## 2. Problem formulation Consider a graph \(G=(V,E)\) consisting of a set of nodes \(V\) along with a set of edges \(E\subseteq V\times V\) that encode dependencies between pairs of nodes in \(V\). Furthermore, every node \(v\in V\) typically has a \(k\)-dimensional feature vector \(\mathbf{x}_{v}\) associated with it.
This can be represented compactly as a node feature matrix \(\mathbf{X}=[\,\mathbf{x}_{1}\,\,\mathbf{x}_{2}\,\cdots\,\mathbf{x}_{|V|}\,]^{\top}\in\mathbb{R}^{|V|\times k}\). We also have one or more sensitive attributes \(\mathbf{s}=\left[\,s_{1}\,\,s_{2}\,\cdots\,s_{i}\,\,\cdots\,s_{|V|}\,\right]\) where \(s_{i}\) is the sensitive attribute value of node \(i\). The graph is encoded as a sparse adjacency matrix \(\mathbf{A}\) where \(A_{vu}=1\) if \((v,u)\in E\) and \(A_{vu}=0\) otherwise. GNN functions operate over the local neighborhoods of the nodes in the graph; the neighborhood \(N_{v}\) of node \(v\) is defined as \(N_{v}=\{u\in V\,|\,(u,v)\in E\}\). Hence, \(N_{v}\) is the set of nodes adjacent to \(v\). From \(N_{v}\), we define the multiset of features from the neighborhood of \(v\) as \(\mathbf{X}_{N_{v}}=\{\mathbf{x}_{u}\,|\,u\in N_{v}\}\). A key challenge of ensuring fairness in this setting with respect to the sensitive attribute \(\mathbf{s}\) is that this sensitive information is often encoded in the graph's adjacency matrix \(\mathbf{A}\) and even the feature matrix \(\mathbf{X}\). Both \(\mathbf{A}\) and \(\mathbf{X}\) are fundamental to the training of GNNs. In terms of the graph structure \(\mathbf{A}\), this occurs when the sensitive attribute values of the neighborhood \(N_{v}\) of a node \(v\) are overwhelmingly the same. This implies the presence of homophily in \(G\), where nodes sharing the same sensitive attribute value are more likely to be connected. Conversely, the feature matrix \(\mathbf{X}\) may also be highly correlated with the sensitive attribute \(\mathbf{s}\), especially when diffused over the graph structure \(\mathbf{A}\): \[\mathbf{H}=\text{GNN}(\mathbf{X},\mathbf{A}) \tag{1}\] where GNN can be any GNN layer. More formally, let \(\phi\) denote a local diffusion (propagation) function that operates over the neighborhood of a node. Then for node \(v\in V\), we have \(\mathbf{h}_{v}=\phi(\mathbf{x}_{v},\mathbf{X}_{N_{v}})\). The majority of GNNs can be categorized as convolutional (Eq. 2), attentional (Eq. 3), or message-passing (Eq. 4) (Bronstein et al., 2021). In particular, \[\mathbf{h}_{v}=\phi\Bigg{(}\mathbf{x}_{v},\bigoplus_{u\in N_{v}}c_{uv}\,\psi(\mathbf{x}_{u})\Bigg{)}\qquad\text{(Convolutional)} \tag{2}\] \[\mathbf{h}_{v}=\phi\Bigg{(}\mathbf{x}_{v},\bigoplus_{u\in N_{v}}a(\mathbf{x}_{u},\mathbf{x}_{v})\,\psi(\mathbf{x}_{u})\Bigg{)}\qquad\text{(Attentional)} \tag{3}\] \[\mathbf{h}_{v}=\phi\Bigg{(}\mathbf{x}_{v},\bigoplus_{u\in N_{v}}\psi(\mathbf{x}_{u},\mathbf{x}_{v})\Bigg{)}\qquad\text{(Message-passing)} \tag{4}\] where \(c_{uv}\) are fixed weights, \(a\) is a learnable attention function, \(\psi\) and \(\phi\) are neural networks (_e.g._, with ReLU activations), and \(\bigoplus\) is any aggregator such as \(\sum\), mean, max, among others (Rossi et al., 2018).
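To make the distinction between the three flavors concrete, here is a minimal dense-matrix sketch (ours; real systems use sparse libraries such as PyTorch Geometric or DGL, and `psi`, `phi`, `attn`, `psi_pair` stand in for learnable networks):

```python
# Minimal dense-matrix sketch of the three GNN flavors in Eqs. (2)-(4).
# A: (n, n) adjacency matrix; X: (n, k) node features.
import numpy as np

def convolutional(A, X, psi, phi):
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    C = A / deg                           # fixed weights c_uv, here 1/deg(v)
    return phi(X, C @ psi(X))             # aggregate neighbors with fixed c_uv

def attentional(A, X, psi, phi, attn):
    scores = np.where(A > 0, attn(X), -np.inf)   # attn(X): (n, n) pairwise scores
    w = np.exp(scores - scores.max())            # global shift for numerical stability
    w = np.where(A > 0, w, 0.0)
    w = w / w.sum(axis=1, keepdims=True).clip(min=1e-12)  # softmax over neighbors
    return phi(X, w @ psi(X))             # learned weights a(x_u, x_v)

def message_passing(A, X, psi_pair, phi):
    # psi_pair(X) computes a message for every ordered node pair: shape (n, n, d)
    msgs = psi_pair(X)
    return phi(X, (A[..., None] * msgs).sum(axis=1))      # sum aggregator over N_v
```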
The fair GNN learning problem is to learn a low-dimensional fair embedding matrix \(\mathbf{H}\in\mathbb{R}^{n\times d}\) of the nodes such that \(d\ll n\). Most importantly, the embeddings must encode different properties of the graph structure along with the input features. Typically, it is assumed that two nodes connected in the graph have similar embeddings. Furthermore, the embeddings \(\mathbf{H}\) must be independent of the sensitive attributes \(\mathbf{s}\), such that no information about \(\mathbf{s}\) can be recovered from the learned embeddings \(\mathbf{H}\). This problem is often very challenging since the graph structure \(\mathbf{A}\), and more specifically, the neighborhoods \(\{N_{1},N_{2},\ldots,N_{|V|}\}\) of nodes in the graph \(G\) (and/or input features \(\mathbf{X}\)) are often strongly correlated with the sensitive attribute \(\mathbf{s}\). Therefore, the goal is often to balance the trade-off between fairness and accuracy. ### Taxonomy of GNN Fairness Techniques Techniques for developing GNNs that are fair and unbiased with respect to the properties above can be categorized as pre-processing, in-training, post-processing, and hybrid. 1. **Pre-processing (Sec. 4.1)**: Using a pre-processing technique to remove bias or unfairness present in the graph structure \(\mathbf{A}\) or input features \(\mathbf{X}\) before using GNNs. 2. **In-training (Sec. 4.2)**: Modifying the objective function of GNNs to learn fair and unbiased embeddings during training, for instance by adding fairness constraints or regularization terms to the objective, or by adding fairness-focused attention weights to GATs. 3. **Post-processing (Sec. 4.3)**: Using a post-processing technique to remove bias from the resulting embeddings of a GNN model. This can be as simple as adjusting the embeddings to be independent of the sensitive attribute. 4. **Hybrid (Sec. 4.4)**: A hybrid technique combines two or more of the previous techniques to ensure a better and more robust degree of fairness with respect to the sensitive attribute(s). An example of this might be to use a pre-processing technique such as rewiring the graph to ensure exact neighborhood fairness (Chen et al., 2022) and then using an in-training technique that adds a fairness constraint to the objective of a GNN model to further ensure fairness of the learned embeddings. Fairness techniques for GNNs are summarized and categorized according to our proposed taxonomy in Table 1. Notably, we propose a simple and intuitive taxonomy that categorizes fairness techniques for GNNs based on the (i) type of input graph supported, such as homogeneous, bipartite, or heterogeneous, (ii) type of unfairness mitigation technique, based on whether bias/unfairness mitigation is performed as a pre-processing routine, during training, or as a post-processing technique after learning the embeddings, and (iii) graph learning task, such as node classification, link prediction, or link classification. ### Graph Tasks There are three fundamental tasks that all practical applications of GNNs leverage, namely node-based, edge-based, and graph-based tasks. These three general graph machine learning tasks can be formulated as: \[\mathbf{z}_{i}=f(\mathbf{h}_{i}) \text{(node-based task)} \tag{5}\] \[\mathbf{z}_{ij}=f(\mathbf{h}_{i},\mathbf{h}_{j}) \text{(edge-based task)} \tag{6}\] \[\mathbf{z}_{G}=f\big{(}\oplus_{i\in V}\mathbf{h}_{i}\big{)} \text{(graph-based task)} \tag{7}\] where \(\mathbf{z}_{i}\), \(\mathbf{z}_{ij}\), and \(\mathbf{z}_{G}\) are the final embeddings of node \(i\in V\), potential edge \((i,j)\), and graph \(G\), respectively. An example of a node-based application is node classification, whereas examples of edge-based applications include link prediction and link classification (Rossi et al., 2012). For graph-based tasks, the most common application is graph classification.
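A minimal sketch of these three readouts, with `f` standing in for any trained prediction head and `H` the matrix of node embeddings (our illustrative names), is:

```python
# Sketch of the three task readouts in Eqs. (5)-(7) given node embeddings H.
import numpy as np

def node_readout(H, f):            # Eq. (5): per-node predictions
    return f(H)

def edge_readout(H, f, i, j):      # Eq. (6): score for a potential edge (i, j)
    return f(np.concatenate([H[i], H[j]]))

def graph_readout(H, f):           # Eq. (7): pool node embeddings, then predict
    return f(H.sum(axis=0))        # sum pooling; mean/max are also common
```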
Intuitively, graph-level fairness metrics consider the bias that arises from the graph structure \(G\) for a specific sensitive attribute \(\mathbf{s}\). These metrics are largely based on the notion of homophily that is assumed by the vast majority of graph models. Homophily is the notion that nodes that are neighbors (adjacent) are more likely to share the same attribute value. Note that these fairness evaluation metrics are independent of the trained model and its predictions. One such simple metric for measuring the homophily in a graph is as follows: Definition 1 (Homophily Ratio \(h\)).: _Given a graph \(G=(V,E)\) and a sensitive attribute \(\mathbf{s}\) with \(|S|\) unique values, let \(\mathbf{C}\in\mathbb{R}^{|S|\times|S|}\) be defined as_ \[C_{ij}=|\{(u,v):(u,v)\in E\wedge s_{u}=i\wedge s_{v}=j\}| \tag{8}\] _Intuitively, \(C_{ij}\) is the frequency with which two nodes connected by an edge in \(G\) have attribute values \(i\in S\) and \(j\in S\). Then, the homophily ratio \(h\) of \(G\) is:_ \[h(G)=\frac{\sum_{i}C_{ii}}{\sum_{i}\sum_{j}C_{ij}}=\frac{\sum_{i}C_{ii}}{|E|} \tag{9}\] _where \(h\in[0,1]\). At the extremes, \(h=1\) implies that all edges in \(G\) connect nodes that have the same sensitive attribute value, and the graph is therefore highly biased, whereas \(h=0\) implies the opposite: every edge in \(G\) connects nodes with different sensitive attribute values._ There is also another commonly used metric based on the notion of assortativity: Definition 2 (Assortativity Coefficient \(r\)).: _Given a graph \(G=(V,E)\) and a sensitive attribute \(\mathbf{s}\) with \(|S|\) unique values, let \(\mathbf{F}\in\mathbb{R}^{|S|\times|S|}\) be defined as_ \[F_{ij}=\frac{|\{(u,v):(u,v)\in E\wedge s_{u}=i\wedge s_{v}=j\}|}{|E|} \tag{10}\] _where \(F_{ij}\) is the fraction of edges in \(G\) that connect two nodes with attribute values \(i\in S\) and \(j\in S\). Notice that \(\sum_{i,j}F_{ij}=1\). Let \(a_{i}=\sum_{j}F_{ij}\) and \(b_{j}=\sum_{i}F_{ij}\), then the assortativity coefficient \(r\) of \(G\) is:_ \[r(G)=\frac{\sum_{i}F_{ii}-\sum_{i}a_{i}b_{i}}{1-\sum_{i}a_{i}b_{i}} \tag{11}\] _where \(r(G)\in[-1,1]\). Intuitively, \(r(G)=1\) implies that all edges in \(G\) are between nodes with the same sensitive attribute value, \(r(G)=0\) implies no assortative mixing (edges are placed as if independently of the sensitive attribute), and negative values indicate disassortative mixing where edges tend to connect nodes with different sensitive attribute values._ These graph-level metrics are important for understanding fairness with respect to only the graph structure and sensitive attributes. More importantly, suppose a pre-processing fairness approach is used over the initial graph \(G\) to make it more fair, resulting in a modified graph \(G^{\prime}\). The graph-level fairness evaluation metrics can be used over this new modified graph \(G^{\prime}\) to evaluate whether it is more fair compared to the original graph or even another graph derived from another approach. These evaluation metrics can also be used internally during the training process.
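Both graph-level metrics can be computed directly from the edge list. A small illustrative NumPy implementation of Definitions 1 and 2 (ours, not from the survey), where `edges` is a list of node-index pairs and `s` the per-node sensitive attribute vector:

```python
import numpy as np

def mixing_counts(edges, s, num_values):
    """C[i, j] = number of edges whose endpoints have attribute values i, j (Eq. 8)."""
    C = np.zeros((num_values, num_values))
    for u, v in edges:
        C[s[u], s[v]] += 1
    return C

def homophily_ratio(edges, s, num_values):
    """h(G): fraction of edges joining nodes with the same attribute value (Eq. 9)."""
    C = mixing_counts(edges, s, num_values)
    return np.trace(C) / C.sum()

def assortativity(edges, s, num_values):
    """Assortativity coefficient r(G) over the sensitive attribute (Eqs. 10-11)."""
    F = mixing_counts(edges, s, num_values)
    F /= F.sum()
    a, b = F.sum(axis=1), F.sum(axis=0)
    return (np.trace(F) - a @ b) / (1.0 - a @ b)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
s = np.array([0, 0, 1, 1])
print(homophily_ratio(edges, s, 2), assortativity(edges, s, 2))  # 0.4, ~-0.15
```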
### Neighborhood-level Fairness Metrics We now formally present a neighborhood fairness metric that can be leveraged prior to training a graph neural network model to determine the overall localized fairness in the graph with respect to one or more sensitive attributes. This metric indicates the impact of the neighborhoods on the fairness of the learned embeddings. In other words, it reveals the overall local fairness when a GNN-based approach is used, since these methods all leverage neighborhoods for learning the embeddings of the nodes in the graph. Therefore, this metric can reveal the overall fairness a priori to training a large-scale GNN model and, based on this, one can leverage this approach or future state-of-the-art methods to mitigate the fairness issues revealed by the neighborhood fairness metric. More formally, the entropy-based neighborhood fairness metric is defined as follows: Definition 3 (Local Node Neighborhood Fairness).: _Let \(\mathbf{c}_{i}\) be the vector of the frequency of the sensitive attribute values of the neighbors \(N_{i}\) of node \(i\) such that \(c_{ik}=|N_{i}^{k}|\) where \(N_{i}^{k}\) is the subset of \(N_{i}\) with sensitive attribute value \(k\). Then the neighborhood fairness metric quantifying the localized fairness of a neighborhood of a node \(i\) is:_ \[\mathbb{F}(\mathbf{p}_{i})=-\sum_{k}\ p_{ik}\log p_{ik} \tag{12}\] _where \(\mathbf{p}_{i}=\frac{\mathbf{c}_{i}}{\sum_{k}c_{ik}}\) is the probability distribution vector (\(\sum_{k}p_{ik}=1\)) of node \(i\), and the entropy is normalized (e.g., computed with a base-\(|S|\) logarithm) so that \(\mathbb{F}(\mathbf{p}_{i})\in[0,1]\). Intuitively, when \(\mathbb{F}(\mathbf{p}_{i})=1\), the neighborhood of \(i\) is said to be completely fair, as no information is revealed from the neighborhood of \(i\) about the sensitive attribute value of \(i\). In other words, when \(\mathbb{F}(\mathbf{p}_{i})=1\), the neighborhood \(N_{i}\) leaks no information about the sensitive attribute of \(i\) (maximum fairness). Conversely, when \(\mathbb{F}(\mathbf{p}_{i})=0\), knowing \(\mathbf{p}_{i}\) reveals significant information about the sensitive attribute (least uncertainty). Hence, \(\mathbb{F}(\mathbf{p}_{i})=0\) indicates a neighborhood with minimum fairness (maximum unfairness) whereas \(\mathbb{F}(\mathbf{p}_{i})=1\) indicates a neighborhood with maximum fairness._ Notice that maximum neighborhood fairness is achieved when \(\mathbb{F}(\mathbf{p}_{i})=1\), that is, \(\mathbf{p}_{i}\) is the uniform probability distribution, revealing no information about the sensitive attribute value \(s_{i}\) of node \(i\). Conversely, maximum neighborhood unfairness is achieved when \(\mathbb{F}(\mathbf{p}_{i})=0\), indicating that the sensitive attribute value of \(i\) is deterministic, that is, able to be predicted with no uncertainty. Using Definition 3, the overall neighborhood fairness of a graph \(G\) is defined as follows: Definition 4 (Neighborhood Fairness).: _The neighborhood fairness \(\mathbb{F}(G)\) of a graph \(G\) is_ \[\mathbb{F}(G)=\frac{1}{|V|}\sum_{i\in V}\mathbb{F}(\mathbf{p}_{i}) \tag{13}\] _where \(\mathbb{F}(G)\) is an intuitive metric characterizing the inherent fairness of \(G\) over all the local neighborhoods, thus capturing the local fairness of the graph \(G\) with respect to the sensitive attribute \(\mathbf{s}=[\,s_{1}\,s_{2}\,\cdots\,s_{i}\,\cdots\,s_{n}\,]\)._ Since neighborhoods lie at the heart of all GNN-based methods (Wu et al., 2020), the fairness of the trained GNN models and the resulting node embeddings are fundamentally tied to the neighborhoods used to train these models.
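As an illustration (ours, not from the survey), the following NumPy sketch computes the normalized entropy of Eq. 12 per node and averages it over the graph as in Eq. 13; isolated nodes are simply skipped here. `neighbors` maps each node to an array of neighbor indices and `s` holds the sensitive attribute values:

```python
import numpy as np

def node_neighborhood_fairness(nbr_values, num_values):
    """Normalized entropy of the sensitive-attribute distribution in one
    neighborhood (Eq. 12): 1 = maximally fair, 0 = maximally unfair."""
    counts = np.bincount(nbr_values, minlength=num_values).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]                                  # define 0 * log 0 := 0
    return float(-(p * np.log(p)).sum() / np.log(num_values))

def graph_neighborhood_fairness(neighbors, s, num_values):
    """F(G): average local neighborhood fairness over all nodes (Eq. 13)."""
    scores = [node_neighborhood_fairness(s[nbrs], num_values)
              for nbrs in neighbors if len(nbrs) > 0]
    return float(np.mean(scores))

s = np.array([0, 0, 1, 1, 0])
neighbors = [np.array([1, 2]), np.array([0, 2, 3]),
             np.array([0, 1, 4]), np.array([1]), np.array([2])]
print(graph_neighborhood_fairness(neighbors, s, num_values=2))
```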
### Embedding-level Fairness Metrics To measure the fairness of the learned embeddings, one can leverage the notion of representation bias (RB). This metric enables one to understand if the node embeddings given as output by some arbitrary approach can be leveraged by an adversary to recover the sensitive attribute values of the nodes in the graph. More formally, for classifier \(c\), let \(P_{c}(s\,|\,\mathbf{z}_{i})\) denote the estimated probability that node \(i\) with embedding \(\mathbf{z}_{i}\) has sensitive attribute value \(s\in S\). Then the representation bias (RB) score (Buyl and De Bie, 2020) is: \[\text{RB}=\sum_{s\in S}\frac{|V_{s}|}{|V|}\,\text{AUC}\big(\{P_{c}(A(j)\,|\,\mathbf{z}_{j})\,|\,j\in V_{s}\}\big) \tag{14}\] where \(V_{s}=\{j\in V\,|\,A(j)=s\}\) and \(A(j)\) is the sensitive attribute value for node \(j\). Eq. 14 uses a weighted one-vs-rest AUC score to measure prediction performance. Intuitively, if a model learns fair embeddings, then a classifier trained using the node embeddings should perform poorly (close to random if truly independent). However, if we are able to predict the sensitive attribute of a node with high accuracy using only the learned embeddings, then they are obviously not independent. ### Prediction-level Fairness Metrics #### 3.4.1. Statistical Parity (SP) The statistical parity (SP) metric (also called demographic parity, or DP) measures the difference between the group-level selection rates of the largest and the smallest groups. More formally, given the prediction \(\hat{Y}\) along with the sensitive attribute value \(s\), \(\Delta\texttt{SP}\) is: \[\Delta\texttt{SP}=\big{|}\mathbb{P}(\hat{Y}=1\,|\,s=1)\;-\;\mathbb{P}(\hat{Y}=1\,|\,s=0)\big{|} \tag{15}\] where \(\Delta\texttt{SP}=0\) implies all groups have the same selection rates, and thus, complete fairness. Statistical parity measures the preferential treatment gap between the groups. However, \(\Delta\texttt{SP}\) does not consider whether the individual is qualified or not, since it does not consider the ground-truth \(Y\). Note that Eq. 15 is defined for sensitive attributes with only two groups, though it is easy to generalize to \(k\) groups by considering the difference between the largest and smallest group-level selection rates across all values of the sensitive attribute: \[\Delta\texttt{SP}=\Big{|}\max_{s}\,\mathbb{E}\big{[}\hat{Y}\,|\,S=s\big{]}\;-\;\min_{s}\,\mathbb{E}\big{[}\hat{Y}\,|\,S=s\big{]}\Big{|} \tag{16}\] The above formulation also generalizes to the link prediction task. More formally, statistical parity for a link prediction function \(h:V\times V\to\{0,1\}\) is: \[\Delta\texttt{SP}=\big{|}\mathbb{P}\big{(}h(v,u)=1\,|\,s_{v}\neq s_{u}\big{)}\;-\;\mathbb{P}\big{(}h(v,u)=1\,|\,s_{v}=s_{u}\big{)}\big{|} \tag{17}\] where \(s_{v}\oplus s_{u}=1\) (or \(s_{v}\neq s_{u}\)) implies that \(v\) and \(u\) belong to different groups whereas \(s_{v}\oplus s_{u}=0\) (or \(s_{v}=s_{u}\)) implies they belong to the same group. Hence, in the case of link prediction, we only consider whether the sensitive attribute values are the same (\(s_{v}=s_{u}\)) or not (\(s_{v}\neq s_{u}\)), since an edge either exists or not, \(h(v,u)\in\{0,1\}\). #### 3.4.2. Equal Opportunity (EO) The equal opportunity metric requires non-discrimination only within the "advantaged" outcome group. More formally, given the ground-truth \(Y\), the prediction \(\hat{Y}\), along with the sensitive attribute value \(s\), \(\Delta\texttt{EO}\) is: \[\Delta\texttt{EO}=\big{|}\mathbb{P}(\hat{Y}=1\,|\,Y=1,\,s=1)\;-\;\mathbb{P}(\hat{Y}=1\,|\,Y=1,\,s=0)\big{|} \tag{18}\] where lower values of \(\Delta\texttt{EO}\) imply better fairness.
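These prediction-level metrics reduce to a few lines of code. A small NumPy sketch (ours, illustrative) of \(\Delta\texttt{SP}\) (Eq. 15) and \(\Delta\texttt{EO}\) (Eq. 18) for binary predictions and a binary sensitive attribute:

```python
import numpy as np

def statistical_parity_diff(y_hat, s):
    """Delta-SP (Eq. 15): |P(y_hat=1 | s=1) - P(y_hat=1 | s=0)|."""
    return abs(y_hat[s == 1].mean() - y_hat[s == 0].mean())

def equal_opportunity_diff(y_hat, y, s):
    """Delta-EO (Eq. 18): true-positive-rate gap between the two groups."""
    pos = y == 1
    return abs(y_hat[pos & (s == 1)].mean() - y_hat[pos & (s == 0)].mean())

y_hat = np.array([1, 0, 1, 1, 0, 1])
y     = np.array([1, 0, 1, 0, 1, 1])
s     = np.array([1, 1, 1, 0, 0, 0])
print(statistical_parity_diff(y_hat, s))    # |2/3 - 2/3| = 0.0
print(equal_opportunity_diff(y_hat, y, s))  # |1.0 - 0.5| = 0.5
```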
The equal opportunity metric in Eq. 18 is a relaxation of the equalized odds metric (Hardt et al., 2016), which measures the difference in true positive, true negative, false positive, and false negative rates between the groups. More formally, given the ground-truth \(Y\), the prediction \(\hat{Y}\), along with the sensitive attribute value \(s\), the equalized odds difference \(\Delta\texttt{EOdds}\) is: \[\Delta\texttt{EOdds}=\big{|}\mathbb{P}(\hat{Y}=1\,|\,Y=y,\,s=1)\;-\;\mathbb{P}(\hat{Y}=1\,|\,Y=y,\,s=0)\big{|}\,,\quad y\in\{0,1\} \tag{19}\] where \(\Delta\texttt{EOdds}=0\) implies all groups have the same true positive, true negative, false positive, and false negative rates, and are therefore fair. ## 4. GNN Fairness Techniques For graph fairness, techniques generally take one of three entry points for their mitigation: modifying the graph before training with pre-processing (Sec. 4.1), modifying the training process (Sec. 4.2), or modifying the outputs with post-processing (Sec. 4.3). A hybrid approach combines two or more of these mitigation stages (Sec. 4.4). We summarize and categorize GNN fairness techniques in Table 1 using the proposed taxonomy, which categorizes fairness techniques for GNNs based on (i) the type of input graph supported (_e.g._, homogeneous, bipartite, heterogeneous), (ii) the type of unfairness mitigation technique, based on whether bias mitigation is performed during pre-processing, training, or post-processing, and (iii) the graph learning task, such as node classification, link prediction, or link classification. ### Pre-Processing Pre-processing techniques remove bias or unfairness before GNN training occurs, by targeting the input graph structure \(\mathbf{A}\), input features \(\mathbf{X}\), or both. For instance, work by Spinelli et al. (2021) proposed a pre-processing approach that randomly removes edges from the graph before training to debias the resulting GNN model. More recently, Chen et al. (2022) developed a GNN fairness framework based on the proposed notion of neighborhood fairness. The framework consists of two main components. The first component constructs unbiased and fair neighborhoods by adding and removing edges to ensure each neighborhood is unbiased with respect to the sensitive attribute while preserving structures important for prediction tasks such as link prediction and classification. The second component provides additional flexibility by enabling the fair neighborhoods to be modified via a function to capture certain application- or data-dependent constraints. These fair neighborhoods are then leveraged by any arbitrary GNN model to learn fair embeddings for downstream graph learning tasks. Figure 1. Exact neighborhood fairness is NP-hard for GNNs even at the graph level: greedy fairness optimization via neighborhood edge augmentation. In each step, a vertex \(i\in V\) is selected and its neighborhood is modified to make it fair, _e.g._, by adding two edges as shown in (a). As this process continues greedily for every node, there is no guarantee that previously repaired nodes remain fair unless this constraint is explicitly incorporated, which makes the problem very complex in general; even with the additional constraint of revisiting nodes or carefully changing the graph so that previously visited nodes remain fair, there is no guarantee that all such nodes can actually be made fair. This intuition is also useful for understanding the in-training methods discussed in Sec. 4.2 that sample or augment neighborhoods to reduce bias for each node during the neighborhood aggregation process when training the GNN.
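To make the greedy process of Figure 1 concrete, the sketch below (our illustration of the general idea, not the algorithm of any specific paper) adds edges to each node's neighborhood until its sensitive-attribute counts are balanced; as the caption notes, later repairs can disturb earlier ones, so no global guarantee follows.

```python
import numpy as np

def greedy_fair_augment(neighbors, s, num_values, rng):
    """Greedily add edges so that each neighborhood's sensitive-attribute
    counts become (approximately) balanced. Illustrative only: repairing
    one neighborhood can unbalance another (cf. Figure 1)."""
    by_value = {v: np.flatnonzero(s == v) for v in range(num_values)}
    for i in range(len(neighbors)):
        counts = np.bincount(s[list(neighbors[i])], minlength=num_values)
        target = counts.max()
        for v in range(num_values):
            need = int(target - counts[v])
            cand = [u for u in by_value[v] if u != i and u not in neighbors[i]]
            if need <= 0 or not cand:
                continue
            for u in rng.choice(cand, size=min(need, len(cand)), replace=False):
                neighbors[i].add(int(u))        # keep the graph undirected
                neighbors[int(u)].add(i)
    return neighbors

rng = np.random.default_rng(0)
s = np.array([0, 0, 0, 1, 1, 1])
neighbors = [{1, 2}, {0}, {0}, {4}, {3, 5}, {4}]  # two homophilous clusters
print(greedy_fair_augment(neighbors, s, 2, rng))
```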
An intuitive illustration showing the difficulty of guaranteeing exact fairness with respect to the neighborhoods is shown in Figure 1. In particular, we see that even in this simple example, it becomes impossible to ensure fairness with respect to each neighborhood in the graph, since when one neighborhood is made fair, it can impact the surrounding neighborhoods. It is also straightforward to see that an iterative optimization approach would incur significant computational cost without any guarantee of fairness across each neighborhood. Moreover, the neighborhoods that are hardest to repair are often the ones with the most impact when training GNNs, since they connect to many other nodes; updating such a neighborhood, and even its embedding when this approach is used during GNN training, impacts the embeddings of all connected nodes. Current et al. (2022) studied a few graph modification strategies that perform either microscopic or macroscopic edits to the input graph. One of the proposed methods adds a new node for each existing node to balance biases in the graph, whereas the other methods only include a fixed number of existing nodes and include weights for the edges as a means to debias the graph for GNN training. Another approach called FairAdj (Li et al., 2020) seeks to learn a fair adjacency matrix for a downstream link prediction task by updating the normalized adjacency matrix while keeping the original graph unchanged. This approach rewires the graph to preserve structural constraints for fairness while also trying to preserve accuracy as much as possible. Furthermore, they introduce dyadic fairness, which requires the prediction of a link between two nodes to be statistically independent of their sensitive attributes, hence \(P(g(u,v)\,|\,S(u)=S(v))=P(g(u,v)\,|\,S(u)\neq S(v))\). Yang et al. (2022) proposed data reparation through optimal transport techniques to obtain dyadic fairness. Similarly, Laclau et al. (2021) proposed a repairing procedure for the graph adjacency matrix with a trade-off between group and individual fairness. ### In-Training Most GNN-based bias mitigation techniques have focused on modifying the objective function of GNNs to learn fair and unbiased embeddings during training. This can be through the addition of constraints or regularization to the objective function, adding attention weights to GATs that focus on fairness weighting, or careful sampling of the explicit neighborhoods for updating the embedding via aggregation, as well as many other ways to mitigate bias during training, which are discussed in detail below. For instance, DegFairGCN (Liu et al., 2023) considers two groups of nodes based on low and high degree when performing neighbor aggregations, namely, \(\mathcal{S}_{0}=\{v\in\mathcal{V}\,|\,\deg(v)\leq K\}\) and \(\mathcal{S}_{1}=\mathcal{V}\setminus\mathcal{S}_{0}\), where \(\mathcal{S}_{0}\) is the low-degree node group, \(\mathcal{S}_{1}\) is the high-degree node group, and \(K\) is a threshold for creating such groups. Using these two groups, they modify the neighborhood aggregation to treat the two groups differently, attempting to debias them accordingly during training. Instead of using traditional sensitive attributes for fairness evaluation, that work used node degree as the sensitive attribute, which is problematic.
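Many of the constraint- and regularization-based objectives mentioned above reduce to augmenting the task loss with a differentiable fairness penalty. The following generic sketch (assuming PyTorch; not the objective of any particular cited paper) adds a soft statistical-parity gap on the predicted probabilities to a binary node classification loss:

```python
import torch
import torch.nn.functional as F

def fairness_regularized_loss(logits, y, s, lam=1.0):
    """Binary task loss + lam * soft statistical-parity gap (differentiable)."""
    task = F.binary_cross_entropy_with_logits(logits, y.float())
    p = torch.sigmoid(logits)                    # soft "selection rates"
    sp_gap = (p[s == 1].mean() - p[s == 0].mean()).abs()
    return task + lam * sp_gap

logits = torch.randn(8, requires_grad=True)
y = torch.tensor([1, 0, 1, 0, 1, 1, 0, 0])
s = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
loss = fairness_regularized_loss(logits, y, s, lam=0.5)
loss.backward()                                  # gradients flow through both terms
```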
FairEdit leverages both greedy edge additions and deletions to improve fairness in GNNs (Loveland et al., 2022). Recent work by Kose and Shen (2023) developed a bias-mitigating attention mechanism called FairGAT. The fairness-aware attention mechanism can be leveraged in other attention-oriented GNNs as well (Lee et al., 2019). All the approaches discussed thus far have focused on reducing bias in the neighborhood used for aggregation, either by sampling, modifying, or reweighting the nodes in the neighborhood before it is used for aggregation during training. However, performing aggregation using these locally "fair" neighborhoods has even fewer guarantees than approaches that modify the graph structure before training; see Figure 1 and the discussion in Sec. 4.1 for intuition. Other work by Palowitch and Perozzi (2020) introduced a neural network architecture component called MONET that performs training-time debiasing by ensuring the embeddings are trained on a hyperplane orthogonal to the metadata. Agarwal et al. (2021) developed a novel objective function to account for fairness and stability called NIFTY. They also introduce a layer-wise weight normalization to enforce fairness in the GNN architecture. Further, Buyl and De Bie (2020) proposed a Bayesian method called DeBayes that leverages a biased prior to learn debiased embeddings. Dong et al. (2021) introduced a rank-based approach called REDRESS for mitigating individual unfairness in GNNs, where the goal is to ensure GNNs infer similar predictions for individual nodes that are similar to one another. The approach jointly optimizes the utility maximization of GNNs and rank-based individual fairness in an end-to-end fashion. Zhu et al. (2023) proposed a fairness-aware message-passing framework for GNNs called GMMD for node classification that seeks to jointly optimize both representation fairness and graph smoothing. Similarly, Lin et al. (2023) developed a balanced message-passing approach for GNNs called BeMap. This approach uses a sampling strategy to balance the number of 1-hop neighbors of each type for every node in the graph, which is in principle similar to the first step of FairNeigh, though performed during training. Dong et al. (2022) developed an approach called Edits for mitigating both attribute-based bias and structural bias in GNNs based on the Wasserstein distance. However, in that approach attribute and structural debiasing are performed independently rather than jointly, which is important since GNNs are trained by leveraging both. More recently, He et al. (2023) proposed an efficient approach called FairMILE for ensuring fairness in GNNs via a multi-level framework that leverages graph coarsening to obtain base embeddings and then refines these to obtain an embedding for each node of the graph. There are also many other in-training approaches that leverage an adversarial framework, by incorporating the objective that an adversarial model should not have high accuracy in predicting the sensitive attribute (Khajehnejad et al., 2020; Liu et al., 2022; Singh et al., 2021; Wang et al., 2022). There have also been a number of works focused on GNN-based recommendation (Chizari et al., 2023; Liu et al., 2022; Medda et al., 2023; Salganik et al., 2022; Wu et al., 2022, 2023; Xu et al., 2023). FairRec (Patro et al., 2020) was proposed for the closely related task of recommendation.
In particular, they studied fairness in recommender systems involving customers and producers, basing FairRec on the fair allocation of indivisible goods. FairRec guarantees at least the maximin share of exposure for most producers and envy-freeness up to one good for every customer. Li et al. (2021) proposed an adversarial in-training method to learn fair user embeddings for fair recommendations. Separately, Li et al. (2022) designed a framework for fair sequential recommendations, which supports end-to-end training and learns fairness-aware preference graph embeddings. There have also been some recent works that exploit communities to obtain fair link predictions in complex networks, such as HM-EIICT (Saxena et al., 2021). Tsioutsioliklis et al. (2021) developed fairness-aware variants of PageRank, first requiring fairness on the proportion of the total PageRank score assigned to each group, and then on the personalized PageRank derived for each node. Recent work has also considered mitigating fairness issues in a wide range of different types of graphs, including hypergraphs (Wei et al., 2022), heterogeneous information networks (Cao et al., 2023; Zeng et al., 2021), knowledge graphs (Vannur et al., 2021; Wang et al., 2022), and even temporal networks (Song et al., 2022; Xu et al., 2021). For hypergraphs, Wei et al. (2022) proposed HyperGCL and showed that their method for augmenting hypergraphs improves fairness in representation learning. Cao et al. (2023) proposed FairHELP for deriving fair embeddings for heterogeneous information networks. Most recently, temporal networks have been considered by Song et al. (2022), who propose an approach for improving individual fairness for dynamic GNNs such as EvolveGCN; for this, they introduce a simple regularization-based method to achieve individual fairness in the dynamic graph setting. Other papers have identified node degree as a source of bias (Jiang et al., 2022; Kang et al., 2022; Tang et al., 2020). Kang et al. (2022) define node-degree disparity in terms of the Rawlsian difference principle and propose a RawlsGCN-Graph pre-processing method and a RawlsGCN-Grad in-processing method for fair predictive accuracy. Recent work by Loveland et al. (2022) investigated fairness in GNNs when neighborhoods in the graph are heterophilous as opposed to homophilous. In this setting, they find that several fairness metrics can be significantly improved when leveraging heterophilous GNNs that naturally handle disassortativity. ### Post-Processing Post-processing techniques take the output embeddings of a GNN model and remove bias from them. Techniques include adding filters or otherwise removing information about the sensitive attribute from those embeddings. Masrour et al. (2020) addressed the problem of homophily in link prediction with post-processing, as well as an adversarial framework. Fisher et al. (2020) developed techniques for debiasing knowledge graph embeddings. Dai and Wang (2021) proposed FairGNN, a framework for fair node classification using GNNs given limited sensitive attribute information. Bose and Hamilton (2019) investigated incorporating compositional fairness constraints into graph embeddings, creating sensitive attribute filters that can be optionally applied after training.
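In the spirit of these filter-based approaches, a minimal post-processing baseline (an illustration, not the method of any cited paper) projects the embeddings onto the subspace orthogonal to the direction separating the group means, so a linear probe along that direction can no longer recover the sensitive attribute:

```python
import numpy as np

def debias_embeddings(H, s):
    """Remove the group-mean difference direction from embeddings H (n, d)."""
    w = H[s == 1].mean(axis=0) - H[s == 0].mean(axis=0)  # bias direction
    w /= np.linalg.norm(w) + 1e-12
    return H - np.outer(H @ w, w)                        # orthogonal projection

rng = np.random.default_rng(0)
s = np.repeat([0, 1], 50)
H = rng.standard_normal((100, 16)) + 2.0 * s[:, None]    # group-shifted embeddings
H_fair = debias_embeddings(H, s)
gap = H_fair[s == 1].mean(axis=0) - H_fair[s == 0].mean(axis=0)
print(np.linalg.norm(gap))                               # ~0: group means coincide
```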
FairGO (Wu et al., 2021) takes the original embeddings from any method and leverages a composition of filters that transform them into a new filtered embedding space with improved fairness. This transformation leverages adversarial learning of a user-centric graph to obfuscate the sensitive attribute from the underlying embeddings. Kose et al. (2023) designed a fairness-aware filter to reduce the bias in the learned embeddings from GNNs by essentially removing the sensitive information. This technique can be used in other GNN designs. They also provide a theoretical analysis of its effectiveness relative to the fairness-agnostic embeddings obtained without the fairness-aware filter. ### Hybrid A hybrid technique combines two or more techniques that are used at different stages (_e.g._, pre-processing, during training, and post-processing) to ensure a better and more robust degree of fairness with respect to the sensitive attribute(s). Few papers explicitly propose hybrid methods, but combining two or more of the previous techniques is very much a possibility for improving fairness and robustness. As noted earlier, one example is rewiring the graph to ensure exact neighborhood fairness (Chen et al., 2022) and then adding a fairness constraint to the objective of a GNN model during training to further ensure fairness of the learned embeddings. While there have not been many hybrid fairness techniques for GNNs, one work by Kang et al. (2020) focused on a related graph learning method for graph mining tasks. In particular, InFoRM first modifies the graph structure to remove bias, then attempts to debias the mining model by solving an optimization problem, and finally solves a similar problem for debiasing the results from the mining model. It is straightforward to leverage many of the GNN fairness techniques discussed in the previous sections together to obtain novel hybrid approaches providing even more fair GNNs and results, with potentially stronger fairness guarantees as well. ## 5. Datasets We summarize datasets commonly used for evaluating fairness of GNNs in Table 2. Notably, the datasets are organized by application domain including **recommendation** (**bipartite graphs**), **social networks**, **collaboration networks**, **web graphs**, **similarity graphs**, and **citation networks**. Furthermore, \(|V|\) and \(|E|\) are the number of nodes and edges, whereas \(|S|\) is the number of unique values of the sensitive attribute \(S\). We also denote \(|\mathcal{S}|\) as the number of sensitive attributes and \(|\mathbf{X}|\) as the number of input features. While most work focuses on a single sensitive attribute, there are some datasets with multiple sensitive attributes that can be used for fairness techniques designed for multiple sensitive attributes. Furthermore, we also summarize the fair graph learning tasks that each dataset was used for, which include link prediction (LP), node classification (NC), and link classification (LC). Notably, there is only a single link classification dataset used in the literature.
This may be due to the task being close to real-world data found in industry, but also because evaluating fairness in link classification requires having one or more sensitive attributes on the nodes or links.

| Domain | Dataset | \(\lvert V\rvert\) | \(\lvert E\rvert\) | \(S\) (\(\lvert S\rvert\)) | \(\lvert\mathbf{X}\rvert\) | Description | Task |
|---|---|---|---|---|---|---|---|
| REC | ML-100K | 2,625 | 100K | gender (2), age (7), job (21) | 12 | user-by-movie | LP |
| REC | ML-1M | 10,040 | 1M | gender (2), age (7), job (21) | 11 | user-by-movie | LP |
| REC | LastFM | 49,900 | 518,647 | gender (2), age (3) | – | user-by-artist | LP |
| REC | Amazon | 334,863 | 925,872 | product category (4) | – | user-by-product | LP |
| REC | Yelp | 12,683 | 211,721 | food genre (4) | 14 | user-by-business | LP |
| SOCIAL | Pokec | 1.63M | 30.6M | gender (2), region (2) | 59 | friendship | LP |
| SOCIAL | Pokec-n | 66,569 | 729,129 | region (2) | 59 | friendship | NC |
| SOCIAL | Pokec-z | 67,797 | 882,765 | region (2) | 59 | friendship | NC |
| SOCIAL | Twitter\({}^{*}\) | 81,306 | 1,768,149 | political view (2) | 1,364 | who-follows-whom | LP |
| SOCIAL | Facebook | 1,034 | 26,749 | gender (2) | 224 | friendship | LP |
| SOCIAL | fb-Ok97 | 3,111 | 73,230 | gender (2) | 8 | friendship | LP, NC |
| SOCIAL | fb-UNC28 | 4,018 | 65,287 | gender (2) | 8 | friendship | LP, NC |
| SOCIAL | fb-Rice | 1,205 | 42,443 | age (2) | 2 | friendship | LP, NC |
| SOCIAL | Google+ | 4,938 | 547,923 | gender (2) | 5 | friendship | LP |
| SOCIAL | NBA | 403 | 10,621 | country (2) | 96 | who-follows-whom | NC |
| SOCIAL | fb-Gender | 7,315 | 89,733 | gender (2) | – | friendship | LP |
| SOCIAL | Retweet-pol | 18,470 | 61,157 | political view (2) | – | friendship | LP |
| SOCIAL | Dutch school | 26 | 221 | gender (2) | – | friendship | LP |
| SOCIAL | Epinion | 8,806 | 157,887 | – | – | user-trusts-user | LP |
| SOCIAL | Ciao | 7,317 | 85,205 | – | – | user-trusts-user | LP |
| SOCIAL | Filmtrust | 3,579 | 35,494 | – | – | user-trusts-user | LP |
| COLLAB | Citeseer | 3,327 | 4,732 | topic (6) | 3,703 | coauthorship | LP |
| COLLAB | Cora | 2,708 | 5,429 | topic (7) | 1,433 | coauthorship | LP |
| COLLAB | Pubmed | 19,717 | 44,338 | topic (3) | 500 | coauthorship | LP |
| COLLAB | DBLP | 3,980 | 6,965 | continent (5), gender (2) | – | coauthorship | LP |
| COLLAB | Hospital-cont | 75 | 1,139 | job (4) | – | proximity | LC |
| WEB | WebKB | 265 | 530 | topic (5) | 500 | web graph | LP |
| WEB | Chameleon | 2,277 | 31,371 | gen. node degree (2) | 2,325 | web graph | NC |
| WEB | Squirrel | 5,201 | 198,353 | gen. node degree (2) | 2,089 | web graph | NC |
| SIM | German | 1,000 | 21,742 | gender (2) | 27 | client similarity | NC |
| SIM | Recidivism | 18,876 | 311,870 | race (2) | 18 | defendant sim. | NC |
| SIM | Credit | 30,000 | 1,421,858 | age (2) | 13 | individual sim. | NC |
| CIT | EMNLP | 2,600 | 7,969 | gen. node degree (2) | 8 | citation network | NC |

Table 2: Summary of datasets used for fairness evaluation of GNNs. Note \(|V|\) and \(|E|\) are the number of nodes and edges, respectively. Further, \(|S|\) is the number of unique values of the sensitive attribute \(S\). Also, \(|\mathcal{S}|\) is the number of sensitive attributes and \(|\mathbf{X}|\) is the number of input features (if any). We categorize datasets according to their domain (recommendation, social networks, collaboration, web graphs, similarity, or citation networks) along with the task that each dataset was used for.

## 6. Open Problems & Challenges
In this section, we discuss open problems and highlight important challenges for future work. ### Feature vs. Feature-less Setting Previous work on developing fairness techniques for GNNs has mainly focused on graphs with features. The reason for this is quite simple: GNNs require an initial feature matrix, and therefore, most work has simply used datasets that naturally come with an input feature matrix. However, this severely limits the graph datasets used for evaluation to those that have one or more sensitive attributes as well as an entire feature matrix. Most importantly, input features are often highly correlated with the sensitive attribute, and therefore potentially (and often) add multiple confounding factors when evaluating fairness techniques that are mostly focused on the structure of the graph, as opposed to the correlation of input features with respect to the sensitive attribute. For these fundamental reasons, one should also consider a new setting when evaluating techniques for improving fairness in GNNs, which we call the _feature-less setting_. In this proposed setting, we do not include any underlying features as input, and instead initialize the feature matrix either uniformly at random or based on the graph structure, e.g., using SVD or an unsupervised embedding approach such as node2vec (Grover and Leskovec, 2016) or DeepGL (Rossi et al., 2018). This new feature-less setting may actually be more useful, as most graphs do not naturally come with input features (see NetworkRepository by Rossi and Ahmed (2015)), and it therefore opens the door for evaluation on a much larger scale and gives rise to entirely new use cases and practical applications for such approaches. Nevertheless, one can also study graphs with features under this setting by simply ignoring them. Studying both the feature and feature-less settings allows for a better evaluation and understanding of an approach under different conditions, controlling for factors that may influence fairness, accuracy, and the conclusions drawn from the experiments. Understanding how previous work performs in this setting remains an open problem. ### Theoretical Limits Understanding the theoretical limits in terms of the fairness and accuracy trade-offs and deriving theoretical guarantees for such techniques is fundamentally important. Despite this, theoretically analyzing existing fairness techniques for GNNs remains a largely open problem for future work. ### Link Classification While most work has focused on developing techniques for either node classification or link prediction, the problem of link classification remains largely unexplored. In this task, we are given a small fraction of link labels for training and need to predict the remaining held-out labels of the links. This is fundamentally different from link prediction since in link classification we are given the entire graph \(G\) along with a sensitive attribute on the nodes that is never seen by the algorithm, and need to correctly infer the label of a link (which already exists in the graph). Link classification in GNNs is often a multi-class problem where the number of unique labels to infer is much larger than two, the simplest binary link classification setting. ### Heterogeneous and Temporal Networks Most work has only focused on developing fairness techniques for GNNs on simple graphs.
However, it is unclear how such techniques perform when the graph is heterogeneous, that is, when nodes and edges may be of completely different types. Similarly, ensuring fairness when the graph structure and possibly even its attributes are changing over time remains an open and challenging problem. These network types may also require new fairness metrics for evaluating such techniques. ### Large Language Models Recently, GNNs have found applications in language models (Meng et al., 2021). Fairness in such models is of fundamental importance due to their wide-scale use in many applications, yet it is very challenging to ensure due to how these models are trained. Future work should focus on developing fair and unbiased GNN-based language models. ## 7. Conclusion Given the importance of GNNs due to their representational power and state-of-the-art predictive performance, this paper has surveyed techniques for improving the fairness of GNNs. We presented a taxonomy for fairness in GNNs that categorizes techniques based on the type of input graph supported, the type of fairness technique (pre-processing, in-training, post-processing, hybrid), and the graph learning task. We also introduced an intuitive taxonomy for graph fairness evaluation metrics, including graph-level, neighborhood-level, embedding-level, and prediction-level fairness metrics for GNNs. Furthermore, we summarized the graph datasets useful for benchmarking GNN fairness techniques and categorized them according to their domain and graph learning task. As discussed in Section 6, there remains significant work to do. One important and largely open problem is understanding the theoretical limits in terms of the fairness and accuracy trade-offs and deriving theoretical guarantees for such techniques. Theoretically analyzing existing fairness techniques across the different categories of approaches (pre-processing, in-training, and post-processing) remains a largely open problem for future work as well.
2308.04169
Dual input neural networks for positional sound source localization
In many signal processing applications, metadata may be advantageously used in conjunction with a high dimensional signal to produce a desired output. In the case of classical Sound Source Localization (SSL) algorithms, information from high-dimensional, multichannel audio signals received by many distributed microphones is combined with information describing acoustic properties of the scene, such as the microphones' coordinates in space, to estimate the position of a sound source. We introduce Dual Input Neural Networks (DI-NNs) as a simple and effective way to model these two data types in a neural network. We train and evaluate our proposed DI-NN on scenarios of varying difficulty and realism and compare it against an alternative architecture, a classical Least-Squares (LS) method as well as a classical Convolutional Recurrent Neural Network (CRNN). Our results show that the DI-NN significantly outperforms the baselines, achieving a five times lower localization error than the LS method and two times lower than the CRNN in a test dataset of real recordings.
Eric Grinstein, Vincent W. Neo, Patrick A. Naylor
2023-08-08T09:59:56Z
http://arxiv.org/abs/2308.04169v1
# Dual input neural networks for positional sound source localization ###### Abstract In many signal processing applications, metadata may be advantageously used in conjunction with a high dimensional signal to produce a desired output. In the case of classical Sound Source Localization (SSL) algorithms, information from high-dimensional, multichannel audio signals received by many distributed microphones is combined with information describing acoustic properties of the scene, such as the microphones' coordinates in space, to estimate the position of a sound source. We introduce Dual Input Neural Networks (DI-NNs) as a simple and effective way to model these two data types in a neural network. We train and evaluate our proposed DI-NN on scenarios of varying difficulty and realism and compare it against an alternative architecture, a classical Least-Squares (LS) method as well as a classical Convolutional Recurrent Neural Network (CRNN). Our results show that the DI-NN significantly outperforms the baselines, achieving a five times lower localization error than the LS method and two times lower than the CRNN in a test dataset of real recordings. Keywords: sound source localization; multichannel audio processing; multimodal machine learning; convolutional recurrent neural networks ## 1 Introduction Most signals, such as audio and images, contain metadata. Metadata can be signal-based, describing quantitative properties of the signal such as its sampling rate, or semantic, describing, for example, contextual properties. In speech processing, semantic metadata could consist of the speaker's language or gender. Whether signal-based or semantic, including metadata as a secondary input to neural network models may provide relevant information, translating into savings in training time and model parameters as well as greater flexibility. However, metadata typically has a different dimensionality than the input signals, making its incorporation into those models non-trivial. The main focus of this paper is to study the effectiveness of schemes for jointly processing signals and exploiting metadata using neural network models. We focus on the task of Sound Source Localization (SSL) [1] using distributed microphone arrays to demonstrate the effectiveness of our proposed approach. In the context of SSL, relevant metadata exploited by classical methods includes the microphone positions, which can be acquired by manual measurement or using self-calibration [2] methods. Other relevant metadata includes the room dimensions and its reverberation time. SSL refers to the task of estimating the spatial location of a sound source, such as a human talker or a loudspeaker. In this scenario, metadata refers to properties of the acoustic scene such as the coordinates of the microphones, the dimensions of the room, and the reflection coefficients of the walls. SSL has many applications, including noise reduction and speech enhancement [3], camera steering [4] and acoustic Simultaneous Localization and Mapping (SLAM) [5]. In turn, distributed microphone arrays have become an active research topic in the signal processing community due to their versatility. Such arrays may be composed of multiple network-connected devices, including everyday devices such as cell phones, smart assistants, and laptops. The array and the constituent devices may be configured as a Wireless Acoustic Sensor Network (WASN) [6].
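To illustrate the dual-input idea concretely, here is a minimal sketch assuming PyTorch; the branch architectures and sizes are illustrative and are not the exact DI-NN configuration of this paper. A convolutional branch encodes the multichannel signal representation, a small fully-connected branch encodes the metadata (e.g., microphone coordinates), and the two encodings are concatenated before a regression head that outputs the source position:

```python
import torch
import torch.nn as nn

class DualInputNet(nn.Module):
    """Two-branch network: high-dimensional signal + low-dimensional metadata."""
    def __init__(self, n_mics=4, n_freq=64, meta_dim=12, out_dim=2):
        super().__init__()
        self.signal_branch = nn.Sequential(          # encodes (B, n_mics, n_freq, T)
            nn.Conv2d(n_mics, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 32)
        )
        self.meta_branch = nn.Sequential(            # encodes mic coordinates etc.
            nn.Linear(meta_dim, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU(),
        )
        self.head = nn.Sequential(                   # fuse and regress position
            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, out_dim),
        )

    def forward(self, spec, meta):
        z = torch.cat([self.signal_branch(spec), self.meta_branch(meta)], dim=-1)
        return self.head(z)

net = DualInputNet()
spec = torch.randn(8, 4, 64, 100)   # batch of 4-channel spectrogram-like inputs
meta = torch.randn(8, 12)           # e.g. 4 mics x (x, y, z) coordinates
print(net(spec, meta).shape)        # torch.Size([8, 2]) -> (x, y) estimates
```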
SSL approaches may be divided into classical signal processing-based and data-driven neural network-based methods. By explicitly exploiting metadata describing microphone positions and room dimensions, classical approaches may be applied to different rooms and microphone configurations. Conversely, neural network approaches have recently achieved state-of-the-art results for source localization [7; 8; 9], at the expense of requiring one network to be trained for every microphone topology. One reason current neural approaches do not incorporate the microphones' positional information is that the microphones' signal and positional data are very different from one another in nature and dimension. Previous work which discusses the joint processing of signals and metadata is [10], where a single input neural network is used to process metadata in conjunction with a low-dimensional physical signal. However, unlike our work, the method of [10] is restricted to multilayer perceptron architectures and one-dimensional
2303.13997
PowerPruning: Selecting Weights and Activations for Power-Efficient Neural Network Acceleration
Deep neural networks (DNNs) have been successfully applied in various fields. A major challenge of deploying DNNs, especially on edge devices, is power consumption, due to the large number of multiply-and-accumulate (MAC) operations. To address this challenge, we propose PowerPruning, a novel method to reduce power consumption in digital neural network accelerators by selecting weights that lead to less power consumption in MAC operations. In addition, the timing characteristics of the selected weights together with all activation transitions are evaluated. The weights and activations that lead to small delays are further selected. Consequently, the maximum delay of the sensitized circuit paths in the MAC units is reduced even without modifying MAC units, which thus allows a flexible scaling of supply voltage to reduce power consumption further. Together with retraining, the proposed method can reduce power consumption of DNNs on hardware by up to 78.3% with only a slight accuracy loss.
Richard Petri, Grace Li Zhang, Yiran Chen, Ulf Schlichtmann, Bing Li
2023-03-24T13:52:07Z
http://arxiv.org/abs/2303.13997v2
# PowerPruning: Selecting Weights and Activations for Power-Efficient Neural Network Acceleration ###### Abstract. Deep neural networks (DNNs) have been successfully applied in various fields. A major challenge of deploying DNNs, especially on edge devices, is power consumption, due to the large number of multiply-and-accumulate (MAC) operations. To address this challenge, we propose PowerPruning, a novel method to reduce power consumption in digital neural network accelerators by selecting weights that lead to less power consumption in MAC operations. In addition, the timing characteristics of the selected weights together with all activation transitions are evaluated. The weights and activations that lead to small delays are further selected. Consequently, the maximum delay of the sensitized circuit paths in the MAC units is reduced even without modifying MAC units, which thus allows a flexible scaling of supply voltage to reduce power consumption further. Together with retraining, the proposed method can reduce power consumption of DNNs on hardware by up to 78.3% with only a slight accuracy loss. ## 1. Introduction Deep neural networks (DNNs) have been successfully applied in various fields, e.g., image/speech recognition. In DNNs, a huge number of multiply-and-accumulate (MAC) operations with weights need to be executed, which correspondingly causes high power consumption in hardware. This high power consumption poses challenges in applying DNNs in power-constrained computing scenarios, e.g., plant disease detection in agriculture (Beng et al., 2015) and medical diagnosis devices (Beng et al., 2015). To overcome the challenge above, various methods at the software and hardware levels have been explored. At the software level, pruning has been proposed to reduce the number of weights in DNNs and thus power consumption. For example, (Beng et al., 2015) proposes to prune weights with small absolute values to reduce the computation cost while maintaining inference accuracy. In addition, structured pruning (Beng et al., 2015) has been developed to facilitate the mapping of DNNs onto hardware. Besides pruning, quantization (Beng et al., 2015) is another major category of methods to reduce the computation cost of DNNs. With quantization, MAC units are implemented to process only integer instead of floating-point arithmetic, leading to a significant power reduction (Beng et al., 2015). At the hardware level, various architectures have been proposed to explore how MAC units are organized and how data flow through the accelerators to reduce power consumption. The systolic array from Google (Grover et al., 2016; Chen et al., 2016) adopts a weight-stationary data flow, where weights are stationary and activations and partial sums are moved across the array to maximize data reuse. Accordingly, the amount of memory access and thus power consumption can be reduced. In addition, the Eyeriss structure (Eyeriss, 2016) uses a row-stationary data flow where the multiplication of rows of filters and activations is computed in a MAC array to reduce data movement and thus power consumption. The hardware architectures above have also been extended to reduce power consumption further. For example, a clock-gating scheme is proposed in (Han et al., 2017) to disable the operations of unused MAC units to reduce dynamic power consumption. In (Grover et al., 2016), power-gating unused processing elements is proposed to reduce leakage power in idle hardware units.
In addition, an early-stop technique in hardware has been proposed in (Li et al., 2017) to skip unnecessary MAC operations, though a complex control logic is needed to implement this technique. Furthermore, GreenTPU in (Li et al., 2018) scales the supply voltage of the computing logic down to near-threshold levels while keeping a high compute performance. But this method requires complex control logic to detect timing errors on the fly and to track activation sequences that cause timing errors. Similarly, Minerva (Minerva, 2018) proposes a voltage scaling of memory units storing weights while exploiting the flexibility of neural networks to tolerate weight errors. Different from the previous methods, most of which require special hardware architecture or control logic, we propose PowerPruning, a novel method exploiting the power and timing characteristics of weights and activations to reduce power consumption without modifying MAC units. PowerPruning is the first technique to evaluate the power and timing properties of each individual weight value and adjust neural networks accordingly. This technique is compatible with the previous methods for power reduction of executing neural networks and can be integrated with them seamlessly. The key contributions are summarized as follows: * The power consumption of weight values is evaluated with respect to activations when the MAC operations are executed on hardware. Afterwards, weight values that lead to less power consumption in MAC operations are preferred for training neural networks to enhance the power efficiency. * We consider the actual delays of the MAC operations in hardware with respect to weight values and activations. In training neural networks, the weight values and activations that sensitize paths with small delays are selected. Correspondingly, the circuit can run faster without modifying MAC units. We then scale the supply voltage to reduce the power consumption while maintaining the original computational performance. * Neural networks are retrained by restricting weights and activations to the selected values while maximizing the inference accuracy. With the selected weights and activations, power consumption of DNNs can be reduced by up to 78.3% with only a slight accuracy loss. The rest of the paper is structured as follows. Section 2 explains the motivation of this work. Section 3 elaborates the details of the proposed technique. Experimental results are presented in Section 4 and conclusions are drawn in Section 5. ## 2. Motivation In executing DNNs on hardware platforms, the huge number of MAC operations may consume much power. Existing methods often introduce hardware modifications, which may incur extra hardware cost or make the design specific for individual neural networks. On the contrary, we address this power consumption issue by examining the power and timing properties of the weight values and activations. A MAC unit calculates the multiplication of a weight and an activation and adds the result to a partial sum, as illustrated in Figure 1. Assume the weight of a neural network is quantized to \(n\) bits. Correspondingly, there are \(2^{n}\) possible weight values. These weight values are one of the inputs to the digital logic implementing the MAC operations. Since different weight values cause different signal switching activities inside the MAC units, they also exhibit different average power consumption with respect to the activation transitions and partial sum transitions.
For example, the weight values \(2^{m},m=0,1,\ldots,n-2\), lead to less power consumption, because multiplications with these weight values are actually shift operations and thus activate fewer signal propagations in the circuit. To demonstrate the different power consumption of weight values, we evaluated the average power consumption of different weight values in a MAC unit of a \(64\times 64\) systolic array. We simulated the execution of LeNet-5 processing 100 pictures randomly selected from the CIFAR-10 dataset. During simulation, we collected statistics of the switching activities of various signals inside the systolic array. Based on these data, we estimated the average power consumption of each weight value using Power Compiler from Synopsys. Figure 2 illustrates the average power consumption of the weight values obtained by the simulation described above. According to this figure, different weight values can lead to substantially different average power consumption. For example, the quantized weight value -105 has a large average power consumption of 1,029 \(\mu\)W, while the quantized weight value -2 has only 539 \(\mu\)W. _According to this observation, by restricting neural networks to prefer the weight values with small average power consumption, the overall power consumption of executing neural networks can be lowered._ Besides different power characteristics, different weights also exhibit different timing profiles in a MAC unit. Inside a MAC unit, shown in Figure 1, there are many combinational paths, which have different delays and are triggered by specific input data, i.e., weight, activation, and partial sum. If the weight is fixed to a given value, some combinational paths in the MAC unit cannot be sensitized. Accordingly, the delay of the MAC unit may differ with respect to different weight values. To demonstrate this difference, we conducted timing analysis of the MAC unit with fixed weight values and all activation transitions using Modelsim. Figure 3 illustrates the delay profiles of two quantized weight values -105 and 64, where the x-axis shows the delay and the y-axis shows the frequency of this delay appearing with respect to all possible activation transitions. Figure 3 confirms that different weight values lead to different delays. In addition, it shows that the delays can be reduced further if some activations can be pruned from the neural network, e.g., the activation transitions triggering delays on the far right end of the x-axis. _Since the clock period of a circuit is determined by the maximum delay of all the combinational paths, the clock frequency of the MAC unit and thus the computational performance can be increased by pruning weights and activations according to their timing profiles. Alternatively, the supply voltage can be lowered to reduce power consumption further, while maintaining the original clock frequency._ ## 3. Weight and Activation Selection for Power-Efficient Neural Network Acceleration In this section, we introduce the proposed PowerPruning method to reduce power consumption in digital neural network accelerators. The weight selection according to the average power consumption is first explained in Section 3.1. Afterwards, the selection of weights and activations with respect to their timing characteristics is explained in Section 3.2. The retraining of neural networks by restricting weights and activations to the selected values to reduce power consumption is described in Section 3.3.
### Weight selection according to power consumption As shown in Figure 2, different weights in a MAC unit lead to different average power consumption. To take advantage of this characteristic to reduce power consumption of DNN accelerators, the average power consumption of all the 8-bit integer weight values in a MAC unit should be evaluated. To do this, the input of the MAC unit corresponding to the weight is fixed to a given value, as shown in Figure 1. The various combinations of activation transitions and partial sum transitions are fed into the other inputs of the MAC unit to obtain the switching activities of the MAC unit. Based on these switching activities, the power consumption for the fixed weight value can be evaluated using Power Compiler from Synopsys. Two challenges in evaluating the average power consumption of a weight should be addressed. First, the number of combined transitions of activations and partial sums is huge, e.g., \(2^{(8+22)\times 2}=2^{60}\approx 10^{18}\), when the activations and the partial sums are quantized to 8 and 22 bits, respectively, for a \(64\times 64\) systolic array. The \(\times 2\) in the exponent is due to the fact that the power consumption is caused by the transitions from one combination of activation and partial sum to another combination, instead of the static values of the activation and partial sum. Accordingly, simulating all these transitions to identify the power consumption is very time-consuming. Second, just sampling all possible combined transitions of activations and partial sums does not reflect the probabilities of such transitions when executing neural networks in a systolic array. For example, a combined transition may appear more frequently than other transitions, so that it should contribute more to the result of power evaluation than others. To deal with these challenges, we first identify the transition distributions for activations and partial sums with real data executing on the systolic array, described as follows. In addition, we partition the value range of the partial sum into a small number of bins to reduce the partial sum transition space and then evaluate the transition probability from one bin to another bin. #### 3.1.1. Evaluation of activation transition distribution For the 8-bit activation as an input to a MAC unit, the total number of possible transitions is \(2^{8\times 2}=2^{16}\). To obtain the activation transition distribution, we simulate the activities of a systolic array and count the frequency of each individual transition. For example, for LeNet-5 on CIFAR-10, we randomly select 100 pictures and execute the neural network on the systolic array. In total, we counted approximately \(10^{17}\) activation transitions. Since this number is larger than the number of possible transitions \(2^{16}\), the result will well exhibit the distribution of the activation transitions. Figure 4(a) shows the resulting activation transition distribution, where darker colors represent a lower probability and brighter colors a higher probability.
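The empirical distribution of Sec. 3.1.1 is simply a normalized 256-by-256 matrix of transition counts. An illustrative sketch (ours, with stand-in data, not the authors' tooling):

```python
import numpy as np

def activation_transition_distribution(streams, n_levels=256):
    """Estimate P(a_prev -> a_next) from observed 8-bit activation streams,
    one stream per MAC input; cf. Figure 4(a)."""
    T = np.zeros((n_levels, n_levels))
    for a in streams:
        np.add.at(T, (a[:-1], a[1:]), 1.0)       # count consecutive pairs
    return T / T.sum()

rng = np.random.default_rng(0)
# stand-in for activations observed while simulating a network on the array
streams = [rng.integers(0, 256, size=1000) for _ in range(16)]
P = activation_transition_distribution(streams)
print(P.shape, P.sum())                          # (256, 256) 1.0
```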
#### 3.1.2. Evaluation of partial sum transition distribution and transition space reduction

A partial sum has 22 bits in a systolic array with the size of \(64\times 64\), which results in \(2^{22\times 2}=2^{44}\approx 1.8\times 10^{13}\) possible transitions. If we simulated 100 pictures on the systolic array, we would obtain approximately \(2.2\times 10^{8}\) partial sum transitions, which is much smaller than the number of possible transitions and cannot produce a trustworthy distribution. Increasing the number of pictures in simulation is not a viable solution due to runtime. To solve this problem, we partition the value range of the partial sum into a small number of bins. Accordingly, instead of evaluating the transition probability of individual partial sum values, we evaluate the transition probability from one bin to another bin.

To partition all partial sums into a small number of bins, we should guarantee that within each bin the switching activities of the partial sums are as similar as possible. We partition the partial sums by keeping the number of consecutive most significant bits with the same value as large as possible. Table 1 lists the lower and upper bounds of some selected bins. All values which are in the same bin have the same number of consecutive most significant bits with the same value. In total, we end up with 43 bins. This partition of the partial sums allows us to capture how many most significant bits either stay constant if during a transition the sign does not change, or how many most significant bits change if the sign changes during a transition. This binning approach is only a heuristic solution and future work is needed to capture the bit-switching characteristics of the partial sums more accurately.

\begin{table} \begin{tabular}{c c c} \hline Bin index & Lower bound & Upper bound \\ \hline 0 & 1000000000000000000000 & 1011111111111111111111 \\ 8 & 1111111110000000000000 & 1\ldots \\ \hline \end{tabular} \end{table} Table 1. Lower and upper bounds of some selected partial sum bins.

After the partition of partial sums into bins, we simulated 100 pictures and assigned the real transitions into these bins. Afterwards, the probabilities of the transitions between bins can be identified, similar to the evaluation of the activation transition distribution in Section 3.1.1. Figure 4(b) shows the partial sum transition distribution of the bins. It can be observed that the full value range of the partial sums is rarely used. The bright diagonal line from the upper left to the lower right corner indicates that there are many transitions between partial sums with similar values. However, there is also a slightly weaker diagonal line from the upper right to the lower left corner, showing that a portion of partial sums changes their signs during transitions and causes relatively large switching activities.
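The bin assignment itself can be sketched compactly. This is a minimal illustration for 22-bit two's-complement partial sums, assuming bins are labeled by the sign bit together with the length of the leading run of identical bits; the exact numeric bin indexing of Table 1 may differ from this label:

```python
def partial_sum_bin(value, width=22):
    """Map a two's-complement partial sum to its bin (cf. Section 3.1.2).

    Values are grouped by how many consecutive most significant bits share
    the same value, so all members of a bin switch a similar number of
    high-order bits during a transition.
    """
    bits = (value + (1 << width)) % (1 << width)   # two's-complement bit pattern
    sign = (bits >> (width - 1)) & 1               # MSB, i.e., the sign bit
    run = 0
    for i in range(width - 1, -1, -1):             # scan from the MSB downwards
        if (bits >> i) & 1 == sign:
            run += 1
        else:
            break
    return (sign, run)                             # bin label: (sign, run length)
```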
#### 3.1.3. Weight selection

With the distributions identified above, we sample 10,000 transitions of both activations and partial sums according to their probabilities. The combined transitions are used to simulate the activities of the MAC unit with the weight input fixed to specific values. The resulting switching activities are then used to calculate the average power consumption of the MAC unit for this weight. This simulation is repeated for each individual weight value and the result is shown in Figure 2, where the power consumption of each weight varies greatly. In this result, there is also a trend that weights close to zero have especially low power consumption, with weight zero having by far the lowest. Based on the result of power analysis, we first conduct conventional pruning to maximize the number of weights with zero value to reduce power consumption. Afterwards, we select weight values that lead to small power consumption by setting a power threshold, e.g., 900 \(\mu\)W in Figure 2. By setting the threshold lower, we can achieve potentially more power savings by excluding more high-power weight values. However, the accuracy of the DNN may degrade. Therefore, a tradeoff between power saving and inference accuracy should be made.

### Weight and activation selection according to timing profiles

According to Figure 3, weight values exhibit different timing characteristics. Even for the same weight, different activation transitions lead to different delays. To identify weight values and activations with small delays, the timing of each weight value with respect to activation transitions and partial sum transitions in the MAC unit should be analyzed. Two types of timing analysis methods, dynamic timing analysis and static timing analysis, are available for this task. The former is conducted by applying input transitions to a circuit and evaluating the delays of the correspondingly triggered paths, while the latter evaluates the delay statically without considering which paths are actually triggered. The latter is conservative since the delays of some paths that are never activated are also included, so the clock frequency of the circuit may be unnecessarily lowered.

To evaluate the timing profile of a weight value, an intuitive idea is to fix the weight input into the MAC unit and then apply dynamic timing analysis with all the transition patterns of activations and partial sums to simulate the unit. The challenge of this method is that the number of combined transitions of activations and partial sums is huge, as described in Section 3.1. Simulating the delay of the MAC unit with respect to all these combinations is thus time-consuming. To reduce the runtime of timing analysis, we separate the timing analysis of the multiplier and the adder in the MAC unit. Specifically, we apply static timing analysis on the adder to avoid the consideration of input transitions, because the number of inputs to the adder is very large. On the other hand, the multiplier is evaluated using accurate dynamic timing analysis, since the delay of the multiplier usually dominates the delay of the MAC unit and this delay can be lowered by filtering out some weight values and activation values. To conduct dynamic timing analysis of the multiplier for a weight value, we simulate the multiplier by fixing the weight input and enumerating the \(2^{8\times 2}\) possible transitions of the activations. Static timing analysis of the adder is conducted by the built-in timing analyzer in Design Compiler from Synopsys. To incorporate the relation between the timing paths in the multiplier and the adder, we evaluate the largest delay starting from each individual bit of the product to the output of the adder. Afterwards, the largest delay of the MAC unit with respect to the given weight value is calculated by adding the delays from the input activation to the output bits of the multiplier and the delays from the corresponding product bits to the output of the adder.

Figure 5 illustrates the concept of timing analysis of the MAC unit, where the quantized weight 1, the activation transition from quantized 1 to quantized 2, and four product bits at the output of the multiplier are used as an example. With the dynamic timing analysis applied on the multiplier, the delays from the input activation to Product[0] and Product[1] are 5 and 8, respectively. The delays to the other two product bits can be 0 if the combinational paths to them are not activated by the activation transition. With static timing analysis applied on the adder, the delays from the output bits of the multiplier to the output of the adder are 4, 3, 2, 1, respectively. Assume the delay from the partial sum to the output of the adder is 6, as returned by static timing analysis. The largest delay of the MAC unit is thus \(\max\{5+4,8+3,6\}=11\).
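The delay combination in this example can be written out directly. The sketch below is illustrative only; the function name and the list-based delay representation are our assumptions:

```python
def mac_delay(mult_delays, adder_delays, psum_delay):
    """Combine per-bit delays as in the Figure 5 example.

    mult_delays[b]:  dynamic-timing delay from the activation input to product
                     bit b for one weight value and one activation transition
                     (0 if the paths to bit b are not sensitized).
    adder_delays[b]: static-timing delay from product bit b to the adder output.
    psum_delay:      static-timing delay from the partial-sum input to the output.
    """
    through_mult = [m + a for m, a in zip(mult_delays, adder_delays) if m > 0]
    return max(through_mult + [psum_delay])

# Reproduces the worked example in the text: max{5+4, 8+3, 6} = 11.
assert mac_delay([5, 8, 0, 0], [4, 3, 2, 1], 6) == 11
```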
The timing analysis method described above is applied for each weight individually. After that, all the delays of weights with respect to activation transitions can be obtained. To select weights and activations with small delays, we first set a delay threshold and iteratively remove weights or activations that lead to delays larger than the given threshold. The iterations end when all the delays of the remaining weights and activations are smaller than the given delay threshold. Figure 6 illustrates an example with the delay threshold set to 90. In the first step, we find the largest delay 99. Since 99 is larger than the specified threshold, we have to remove either \(w_{1}\), \(a_{5}\) or \(a_{8}\) to exclude the corresponding combination. Since the removal of either \(w_{1}\), \(a_{5}\) or \(a_{8}\) also affects other combinations in Figure 6, it is difficult to find the optimal sequence in which to remove the weights and activations. Accordingly, we randomly remove any of them and then remove the other combinations containing the removed weight or activation in Figure 6. For example, removing \(w_{1}\) also leads to the removal of the first combination in Figure 6. To avoid local optima, we execute this process several times and choose which weight or activation to remove in each step randomly. The removal process ends when the maximum delay of all the combinations is lower than the given threshold 90. The result is a set of weights and activations that satisfy the delay requirements. While pruned weight values can be avoided during training of neural networks, the filtering of activations needs to be integrated into the activation function after each layer.

### Neural network training for power reduction

To reduce power consumption of DNNs on hardware, we first apply conventional pruning to remove weights whose absolute values are close to zero. Afterwards, we select weights that lead to small power consumption by setting a power threshold. The initial power threshold is 900 \(\mu\)W and it is iteratively reduced by 50 \(\mu\)W to select weights. In each iteration, the neural networks are retrained with the selected weights to verify the inference accuracy. During retraining, we force the weights to take the restricted values in the forward propagation. In the backward propagation, the straight-through estimator [15] is adopted to skip the restriction operation. The iterations end when the inference accuracy starts to drop noticeably. After the power threshold is determined, we then select weight values and activations that lead to small delays by setting a delay threshold. The initial delay threshold is 170 \(ps\) and the delay threshold is iteratively reduced by 10 \(ps\) to select weight values and activations. In each iteration, the neural networks are retrained and verified. When the inference accuracy drops by around 5% of the original inference accuracy of the neural networks, the best training result is returned. When executing the neural networks, if the original clock frequency should be maintained, we can lower the supply voltage to reduce power consumption further. We use the results in [16] to determine the relation between supply voltage and the delay of the circuit. The scaling of dynamic power consumption and leakage is conducted according to [17].
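The restriction of weights to the selected values with a straight-through backward pass can be sketched in PyTorch as follows. This is a schematic illustration, not the authors' released implementation; the class name and the nearest-value projection are our assumptions:

```python
import torch

class RestrictToSelected(torch.autograd.Function):
    """Project weights onto the selected low-power values (cf. Section 3.3).

    Forward: snap each weight to the nearest value in the selected set.
    Backward: straight-through estimator [15], i.e., gradients pass
    through the projection unchanged.
    """
    @staticmethod
    def forward(ctx, w, selected):
        # selected: 1-D tensor of allowed (dequantized) weight values.
        dist = (w.unsqueeze(-1) - selected).abs()
        return selected[dist.argmin(dim=-1)]

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None            # straight-through: skip the projection

# Usage inside a layer's forward pass (illustrative):
# w_restricted = RestrictToSelected.apply(self.weight, selected_values)
```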
Figure 5. Concept of timing analysis of the MAC unit.

Figure 6. Concept of weight and activation selection for delay reduction.

## 4. Experimental Results

To verify the proposed method, we tested four different neural network and dataset combinations, as shown in the first column of Table 2. The weights and activations were quantized to eight bits. The neural networks were trained using Tensorflow while considering quantization (Cheng et al., 2015). In Tensorflow, the number of 8-bit weights is 255 instead of 256 to keep the weight distribution symmetric, while the number of 8-bit activations is 256. After training, small weights were pruned to compress the neural network. We then applied the proposed method to reduce the power consumption. For LeNet-5, ResNet-20 and ResNet-50, an Nvidia Quadro RTX 6000 GPU with 24 GB was used for training, and for EfficientNet-B0-Lite an Nvidia A100 80 GB GPU was used. The number of times the selection of weights and activations with small delays in Section 3.2 is executed was set to 20 in the experiments.

To demonstrate the effectiveness of the proposed method on different types of accelerators, two different hardware implementations of the systolic array were evaluated. In the optimized hardware architecture (Optimized HW), clock gating of a MAC unit in case of a zero weight to reduce dynamic power consumption and power gating of whole unutilized columns in the systolic array to reduce both the dynamic and static power consumption are applied. In the standard architecture (Standard HW), none of these power-saving features were applied. The power consumption during inference was estimated with Power Compiler by simulating the systolic array executing the neural networks using Modelsim. The simulation was conducted using a netlist description of the systolic array synthesized with the NanGate 15 nm cell libraries (Nvidia, 2015) and a clock frequency of around 5 GHz. Since cycle-accurate simulations are extremely time-consuming, for ResNet-20, ResNet-50 and EfficientNet-B0-Lite only the convolutional layers with the largest number of MAC operations were simulated and compared.

Table 2 summarizes the experimental results of the proposed method. The original accuracy and the accuracy with our method are shown in the second and third columns. According to these two columns, the accuracy degradation is relatively small, except for EfficientNet-B0-Lite with a drop of 4%. With a slight accuracy loss, a significant reduction in power consumption of up to 78.3% can be achieved, as shown in the sixth and ninth columns, demonstrating the effectiveness of the proposed method in enhancing the power efficiency of digital accelerators for neural networks. This is especially useful for edge devices where power consumption is a major issue. When executing the neural networks on Standard HW, the total power including dynamic and leakage power was reduced by up to 60.2% (sixth column). On Optimized HW, the power saving was even greater, with a power reduction of up to 78.3% (ninth column). The relatively smaller power reduction on Standard HW was caused by the leakage power consumption of the MAC units that were not gated even when they were not used. In all the cases, the power consumption of EfficientNet-B0-Lite was not reduced significantly.
This was due to their depth-wise 2D convolutions, which had a very low utilization rate of the systolic array and thus a high execution time, so that the dynamic power consumption was much lower than the leakage power. To reduce the maximum delay of a MAC unit, which was 180 ps after synthesis, we only selected a subset of weight values and activations that lead to small delays of the MAC operations. The numbers of selected weight values and selected activation values are shown in the tenth column (Wei.) and the eleventh column (Act.). According to the tenth column, the number of selected weight values is reduced significantly, e.g., from 255 to 35 in LeNet-5 and ResNet-20. On the contrary, most activation values still remain to maintain a good inference accuracy. The delay reduction due to the weight and activation selection is shown in the twelfth column (Max Delay Red.). In identifying the delay reduction, our search granularity was 10 ps. This can be lowered if necessary, but at the expense of more runtime. To reduce power consumption further, the supply voltage is lowered by the ratio shown in the thirteenth column (Voltage Scaling Factor). For example, for LeNet-5-CIFAR-10, the supply voltage was reduced from 0.8 V to 0.71 V while still maintaining the original clock frequency. The relation between supply voltage scaling and circuit delay was evaluated according to the simulation results in (Nvidia, 2015). The last two columns show the percentage of power reduction contributed by voltage scaling. For Standard HW (column V\_SHW) and Optimized HW (column V\_OHW), voltage scaling can reduce power consumption by up to 13.2% and 11.5%, respectively.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & & & \multicolumn{6}{c}{Total Power Consumption [mW]} & & & & & & \\ \cline{4-9} & \multicolumn{2}{c}{Accuracy} & \multicolumn{3}{c}{Standard HW} & \multicolumn{3}{c}{Optimized HW} & \multicolumn{2}{c}{\#Selected} & Max Delay & \multicolumn{3}{c}{Voltage Scaling} \\ Network-Dataset & Orig. & Prop. & Orig. & Prop. & Red. & Orig. & Prop. & Red. & Wei. & Act. & Red. & Factor & V\_SHW & V\_OHW \\ \hline LeNet-5-CIFAR-10 & 80.6\% & 78.5\% & 375.5 & 149.6 & 60.2\% & 360.7 & -- & 78.3\% & 35 & 210 & 40 ps & 0.71/0.8 & 10.1\% & 5.4\% \\ ResNet-20-CIFAR-10 & 91.9\% & 89.6\% & 718.9 & 361.0 & 49.8\% & 663.9 & 288.3 & 56.6\% & 35 & 210 & 40 ps & 0.71/0.8 & 13.2\% & 11.5\% \\ ResNet-50-CIFAR-100 & 79.9\% & 78.5\% & 708.7 & 293.8 & 58.5\% & 701.8 & 157.1 & 77.6\% & 41 & 223 & 30 ps & 0.73/0.8 & 8.1\% & 4.2\% \\ EfficientNet-B0-Lite-ImageNet & 73.8\% & 69.7\% & 21.2 & 19.3 & 9.0\% & 2.4 & 1.9 & 20.8\% & 50 & 236 & 20 ps & 0.75/0.8 & 6.4\% & 6.5\% \\ \hline \hline \end{tabular} \end{table} Table 2. Experimental results of the proposed method.

Figure 7. Comparison with conventional pruning, evaluated on Optimized HW.

To demonstrate the advantage of the proposed method over conventional pruning, we show the comparison of the power consumption and the inference accuracy of conventional pruning and the proposed method in Figure 7. According to this comparison, the proposed method can significantly reduce the power consumption of a pruned neural network further with only a slight accuracy loss. The proposed method achieves better power savings when the dynamic power consumption dominates the overall power consumption, e.g., the first three comparisons in Figure 7, because it focuses on reducing the signal switching activities in the circuits by selecting weights.
To demonstrate the tradeoff between the number of selected weight values and the inference accuracy, we used different thresholds to select weight values according to their power consumption and evaluated the accuracy by restricting the neural networks to these weight values. Figure 8 illustrates the results. As expected, a lower power threshold leads to a lower inference accuracy. However, there is still a good potential for power reduction before significant accuracy degradation appears. For example, for ResNet-50-CIFAR-100 the power threshold can be lowered down to 800 \(\mu\)W, which corresponds to 55 weight values, leading to total power savings of 19.6% with only a negligible accuracy loss. Note that the drop at 850 \(\mu\)W for ResNet-50-CIFAR-100 is due to the stochastic nature of the training process. For EfficientNet-B0-Lite the power reduction is relatively small due to the large contribution of leakage power.

Figure 9 shows the tradeoff between accuracy and the number of activation values. The results are obtained by restricting neural networks to different numbers of activation values based on a weight selection threshold of 800 \(\mu\)W. The different numbers of activation values reflect different maximum delays of the MAC unit. In this figure, the leftmost point corresponds to the full activation space with 256 activation values. As the number of activation values and thus the maximum delay decreases, the inference accuracy is first well-maintained and then drops. Before the turning point, there is optimization potential that we took advantage of to enhance computational performance or reduce power consumption by voltage scaling.

## 5. Conclusion

In this paper, we have proposed PowerPruning, a novel method to reduce power consumption in digital neural network accelerators by selecting weights that lead to less power consumption in MAC operations. The timing characteristics of the selected weights together with activation transitions are also evaluated. We then selected weights and activations that lead to small delays, so that either the clock frequency of the MAC units can be increased or voltage scaling can be applied to reduce power consumption further. Together with retraining, the proposed method can reduce the power consumption of DNNs on hardware by up to 78.3% with only a slight accuracy loss. The proposed method does not modify MAC units and can be combined seamlessly with existing hardware architectures for power-efficient neural network acceleration.
2310.01515
Tensor Ring Optimized Quantum-Enhanced Tensor Neural Networks
Quantum machine learning researchers often rely on incorporating Tensor Networks (TN) into Deep Neural Networks (DNN) and variational optimization. However, the standard optimization techniques used for training the contracted trainable weights of each model layer suffer from the correlations and entanglement structure between the model parameters on classical implementations. To address this issue, a multi-layer design of a Tensor Ring optimized variational Quantum learning classifier (Quan-TR) comprising cascading entangling gates replacing the fully connected (dense) layers of a TN is proposed, and it is referred to as Tensor Ring optimized Quantum-enhanced tensor neural Networks (TR-QNet). TR-QNet parameters are optimized through the stochastic gradient descent algorithm on qubit measurements. The proposed TR-QNet is assessed on three distinct datasets, namely Iris, MNIST, and CIFAR-10, to demonstrate the enhanced precision achieved for binary classification. On quantum simulations, the proposed TR-QNet achieves promising accuracy of $94.5\%$, $86.16\%$, and $83.54\%$ on the Iris, MNIST, and CIFAR-10 datasets, respectively. Benchmark studies have been conducted on state-of-the-art quantum and classical implementations of TN models to show the efficacy of the proposed TR-QNet. Moreover, the scalability of TR-QNet highlights its potential for exhibiting in deep learning applications on a large scale. The PyTorch implementation of TR-QNet is available on Github:https://github.com/konar1987/TR-QNet/
Debanjan Konar, Dheeraj Peddireddy, Vaneet Aggarwal, Bijaya K. Panigrahi
2023-10-02T18:07:10Z
http://arxiv.org/abs/2310.01515v1
# Tensor Ring Optimized Quantum-Enhanced Tensor Neural Networks

###### Abstract

Quantum machine learning researchers often rely on incorporating Tensor Networks (TN) into Deep Neural Networks (DNN) and variational optimization. However, the standard optimization techniques used for training the contracted trainable weights of each model layer suffer from the correlations and entanglement structure between the model parameters on classical implementations. To address this issue, a multi-layer design of a Tensor Ring optimized variational Quantum learning classifier (Quan-TR) comprising cascading entangling gates replacing the fully connected (dense) layers of a TN is proposed, and it is referred to as Tensor Ring optimized Quantum-enhanced tensor neural Networks (TR-QNet). TR-QNet parameters are optimized through the stochastic gradient descent algorithm on qubit measurements. The proposed TR-QNet is assessed on three distinct datasets, namely Iris, MNIST, and CIFAR-10, to demonstrate the enhanced precision achieved for binary classification. On quantum simulations, the proposed TR-QNet achieves promising accuracy of \(94.5\%\), \(86.16\%\), and \(83.54\%\) on the Iris, MNIST, and CIFAR-10 datasets, respectively. Benchmark studies have been conducted on state-of-the-art quantum and classical implementations of TN models to show the efficacy of the proposed TR-QNet. Moreover, the scalability of TR-QNet highlights its potential for exhibiting in deep learning applications on a large scale. The PyTorch implementation of TR-QNet is available on Github: [https://github.com/konar1987/TR-QNet/](https://github.com/konar1987/TR-QNet/).

Quantum Computing, Tensor Networks, IBM quantum computer, qubit

## 1 Introduction

Deep learning is a very effective and extensively used machine learning method, which has shown great performance in various tasks, including recognition, classification, regression, and clustering [1, 2, 3, 4]. Recent years have witnessed the surge of quantum machine learning [5], a new computational paradigm that blends quantum computing and machine learning. It employs quantum parallelism and non-classical connections, such as quantum entanglement, to possibly speed up or revolutionize existing classical algorithms [6]. Importantly, the convergence of these disciplines can result in synergistic improvements and new views on a wide range of difficult challenges [7]. Concurrently, combining physics principles and classical machine learning approaches has shown significant promise in tackling quantum computing issues [8]. Researchers demonstrated that the trainable weights of neural networks have a strong correlation with many-body wave functions [9, 10]. Furthermore, ideas for identifying phase transitions in quantum many-body systems using fully connected artificial neural networks (ANNs) and convolutional neural networks (CNNs) have been examined, with encouraging results [11, 12, 13]. Deep Neural Networks (DNN) have extremely high spatial and temporal complexity levels owing to densely stacked layers containing large-scale matrix multiplications. Hence, DNNs often need several days of training while requiring a considerable amount of memory for inference. Furthermore, substantial weight redundancy in DNNs has been demonstrated [14], indicating the possibility of condensing DNNs while preserving performance. As a result, a variety of compression approaches, including pruning [15], quantization [16], and low-rank decomposition [17], have been devised.
Applying TNs to DNNs to generate TNNs is one of them, since TNNs have outstanding potential to approximate the original weights with fewer parameters [18], particularly involving the reconstruction of convolutional and fully connected layers using a range of TD formats [19]. However, the scalability of DNN is hindered when a substantial number of neurons are taken into account, thereby restricting the feasible number of layers. This is primarily due to the time-consuming training process and the need for a lot of memory to store the large weight matrices. The accuracy and effectiveness of the DNN model will suffer with an increase in the number of hidden layers if the parameters for such large weight matrices are not optimized. Therefore, decreasing the number of model parameters is imperative to maintain accuracy. Nevertheless, the present hardware used to train neural networks significantly restricts their scale and usefulness. These concerns have gained significance as physical limitations increasingly impede further performance enhancements in deep classical neural networks. In contemporary times, a correlation has been established between tensor networks (TN) and neural networks, whereby the former serves as an effective ansatz for representing quantum many-body wave functions [20; 21]. As a result, it is possible to substitute tensor networks (TN) for these weights and rely on variational optimization techniques to train them [22]. A plethora of efficient TN-based algorithms for classification [23], anomaly detection [24; 25], segmentation [26], and clustering [27] have been proposed in recent times. In addition to their capacity for effective expression, TN offers streamlined methodologies for compressing data through tensor factorization techniques [28; 29]. For instance, it is possible to significantly reduce the number of parameters in neural network models by retaining only the most significant degrees of freedom and discarding those that exhibit lower correlations. Tensor Neural Networks (TNN) [26] and Variational Tensor Deep Neural Networks [19; 30] are instances of neural networks that rely on tensor network structures to replace the weight tensors of the hidden layers. This is achieved by applying Singular Value Decomposition (SVD) methods. Recent research studies have validated that, despite having a limited parameter space, TNN exhibits superior performance and accuracy compared to conventional ANNs [28; 31]. Low-rank tensor approximation of deep neural networks has been extensively studied in the literature for effective model reduction, low generalization error, and high prediction speed [32]. Recently, Quantum Neural Networks (QNN) have emerged as a potential contender to circumvent these problems and to facilitate the training of DNNs [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]. Quantum states are mathematical entities of quantum systems compatible with higher-order tensors [45]. TNNs may thus be utilized as simulators in traditional computers to emulate genuine quantum circuits [46; 47]. Some particular TNNs can be realized on compact, near-term quantum devices using quantum computing's ultrahigh parallelism [48]. Rather than the more general TN-based quantum circuit modeling paradigm, quantum circuit simulation on TNNs focuses on the functions of TNs as bridges between traditional ANNs and QNNs.

### Motivation

Most contemporary TNN advancements involve tensorization solely at the level of hidden layers pertaining to trainable weights [49; 19; 30; 50; 51].
Training a model typically involves optimizing each layer's contracted trainable weights using established optimization techniques like gradient descent [26; 52]. The outcome of this is an adaptable architecture for TNN that can be effectively trained for a substantial quantity of neurons and layers. The variational algorithm employs a method of local gradient descent, incorporating tensor gradients. This motivates us to propose hybrid TNN models incorporating both tensor and quantum layers. The training algorithm used in our study offers valuable insights into the entanglement structure of the trainable weights of the fully connected layers of a TNN. Moreover, it helps to clarify the expressive power of a quantum neural state.

### Primary Contributions and Novelty

Considering the neural network's entanglement structure, a novel multi-layer design of a Tensor Ring optimized variational Quantum learning classifier (Quan-TR) with cascading entanglement gates is introduced in the proposed hybrid quantum-enhanced TNN model (TR-QNet). Furthermore, our TR-QNet model's accuracy and efficiency are evaluated on numerical data and image classification on various datasets. The present study exhibits a tripartite novelty, which can be summarized as follows:

1. Our study presents for the first time a novel quantum-enhanced hybrid tensor neural network (TR-QNet) comprising classical tensor layers followed by quantum layers for data and image classification. The proposed TR-QNet incorporates the novel multi-layer design of Quan-TR, replacing the fully connected softmax layers in TNN, distinguishing it from the state-of-the-art TN models [19; 30; 50; 51].
2. In addition, the quantum layers (Quan-TR) of the proposed TR-QNet model incorporate a cascading of quantum entangling gates, leading to the elimination of local minima. This is demonstrated by the convergence of the training loss of the proposed TR-QNet model.
3. Compared with the classical TN model, the binary-class classification accuracy of TR-QNet is improved by \(10.53\%\), \(7.28\%\) and \(12\%\) on the Iris [53], MNIST [54] and CIFAR-10 [55] datasets, respectively.

This approach presents a distinctive and innovative effort towards expediting advancements in resolving computer vision issues through deep quantum learning. The subsequent sections of this manuscript are organized in the following manner. Section 2 explains the proposed Quantum-Enhanced Tensor Neural Network architecture, which includes an overview of classical Tensor Neural Networks (TNN) and the Tensor Ring optimized variational Quantum circuit (Quan-TR). Section 3 contains the datasets, experimental settings, and experimental results. Section 4 elucidates the efficacy of the TR-QNet model and underscores its constraints. Finally, the concluding remarks and future research directions are discussed in Section 5. The _Appendix_ section provides the convergence analysis of the proposed TR-QNet.

## 2 Quantum-Enhanced Tensor Neural Network Architecture

The TR-QNet model is a novel proposed framework that combines classical TN and quantum layers (Quan-TR) with tensor ring parameterized inputs and cascading of entangling gates. Recently, the authors of [56] also proposed a similar type of Tensor Ring parameterized Variational Quantum Circuit (TR-VQC). However, the TR-VQC suffers from directly reduced input features due to the few available qubits and the limited entanglement between the parameters, resulting in barren plateaus [57].
In contrast, our hybrid TR-QNet model exhibits a relationship between tensor neural networks and variational Quantum learning classifiers optimized by a tensor ring structure, as shown in Figure 1, enabling the full input features to be fed through TNN layers with minimal loss of information. The TR-QNet model architecture incorporates a tensor neural network (TNN) with multiple hidden layers. It introduces a multi-layer tensor-ring optimized variational Quantum learning classifier with cascading entangling gates to address quantum entanglement among model parameters efficiently. This approach replaces the conventional softmax layer typically employed at the end of TN models. A classical pooling layer is incorporated when integrating the TNN model and Quan-TR of the proposed TR-QNet architecture to match the output dimension of the TNN to the input dimension of Quan-TR. The VQC-based training algorithm resembling DMRG [58] enables straightforward access to the entanglement spectrum of the Matrix Product Operators' (MPOs') [28] trainable weights, thereby facilitating a lucid comprehension of the correlations within the parameters of our TR-QNet model. One can evaluate the MPO's entanglement structure and capacity as a quantum neural state through standard quantum information measures.

### Tensor Neural Networks

A Tensor Neural Network (TNN) is obtained after the tensorization of an ANN, enabling it to align with the MPO weights' size and dimensions [20, 21]. The hidden layers of the proposed TR-QNet model can be reshaped into a rank-\(d_{T}\) tensor, possessing a dimension size of \(\mathcal{N}_{T}\), which can subsequently be contracted to form a TN layer. This TN layer comprises six Matrix Product Operator (MPO) [28] weights, each having an input size of \(m\). Features that cannot be factorized to align with the MPOs in the TN layers are transformed during the preprocessing stage of training TR-QNet to conform to the input size of the TN layer. A dense layer of size \(\mathcal{N}_{s}\times\mathcal{N}_{q}\) is added as a connecting layer preceding the Tensor Ring optimized variational Quantum learning classifier (Quan-TR) layer to address the issue of the reduction in the size of the input data in the classical TNN model. The length of the input feature vector of Quan-TR is denoted by \(\mathcal{N}_{s}\), while the output size of the contracted TNN layer is represented by \(\mathcal{N}_{q}\). The contraction of two rank-2 tensors, \(\mathcal{S}_{xy}\) and \(\mathcal{V}_{yz}\), can be represented diagrammatically by connecting the two tensors along their shared index \(y\). Mathematically, the contraction operation is described as follows: \[\mathcal{T}_{xz}=Tr(\mathcal{S}_{xy}\mathcal{V}_{yz})=\sum_{y}\mathcal{S}_{xy}\mathcal{V}_{yz} \tag{1}\] Here, \(Tr\) designates the trace over shared indices \(y\). A viable approach to intelligent data compression techniques that rely on TN and MPO decomposition to enhance the representation of weight matrices involves substituting weights with MPOs. The MPO form of the weight matrix of a hidden layer can be derived from the \(\mathcal{W}\) matrix by consecutively applying SVD, as demonstrated in Figure 1. The TN layers comprise a set of trainable weights denoted as \(\omega_{i}\) and represented by MPOs. A bond tensor \(B_{j,j+1}\) is obtained by contracting a pair of neighboring MPO tensors, \(\omega_{j}\) and \(\omega_{j+1}\), along their shared virtual dimension. Adjusting the input feature vector \(\alpha\) to align with the MPO dimensions allows the network's output \(\mathcal{O}_{TN}\) to be derived through the contraction of the resulting tensor network. The activation function \(\sigma_{TN}\) is applied to the result of a tensor contraction operation [51] as follows. \[\mathcal{O}_{TN}=\sigma_{TN}(Tr(\alpha_{i},\alpha_{j},\alpha_{k},\cdots\omega_{i},\omega_{j},\omega_{k},\cdots)+\omega_{0}) \tag{2}\] Here, the tensor contraction operation is between the input tensor \(\alpha\) and the weight tensor \(\omega\), and \(i,j,k\), etc. represent the tensor indices. It may be noted that the activation function \(\sigma_{TN}\) is applied element-wise to the matrix obtained from the tensor contraction, and it cannot be directly applied to individual MPO tensors separately due to the non-linearity introduced by the activation function. In the proposed TR-QNet, one approach involves contracting the features and tensor network layers (MPOs) before applying the activation function and reshaping the resulting tensor to match the inputs of the next layer. This process is repeated until the entire TNN network is contracted.
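For illustration, the contraction in Equation 2 for a 3-site MPO can be written with a single einsum. The sketch below is ours; the core shapes, function name, and the use of PyTorch are assumptions for demonstration rather than the released implementation:

```python
import torch

def mpo_layer(x, w1, w2, w3, bias):
    """One TN layer as in Eq. (2): contract a rank-3 input with a 3-site MPO.

    x:    input reshaped to (m, m, m), one index per MPO site.
    w1:   (m, n, r)     first MPO core  (input, output, right bond)
    w2:   (r, m, n, r)  middle core     (left bond, input, output, right bond)
    w3:   (r, m, n)     last MPO core   (left bond, input, output)
    bias: (n, n, n)     the bias tensor omega_0.
    Returns the activated output of shape (n, n, n).
    """
    out = torch.einsum('abc,axi,ibyj,jcz->xyz', x, w1, w2, w3)
    return torch.relu(out + bias)          # element-wise non-linearity sigma_TN

# Example with m = 4 input dims, n = 4 output dims, bond dimension r = 3:
m, n, r = 4, 4, 3
y = mpo_layer(torch.randn(m, m, m), torch.randn(m, n, r),
              torch.randn(r, m, n, r), torch.randn(r, m, n),
              torch.zeros(n, n, n))
```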
### Tensor Ring Optimized Variational Quantum Learning Classifier

The proposed Tensor Ring optimized variational Quantum learning classifier (Quan-TR) with Tensor Neural Networks is a hybrid classical-quantum algorithm combining tensor network elements and variational quantum circuits for data and image classification. The proposed Quan-TR introduces a multi-layer Tensor Ring optimized variational Quantum learning classifier with cascading entangling gates to address quantum entanglement among model parameters efficiently, which is the major distinction from our Tensor Ring parameterized Variational Quantum Circuit (TR-VQC) [56]. The proposed Quan-TR framework consists of three main components: tensor ring encoding, variational learning parameters, and measurement. Tensor ring encoding represents the quantum states in a compressed format. It leverages the tensor ring structure, a tensor network with a specific hierarchical ring-like connectivity pattern. The tensor ring approximation uses SVD to compress the quantum states while preserving important features.

Figure 1: A Tensor Ring optimized Quantum-enhanced tensor neural Network (TR-QNet) architecture with a 4-qubit Quan-TR, tensorizing (a) an ANN with 2 hidden layers and 2 fully connected dense layers, where the network prediction \(y_{f}\) is derived by feeding the model the input feature vector \(\alpha\) as \(y_{f}=\sigma(\mathcal{W}\alpha+w_{0})\), \(w_{0}\) is the bias vector, and \(\sigma\) is the ReLU activation function; (b) the ANN's TN representation using Matrix Product States (MPS) and Matrix Product Operators (MPO); (c) MPO decomposition of the weight matrix \(\mathcal{W}\), performing singular value decomposition (SVD) and truncating the inconsequential singular values, including MPO factorization of a matrix \(\mathcal{W}_{m^{3}\times m^{3}}\) followed by reshaping \(\mathcal{W}\) into a rank-6 tensor; using a suitable SVD, the matrix \(\mathcal{W}\) may be represented as a 3-site MPO; (d) the resulting TNN with MPO trainable weights; (e) 2-layer Tensor Neural Networks (TNN) tensorizing the ANN using parts (b), (c) and (d); and (f) the low-rank Quan-TR component employed in the proposed TR-QNet, which has three parts: tensor ring encoding (\(\tau\)), variational learning parameters, and quantum measurement. The cascading CNOT gates are preserved through the tensor ring approximation relying on SVD. \(\mathcal{R}_{y}(\theta)\) and \(\mathcal{R}_{z}(\theta)\) are used for data encoding and measurements.
This approximation allows for efficient representation and manipulation of quantum states within Quan-TR. In the proposed Quan-TR, single-qubit rotation gates \(\mathcal{R}_{y}(\theta)\) and \(\mathcal{R}_{z}(\theta)\) are used to represent rotations along the \(Y\)- and \(Z\)-axes, respectively. These rotation angles (\(\theta\)) are learned during training to find the optimal values that minimize the objective function. By combining tensor ring encoding with variational learning parameters and measurement, the proposed TR-QNet architecture enables the training of quantum circuits for data and image classification. In our Quan-TR framework, the tensor ring parametrization represents a quantum state \(|\psi\rangle\) using \(V\) tensors, each with bond dimension \(\mathcal{B}\), denoted by \(\tau(\upsilon)\) as follows [52]: \[|\psi\rangle=\sum_{y_{1},\ldots,y_{V}}\sum_{x_{1},\ldots,x_{V}}\tau(1)^{y_{1}}_{x_{V}x_{1}}\tau(2)^{y_{2}}_{x_{1}x_{2}}\cdots\tau(V)^{y_{V}}_{x_{V-1}x_{V}}|y_{1},y_{2},\cdots,y_{V}\rangle \tag{3}\] The physical indices \(y_{v}\in\{0,1\}\) span the \(2^{V}\)-dimensional Hilbert space, while the bond indices \(x_{v}\in\{1,\cdots,\mathcal{B}\}\) control the maximum amount of entanglement captured by the tensor ring, also known as the tensor rank. A tensor ring parametrization of a 4-qubit state is illustrated in Figure 1. In Quan-TR, each \(\tau(\upsilon)\) in the ring represents a tensor of dimension \(\mathcal{B}\times\mathcal{B}\times\mathcal{X}\), signifying the connections between the tensors in the tensor ring. The tensor \(\tau(\upsilon)\) has three indices, two of which have the bond dimension \(\mathcal{B}\), while the third index has dimension \(\mathcal{X}\). Subsequently, the input characteristics are encoded through single-qubit rotation gates (\(\mathcal{R}_{y}(\theta)\)), which preserve the tensor ring configuration. The fundamental element of the parametrized circuit in every layer of the proposed Quan-TR model is the cascading entanglement of qubits, which is subsequently followed by parametrized single-qubit rotations. Two-qubit gates, such as the CNOT gate, do not preserve the tensor ring representation. An approximation technique based on singular value thresholding is employed for this gate to address this issue. The tensor ring structure facilitates the computation of two-qubit gates for adjacent qubits. By employing the cascading configuration of the tensor ring, executing a CNOT operation from the last qubit to the first qubit becomes feasible. It is worth noting that using the tensor ring format allows for the utilization of the same rank \(d_{q}\) in each decomposition, which may not be feasible with the conventional Matrix Product State (MPS) format. By employing this approximation, all calculations for the forward pass are linear in the number of gates. We develop a universal TR-QNet model that uses the intrinsic probabilistic behavior of qubit measurements to classify images using a hybrid classical-quantum framework. The encoding, variational, and measurement aspects of the variational quantum learning classifier are all accomplished within the implementation of Quan-TR. The single-qubit rotation gate \(\mathcal{R}_{y}(\theta)\) is employed to encode rotations along the \(Y\)-axis in the encoding section.
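For illustration, the tensor ring initialization (each core a \(\mathcal{B}\times\mathcal{B}\times 2\) tensor with a single nonzero entry, as used later in the experimental settings) and a single-qubit rotation contracted into one core can be sketched in NumPy. This is a sketch under our own naming conventions; the rotation matrix matches the \(\mathcal{R}_{y}(\theta)\) definition given below in Equation 5, and the contraction mirrors Equation 7:

```python
import numpy as np

def tr_zero_state(num_qubits, bond_dim):
    """Tensor-ring form of |00...0>: each core is (B, B, 2) with a single
    nonzero entry at (0, 0, 0), as used to initialize Quan-TR."""
    cores = []
    for _ in range(num_qubits):
        t = np.zeros((bond_dim, bond_dim, 2))
        t[0, 0, 0] = 1.0
        cores.append(t)
    return cores

def apply_ry(cores, site, theta):
    """Single-qubit rotation on one ring site (cf. Eq. (7)): contract the
    2x2 gate with the physical index; the bond indices are untouched."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    ry = np.array([[c, -s], [s, c]])                      # R_y(theta), Eq. (5)
    cores[site] = np.einsum('pq,abq->abp', ry, cores[site])
    return cores

cores = apply_ry(tr_zero_state(num_qubits=4, bond_dim=4), site=0, theta=0.3)
```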
Quantum bits (qubits) represent the input state of the VQC in the proposed TR-QNet as \[|\psi(\theta)\rangle=(\cos\theta|0\rangle+\sin\theta|1\rangle)|\mathcal{O}_{TN}\rangle. \tag{4}\] In the VQC of the TR-QNet model, the quantum states \(|\psi(\theta)\rangle\) correspond to the quantum encoding of the classical inputs \(\mathcal{O}_{TN}\) from the classical layer of the TNN. The Tensor Ring parameterized quantum circuit (Quan-TR) is dense and consists of parametrized single-qubit gates together with CNOT gates to entangle the quantum states of the qubits. To encode phase information, the dressed quantum layer of TR-QNet uses the rotation gates \(\mathcal{R}_{y}\) and \(\mathcal{R}_{z}\). Complementary quantum states are created with the help of the CNOT gate. In the Bloch sphere projection, the \(\mathcal{R}_{y}(\theta)\) and \(\mathcal{R}_{z}(\theta)\) gates represent single-qubit rotations about the \(Y\)- and \(Z\)-axes, respectively, as follows: \[\mathcal{R}_{y}(\theta)=\exp{(-jY\theta/2)}=\left[\begin{array}{cc}\cos\theta/2&-\sin\theta/2\\ \sin\theta/2&\cos\theta/2\end{array}\right] \tag{5}\] and \[\mathcal{R}_{z}(\theta)=\exp{(-jZ\theta/2)}=\left[\begin{array}{cc}\exp{(-j\theta/2)}&0\\ 0&\exp{(j\theta/2)}\end{array}\right]. \tag{6}\] To perform the one-qubit rotation, we contract the \(2\times 2\) unitary rotation matrix \(\mathcal{R}\) with the original tensor \(\tau(\upsilon)\), and the resulting tensor \(\tau^{\prime}(\upsilon)\) represents the rotated state of the \(\upsilon^{th}\) qubit as follows: \[\tau^{\prime}(\upsilon)^{y^{\prime}_{\upsilon}}_{x_{\upsilon-1}x_{\upsilon}}=\sum_{y_{\upsilon}}\mathcal{R}_{y^{\prime}_{\upsilon}y_{\upsilon}}\tau(\upsilon)^{y_{\upsilon}}_{x_{\upsilon-1}x_{\upsilon}} \tag{7}\] To perform a two-qubit gate transformation on qubits \(\upsilon\) and \((\upsilon+1)\) in the proposed Quan-TR, the tensor network needs to be transformed into an orthogonal form centred around the qubits of interest \(\upsilon\) and \((\upsilon+1)\) before applying the gate operation. The shared bond index is contracted between the tensors \(\tau(\upsilon)\) and \(\tau(\upsilon+1)\) to create a new tensor as follows: \[\mathcal{M}^{y_{\upsilon}y_{\upsilon+1}}_{x_{\upsilon-1}x_{\upsilon+1}}=\sum_{x_{\upsilon}}\tau(\upsilon)^{y_{\upsilon}}_{x_{\upsilon-1}x_{\upsilon}}\tau(\upsilon+1)^{y_{\upsilon+1}}_{x_{\upsilon}x_{\upsilon+1}} \tag{8}\] To apply the two-qubit gate \(\mathcal{U}\) on the two-qubit tensor computed from Equation 8, we reshape the gate \(\mathcal{U}\) into an operator acting on the joint state of qubits \(\upsilon\) and \((\upsilon+1)\). \[(\mathcal{M}^{\prime})^{y^{\prime}_{v}y^{\prime}_{v+1}}_{x_{v-1}x_{v+1}}=\sum_{y_{v}y_{v+1}}\mathcal{U}_{y^{\prime}_{v}y^{\prime}_{v+1}y_{v}y_{v+1}}\mathcal{M}^{y_{v}y_{v+1}}_{x_{v-1}x_{v+1}} \tag{9}\] We perform SVD on the resultant tensor \(\mathcal{M}^{\prime}\) after reshaping it as a \((y^{\prime}_{v}x_{v-1})\times(y^{\prime}_{v+1}x_{v+1})\) matrix as follows: \[(\mathcal{M}^{\prime})^{y^{\prime}_{v}y^{\prime}_{v+1}}_{x_{v-1}x_{v+1}}=\sum_{x_{v}}\mathcal{P}^{y^{\prime}_{v}}_{x_{v-1}x_{v}}\mathcal{S}_{x_{v}}\mathcal{Q}^{y^{\prime}_{v+1}}_{x_{v}x_{v+1}} \tag{10}\] Here, \(\mathcal{P}\) and \(\mathcal{Q}\) comprise orthogonal vectors, and \(\mathcal{S}_{x_{v}}\) is composed of the singular values of the matrix \(\mathcal{M}^{\prime}\). The matrix has \(2N\) singular values irrespective of the two-qubit gate structure, where \(N\) denotes the bond dimension of the tensor ring. We then truncate the \(\mathcal{S}_{x_{v}}\) matrix to keep only the \(N\) largest singular values, and the resulting matrix is denoted by \(\mathcal{S}^{\prime}_{x_{v}}\). \(\mathcal{P}\) and \(\mathcal{Q}\) are truncated to keep only the orthogonal vectors corresponding to the \(N\) largest singular values. \[\tau^{\prime}(\upsilon)^{y^{\prime}_{v}}_{x_{v-1}x_{v}}=\mathcal{P}^{y^{\prime}_{v}}_{x_{v-1}x_{v}}\mathcal{S}^{\prime}_{x_{v}} \tag{11}\] and \[\tau^{\prime}(\upsilon+1)^{y^{\prime}_{v+1}}_{x_{v}x_{v+1}}=\mathcal{Q}^{y^{\prime}_{v+1}}_{x_{v}x_{v+1}} \tag{12}\]
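The merge, gate application, SVD, and truncation of Equations 8-12 can be rendered as follows. The shapes follow the (left bond, right bond, physical) convention of the earlier sketch, and the code is an illustrative NumPy rendering rather than the authors' implementation:

```python
import numpy as np

def apply_two_qubit_gate(t1, t2, gate, bond_dim):
    """Apply a two-qubit gate to adjacent ring sites (cf. Eqs. (8)-(12)).

    t1, t2: neighbouring cores of shape (B, B, 2), ordered (left, right, physical).
    gate:   4x4 unitary acting on the joint physical space.
    """
    # Eq. (8): contract the shared bond into a joint two-site tensor.
    m = np.einsum('aip,ibq->abpq', t1, t2)              # (B, B, 2, 2)
    # Eq. (9): apply the gate on the joint physical indices.
    g = gate.reshape(2, 2, 2, 2)                        # indices (p', q', p, q)
    m = np.einsum('xypq,abpq->abxy', g, m)
    # Eq. (10): group (p', left) x (q', right) into a matrix and take the SVD.
    B = t1.shape[0]
    mat = m.transpose(2, 0, 3, 1).reshape(2 * B, 2 * B)
    u, s, vh = np.linalg.svd(mat, full_matrices=False)
    # Eqs. (11)-(12): keep only the largest `bond_dim` singular values.
    k = min(bond_dim, s.size)
    u, s, vh = u[:, :k], s[:k], vh[:k, :]
    t1_new = (u * s).reshape(2, B, k).transpose(1, 2, 0)   # (B, k, 2)
    t2_new = vh.reshape(k, 2, B).transpose(0, 2, 1)        # (k, B, 2)
    return t1_new, t2_new
```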
The preprocessed data from the TNN layer, denoted as \(\mathcal{O}^{i}_{TN}\), is transformed into a quantum state represented by \(|\psi(\mathcal{O}^{i}_{TN})\rangle\). Subsequently, the quantum state undergoes processing through Quan-TR with parameters \(\mathcal{U}(\theta_{1},\theta_{2},\cdots,\theta_{n})\). Finally, by performing measurements on particular qubits in the Pauli-Z basis, we obtain a collection of outputs denoted as \(\lambda_{j}\) along with their corresponding probabilities as follows: \[\mathcal{Z}_{ij}=\langle\psi(\mathcal{O}^{i}_{TN})|\mathcal{U}^{\dagger}(\theta)|\lambda_{j}\rangle\langle\lambda_{j}|\mathcal{U}(\theta)|\psi(\mathcal{O}^{i}_{TN})\rangle \tag{13}\] where the complete operation \(\mathcal{U}(\theta)\) is defined as \[\mathcal{U}(\theta)=\mathcal{U}_{n}(\theta_{n})\mathcal{U}_{n-1}(\theta_{n-1})\cdots\mathcal{U}_{1}(\theta_{1}). \tag{14}\] The loss function, \(\mathcal{L}(\theta)\), can be defined as follows, considering the input quantum state as \(|0\rangle^{\otimes\mathcal{N}_{q}}\). \[\mathcal{L}(\theta)=f(y_{j}(\theta),t_{j})=\mathcal{Z}(y_{j}(\theta)\neq t_{j})=\sum_{j}^{\mathcal{N}_{q}}f((\langle 0|\psi^{\dagger}(\mathcal{O}^{j}_{TN})\mathcal{U}^{\dagger}(\theta)y_{j}\mathcal{U}(\theta)\psi(\mathcal{O}^{j}_{TN})|0\rangle),t_{j}) \tag{15}\] where \(y_{j}(\theta)\in\{\overline{\lambda_{j}}\}\) and \(t_{j}\) corresponds to a target output. In order to train the proposed Quan-TR model, the gradient of the loss function is evaluated as follows: \[\frac{\delta\mathcal{L}(\theta)}{\delta\theta_{j}}=\langle 0|\psi^{\dagger}(\mathcal{O}^{j}_{TN})\frac{\delta\mathcal{U}^{\dagger}(\theta)}{\delta\theta_{j}}y_{j}\mathcal{U}(\theta)\psi(\mathcal{O}^{j}_{TN})|0\rangle+\langle 0|\psi^{\dagger}(\mathcal{O}^{j}_{TN})\mathcal{U}^{\dagger}(\theta)y_{j}\frac{\delta\mathcal{U}(\theta)}{\delta\theta_{j}}\psi(\mathcal{O}^{j}_{TN})|0\rangle= \tag{16}\] \[\langle 0|\psi^{\dagger}(\mathcal{O}^{j}_{TN})\mathcal{U}^{\dagger}_{1}(\theta_{1})\cdots\frac{\delta\mathcal{U}^{\dagger}_{j}(\theta_{j})}{\delta\theta_{j}}\cdots\mathcal{U}^{\dagger}_{n}(\theta_{n})y_{j}\mathcal{U}(\theta)\psi(\mathcal{O}^{j}_{TN})|0\rangle+\] \[\langle 0|\psi^{\dagger}(\mathcal{O}^{j}_{TN})\mathcal{U}^{\dagger}(\theta)y_{j}\mathcal{U}_{n}(\theta_{n})\cdots\frac{\delta\mathcal{U}_{j}(\theta_{j})}{\delta\theta_{j}}\cdots\mathcal{U}_{1}(\theta_{1})\psi(\mathcal{O}^{j}_{TN})|0\rangle\] \[=\langle 0|\psi^{\dagger}(\mathcal{O}^{j}_{TN})\mathcal{U}^{\dagger}_{-}[i\psi_{j}]\mathcal{U}^{\dagger}_{+}y_{j}\mathcal{U}(\theta)\psi(\mathcal{O}^{j}_{TN})|0\rangle+\langle 0|\psi^{\dagger}(\mathcal{O}^{j}_{TN})\mathcal{U}^{\dagger}(\theta)y_{j}\mathcal{U}_{+}[-i\psi_{j}]\mathcal{U}_{-}\psi(\mathcal{O}^{j}_{TN})|0\rangle\] where \(\mathcal{U}_{j}(\theta_{j})=e^{-i\theta_{j}\psi(\theta^{j})}\).
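On a simulator, the outcome probabilities of Equation 13 reduce to squared amplitudes of the evolved statevector. A minimal sketch, assuming a dense statevector is available:

```python
import numpy as np

def measurement_probabilities(state):
    """Probabilities of the computational-basis outcomes lambda_j (cf. Eq. (13)):
    Z_j = |<lambda_j| U(theta) |psi>|^2 for a simulated statevector."""
    return np.abs(state) ** 2

# e.g., for the 2-qubit state (|00> + |11>)/sqrt(2):
probs = measurement_probabilities(np.array([1, 0, 0, 1]) / np.sqrt(2))
# probs -> [0.5, 0. , 0. , 0.5]
```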
However, due to NISQ's limitations, classical simulators are now being utilized to optimize and update parameters and feed them back to TNN and Quan-TR separately until convergence conditions are reached. Hence, we have used the cross-entropy loss to update the parameters. The loss function (\(\mathcal{L}(\theta)\)) is derived with the hyper-parameters \(\theta\) of the proposed TR-QNet model as \[\operatorname*{argmin}_{\theta}\mathcal{L}(\theta)=\sum_{j}^{\mathcal{N}_{q}}[t_{j}\log\overline{f}(\mathcal{O}^{j}_{TN})+(1-t_{j})\log\{1-\overline{f}(\mathcal{O}^{j}_{TN})\}]\;. \tag{17}\] where \(\overline{f}(\mathcal{O}^{j}_{TN},\theta)\) can be defined for the binary optimization problem as follows. \[\overline{f}(\mathcal{O}^{j}_{TN},\theta)=\begin{cases}1,&\text{if }f(\mathcal{O}^{j}_{TN},\theta)>0\\ -1,&\text{otherwise}\end{cases} \tag{18}\] The DMRG-like sweeping technique [58] for training the TNN uses a stochastic gradient-based optimization strategy in which a gradient descent step with a learning rate updates the local bond tensors \(B_{j,j+1}\) towards a global minimum of the loss function. In order to update the weights in the TNN, the gradient of the bond tensor \(B_{j,j+1}\) with respect to the loss is obtained by defining \(\overline{f}(\mathcal{O}^{j}_{TN})=TB\), where \(T\) represents the contraction of every tensor in the TNN other than the bond tensor \(B\).

## 3 Results

### Data Sets

The Iris dataset [53] is often used as a benchmark to evaluate the performance of different machine learning algorithms. The Iris dataset contains a total of \(150\) samples, and each sample has four features: sepal length, sepal width, petal length, and petal width. We extracted three distinct binary data sets from the original Iris dataset. We assigned \(80\%\) of the samples of each class to the training subset and the remaining \(20\%\) to the test data set. Researchers studying computer vision often rely on the MNIST dataset [54] as a benchmark for Artificial Neural Networks. The MNIST dataset has \(70,000\) \(28\times 28\) grayscale images (\(60,000\) for training and \(10,000\) for testing), divided into \(10\) classes, each containing \(7,000\) images. The CIFAR-10 [55] dataset comprises a total of \(60,000\) images in \(10\) categories (\(6,000\) for each class), with \(32\times 32\) colour images, including \(50,000\) training images and \(10,000\) test images. The tests, however, resize the CIFAR-10 images and transform them into \(28\times 28\) grey-scale images. However, owing to the limited number of qubits available on the NISQ processor, we perform binary classification tasks using this batch of images with labels \(0\) or \(9\), \(1\) or \(8\), \(2\) or \(7\), \(3\) or \(6\), and \(4\) or \(5\), and multi-class classification with labels \(0\), \(1\) or \(9\); \(2\), \(4\) or \(5\); and \(3\), \(6\) or \(7\). We had to restrict our datasets to two randomly selected classes in our investigations since the Qiskit Quantum Simulator only has access to a few qubits.

### Experimental Settings

We compute the original input data sets' mean and variance. The data sets are then normalized using the zero-mean normalization procedure to have a zero mean and unit variance before being fed into the TNN. The proposed TR-QNet comprises TN layers with several trainable MPO tensors, and a stochastic gradient-based algorithm has been employed to train the MPOs [28], relying on a DMRG-like technique. The prior tensor gradient technique is appropriate for TNN models fully composed of TN layers.
However, it is neither effective nor adaptable in models with hybrid architectures that combine TN layers and VQCs. We employed automated differentiation techniques [59] and a classical back-propagation algorithm to determine the gradient of the TNN trainable weights, as our TR-QNet is a feed-forward hybrid neural network combining TN and quantum layers. We have used the _TensorLy-Torch_ library to compute the automatic differentiation of TN layers in PyTorch settings. However, being a hybrid classical-quantum framework, the classical TNN model is simulated on classical hardware and Quan-TR on the Qiskit simulator. The weights of the TN layers are updated using a layer-by-layer approach. The intermediate dense layer has merely been included to make up for the size mismatch between the features in the last TN layer and Quan-TR, and it is not trainable. Each of the \(6\) MPO trainable weights on the TN layers has virtual dimension \(V\) and a _ReLu_ activation function. The last layer of Quan-TR in the output chain is a dense layer with softmax activation, which outputs one-hot encoded (OHE) vectors containing the predicted probabilities for the desired number of labels. We set up the initial \(V\)-qubit state as \(|00\cdots 0\rangle\), which is afterward transformed into a Tensor Ring (TR) representation, since \(\tau(\upsilon)\) is a \(\mathcal{B}\times\mathcal{B}\times 2\) tensor with only the (\(0,0,0\))\({}^{th}\) element as \(1\) and the rest as \(0\)s. Our Quan-TR is repeated \(r\) times to illustrate the depth of Quan-TR. The tensor ring rank in Quan-TR is set to \(d_{q}=4\) for all tests. Experiments have been carried out using varying numbers of qubits (\(4,6,8,10,12\)) and numbers of TN and Quan-TR layers on an Nvidia Tesla \(V100-SXM2\) GPU cluster with \(32\) GB of memory and \(640\) Tensor cores, with \(8\) cores of an Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.00 GHz. In the case of image classification, \(784\) input features (\(28\times 28\)) from the input images are received at the input layer of the proposed TR-QNet. With a maximum of \(25\) epochs, the Quan-TR layers of the proposed TR-QNet model are rigorously trained using the Adam optimizer with an initial learning rate of \(0.01\) and weight decay (\(\delta\)) of \(0\). Figure 2 shows the convergence of the loss during training of the proposed TR-QNet model for varying numbers of qubits and of TN and Quan-TR layers, with \(5\)-fold cross-validation. In the Iris data classification experiments, the proposed Quan-TR is provided with the four feature vectors (\(\mathcal{N}_{q}=4\)) for training from the previous TNN layer through the dense layer with batch size \(4\). We chose three measurements at random to represent the three classes of the dataset out of the \(2^{4}\) available measurements acquired from Quan-TR. To further transform the selected measurements into class probabilities, we employ the softmax activation function and the cross-entropy loss function as given in Equation 17. However, in the case of the MNIST and CIFAR-10 datasets, for binary classification, we choose the final measurements \(|00\cdots 0\rangle\) and \(|11\cdots 1\rangle\) as the output values and a batch size of \(32\), where multiple readouts need to feed the results to TR-QNet.

### Experimental Results

Extensive experiments have been conducted using large sets of the Iris [53], MNIST [54], and CIFAR-10 [55] datasets with varying qubit counts of \(4\), \(6\), \(8\), \(10\), and \(12\) and tensor ring ranks (\(d_{q}\)) of \(2\), \(3\), and \(4\), as provided in Table 1.
However, it has been found from the experimental data for the Iris dataset that the optimal result is obtained for the \(4\)-qubit Quan-TR model with a tensor ring rank of \(4\), as reported in Table 1. It is noteworthy that the proposed TR-QNet and its quantum counterparts, namely, the Variational Quantum Tensor Networks classifier (VQTN) [50], Quantum Convolutional Neural Networks (QCNN) [60], the Tensor Ring parametrized Variational Quantum Circuit (TR-VQC) [56], and the fully classically simulated Variational Tensor Neural Network (VTNN) [51], are trained on the binary and ternary combinations of classes from the datasets. In order to illustrate the resilience of the proposed model over its quantum counterparts and classical tensor neural network-based models, unseen test images from the Iris [53], MNIST [54], and CIFAR-10 [55] datasets are used for evaluation. The training loss curves for the proposed TR-QNet model are demonstrated on the Iris dataset [53] in Figure 2. The convergence analysis of the proposed TR-QNet is also provided in the _Appendix_. Table 2 summarizes the numerical results obtained using our TR-QNet with \(4\), \(6\) and \(8\) qubits, VQTN [50], QCNN [60] and TR-VQC [56] using \(4\) (Iris dataset) and \(8\) qubits (MNIST and CIFAR-10 datasets), and the fully classically simulated VTNN [51] on the Iris, MNIST and CIFAR-10 datasets. It has been observed from the experimental results reported in Table 2 that optimal accuracy has been achieved for class \(2\) or \(3\) in most cases of the Iris dataset. On the contrary, in the case of the MNIST and CIFAR-10 datasets, class \(3\) or \(6\) reports optimal accuracy for most of the models discussed in the manuscript. Our TR-QNet achieves promising accuracy of \(94.5\%\), \(86.16\%\), and \(83.54\%\) with \(4\) qubits on the Iris and with \(6\) qubits on the MNIST and CIFAR-10 datasets, respectively. However, in the case of multi-class classification, despite TR-QNet's low accuracy, as provided in Table 3, it outperforms VQTN and VTNN. It may be noted that TR-VQC and QCNN are not feasible for multi-class classification owing to the limitations of their frameworks. In addition, we use a \(\gamma=0.05\) significance threshold for a two-sided paired Wilcoxon signed-rank test [61] to demonstrate the effectiveness of the proposed TR-QNet model over other methods. It is evident from the two-sided paired Wilcoxon signed-rank test that the proposed TR-QNet model yields statistically significant results using \(4\) and \(6\) qubits Quan-TR for Iris data and image (MNIST and CIFAR-10) classification, respectively. This is primarily owing to the limited Iris data features demanding fewer qubits, whereas the larger image size requires more qubits. Further increasing to \(8,10\), and \(12\) qubits resulted in a substantial decrease in accuracy for the proposed TR-QNet and the other methods, probably as a result of over-parametrization [62] and barren plateaus [57].

Figure 2: TR-QNet training loss reported on a \(4\)-qubit system with varying numbers of TNN and Quan-TR layers for randomly selected binary classes (a) 1 or 2, (b) 2 or 3, and (c) 1 or 3, and (d) varying numbers of qubits (\(4,6,8,10\), and \(12\)) on the Iris dataset [53].

## 4 Discussions

The experimental results reported in the manuscript show that the proposed TR-QNet model outperforms its quantum and classical counterparts for binary classification and multi-class (ternary) classification on test datasets in the given experimental settings.
This is because the proposed TR-QNet handles classification tasks by substituting Quan-TR for the trainable weight matrices of the fully connected dense layers of a standard TNN; hence, the TNN acts as an efficient encoding tool, especially for large image features, with minimal loss of information from the input images. The VQC-based training algorithm resembling DMRG [58] enables straightforward access to the entanglement spectrum of the MPOs' [28] trainable weights, thereby facilitating a lucid comprehension of the correlations within the parameters of the TN layers. For efficient training of the proposed Quan-TR model, we have presented a novel entanglement-aware training technique relying on hybrid classical-quantum algorithms and stochastic gradient-descent updates.

\begin{table}
\begin{tabular}{l c c c c c c c c c}
\hline \hline
\multirow{2}{*}{**Qubit**} & \multicolumn{3}{c}{\(d_{q}\)**=2**} & \multicolumn{3}{c}{\(d_{q}\)**=4**} & \multicolumn{3}{c}{\(d_{q}\)**=6**} \\ \cline{2-10}
 & **1 or 2** & **2 or 3** & **1 or 3** & **1 or 2** & **2 or 3** & **1 or 3** & **1 or 2** & **2 or 3** & **1 or 3** \\ \hline
4 & 0.919 & 0.875 & 0.891 & 0.941 & 0.939 & 0.955 & 0.939 & 0.911 & 0.924 \\
6 & 0.801 & 0.794 & 0.789 & 0.882 & 0.865 & 0.880 & 0.829 & 0.817 & 0.829 \\
8 & 0.782 & 0.765 & 0.770 & 0.787 & 0.780 & 0.788 & 0.782 & 0.757 & 0.772 \\
10 & 0.728 & 0.765 & 0.760 & 0.773 & 0.743 & 0.760 & 0.769 & 0.750 & 0.763 \\
12 & 0.628 & 0.605 & 0.616 & 0.673 & 0.643 & 0.690 & 0.629 & 0.617 & 0.609 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Comparative analysis of the proposed 2-2 layer TR-QNet with varying numbers of qubits and tensor ring ranks (\(d_{q}\)) on the Iris dataset

\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c}
\hline \hline
\multirow{2}{*}{**Model**} & \multirow{2}{*}{**Qubits**} & \multicolumn{3}{c}{**Iris**} & \multicolumn{5}{c}{**MNIST**} & \multicolumn{5}{c}{**CIFAR-10**} \\ \cline{3-15}
 & & **1 or 2** & **2 or 3** & **1 or 3** & **0 or 9** & **1 or 8** & **2 or 7** & **3 or 6** & **4 or 5** & **0 or 9** & **1 or 8** & **2 or 7** & **3 or 6** & **4 or 5** \\ \hline
\multirow{3}{*}{**TR-QNet**} & 4 & **0.941** & **0.939** & **0.955** & 0.817 & 0.828 & 0.869 & 0.850 & 0.809 & 0.758 & 0.803 & 0.747 & 0.798 & 0.771 \\
 & 6 & 0.882 & 0.865 & 0.880 & **0.828** & **0.836** & **0.891** & **0.863** & **0.870** & **0.819** & **0.849** & **0.833** & **0.857** & **0.809** \\
 & 8 & 0.787 & 0.780 & 0.788 & 0.667 & 0.684 & 0.669 & 0.650 & 0.671 & 0.658 & 0.603 & 0.647 & 0.618 & 0.624 \\ \hline
**VQTN** & 4/6 & **0.924** & 0.905 & 0.911 & 0.813 & 0.806 & 0.829 & 0.811 & 0.823 & 0.788 & 0.794 & 0.776 & 0.745 & 0.763 \\ \hline
**QCNN** & 4/6 & 0.871 & 0.852 & 0.861 & 0.772 & 0.736 & 0.740 & 0.742 & 0.755 & 0.721 & 0.714 & 0.717 & 0.732 & 0.746 \\ \hline
**TR-VQC** & 4/6 & 0.853 & 0.849 & 0.829 & 0.803 & 0.799 & 0.802 & 0.789 & 0.790 & 0.767 & 0.759 & 0.761 & 0.753 & 0.747 \\ \hline
**VTNN** & N/A & 0.838 & 0.839 & 0.842 & 0.797 & 0.788 & 0.798 & 0.778 & 0.780 & 0.701 & 0.698 & 0.734 & 0.727 & 0.715 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Mean accuracy of the proposed TR-QNet with VQTN [50], TR-VQC [56], QCNN [60], and the fully classically simulated VTNN [51] on the test Iris [53], MNIST [54], and CIFAR-10 [55] datasets [the bold values relate to the two-sided paired Wilcoxon signed-rank test [61]]
\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline \hline
 & & \multicolumn{1}{c}{**Iris**} & \multicolumn{3}{c}{**MNIST**} & \multicolumn{3}{c}{**CIFAR-10**} \\ \cline{3-9}
**Model** & **Qubit** & **1, 2 or 3** & **0, 1 or 9** & **2, 4 or 5** & **3, 6 or 7** & **0, 1 or 9** & **2, 4 or 5** & **3, 6 or 7** \\ \hline
\multirow{3}{*}{**TR-QNet**} & 4 & **0.815** & **0.738** & 0.723 & **0.746** & 0.707 & 0.712 & 0.723 \\
 & 6 & 0.802 & **0.741** & **0.736** & **0.739** & **0.719** & **0.725** & **0.731** \\
 & 8 & 0.785 & 0.709 & 0.717 & 0.716 & **0.718** & 0.707 & 0.713 \\ \hline
**VQTN** & 6 & **0.811** & 0.714 & 0.701 & 0.698 & 0.688 & 0.682 & 0.694 \\ \hline
**VTNN** & N/A & 0.774 & 0.689 & 0.684 & 0.679 & 0.613 & 0.639 & 0.657 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Mean accuracy of the proposed TR-QNet with tensor ring rank \(d_{q}=4\), VQTN [50], and VTNN [51] for multi-class (3-class) classification [the bold numbers relate to the two-sided paired Wilcoxon signed-rank test]

This approach operates on a condensed parameter subspace obtained from the tensorization of trainable weights, leading to faster convergence and promising results. Moreover, our implementation enables the creation of hybrid architectures that combine TN layers, dense layers, and Quan-TR to create true instances of deep learning models. Furthermore, it is worth noting that the multi-layer design of Quan-TR within our proposed TR-QNet has the potential to produce a cascading effect of entanglement between the neuronal inputs and their outputs. Our results indicate that the classical TNNs with DMRG-like training and the Quan-TR methods work accurately and efficiently for data and image classification tasks. The direct access to the singular values throughout the virtual dimensions of the trainable MPOs of the TN layers, provided by the DMRG-like training method and the tensor ring optimized variational learning algorithm, is crucial, as it enables the computation of a measure of entanglement (correlation) between the features and the model parameters. Since more qubits signify a bigger Hilbert space to parametrize the input data [5], we see a general pattern of increasing classification accuracy with qubit count from \(4\) to \(6\). However, further increasing to \(8\), \(10\), and \(12\) qubits resulted in a substantial decrease in accuracy for the proposed TR-QNet, probably as a result of over-parametrization [62] and barren plateaus [57]. Owing to the additional non-linearity introduced by the truncated singular value decomposition over the MPOs and the two-qubit gate transformations, we also notice that, in the case of the Iris dataset, TR-QNet significantly outperforms VQTN, QCNN, and TR-VQC (which use full quantum state information) as well as the classically simulated VTNN. A series of experiments with different tensor ring ranks uses eight-qubit circuit topologies to examine the impact of the rank on the performance of TR-QNet, as this configuration yields optimal results with respect to input qubit counts. However, the proposed TR-QNet model for multi-class image classification has achieved only a comparable level of precision, primarily due to the inherently slow convergence of Quan-TR. Hence, even though promising performance is exhibited on relatively small datasets, the proposed TR-QNet is restricted by the inherent difficulties in scaling and the time-intensive training of Quan-TR. Nevertheless, TR-QNet has achieved higher accuracy in binary classification tasks when compared with its quantum and classical counterparts.
Our method paves the way for developing novel deep neural network representations of a quantum state. It serves as a useful tool for investigating the expressive potential of quantum neural states. We aim to develop an efficient TR-QNet model comprising an optimized Quan-TR with fewer hyper-parameters. The total number of parameters in the TNN is estimated as \(O(\mathcal{N}_{T}d_{T}n+(n-2)d_{T}^{3})\) [63]. In the case of Quan-TR, the computational complexity is \(O(\mathcal{N}_{q}d_{q})\), as each application of a single- or two-qubit gate in the proposed Quan-TR is \(O(1)\) [56].

## 5 Conclusion

In line with the impressive advances in quantum machine learning, the proposed TR-QNet framework offers an improvement over fully classical TNNs and has been developed as a proof of concept using hybrid classical-quantum algorithms for better training strategies for TNNs. In this paper, we have investigated the benefits of a Tensor Ring optimized variational Quantum learning classifier (Quan-TR) to find a better optimization strategy for TR-QNet, which exploits the entanglement inherent between qubits. The experimental results on the test datasets using the proposed TR-QNet model show its efficiency over its quantum and classical counterparts in binary and multi-class classification. Moreover, the experimental results demonstrate the efficacy of the proposed TR-QNet in various settings, which is crucial for data classification and image recognition on noisy intermediate-scale quantum (NISQ) devices. Consequently, our TR-QNet model is a strong contender for deep learning and can advance studies in quantum machine learning. However, it remains to investigate the current TR-QNet architecture for deep convolutional neural networks and their training algorithms for regression and classification, which can be deployed immediately on near-term quantum devices. The authors are currently working in this direction.

## Appendix

### Convergence Analysis of TR-QNet

Due to NISQ limitations, classical simulators are currently utilized to optimize and update parameters and feed them back to the Tensor Neural Network (TNN) and Quan-TR separately until convergence conditions are reached. Hence, we have used the cross-entropy loss to update the parameters. The loss function (\(\mathcal{L}_{\theta}\)) is defined over the hyper-parameters \(\theta\) of the proposed TR-QNet model as

\[\operatorname*{argmin}_{\theta}\mathcal{L}_{\theta}=-\sum_{j}^{\mathcal{N}_{q}}[t^{j}\log\overline{f}(\mathcal{O}_{TN}^{j})+(1-t^{j})\log\{1-\overline{f}(\mathcal{O}_{TN}^{j})\}] \tag{19}\]

where \(t^{j}\) corresponds to a target output, \(\mathcal{N}_{q}\) is the number of qubits in Quan-TR, and \(\overline{f}(\mathcal{O}_{TN}^{j})\) is the average outcome of the quantum measurement of qubit \(j\) with respect to the network hyper-parameter set \(\theta\), evaluated as

\[f(y_{j}(\theta),t^{j})=\sum_{j}^{\mathcal{N}_{q}}f((\langle 0|\psi^{\dagger}(\mathcal{O}_{TN}^{j})\mathcal{U}^{\dagger}(\theta)y_{j}\mathcal{U}(\theta)\psi(\mathcal{O}_{TN}^{j})|0\rangle),t^{j}) \tag{20}\]

where \(y_{j}(\theta)\in\{\overline{\lambda_{j}}\}\), \(t^{j}\) corresponds to a target output, and the preprocessed data from the TNN layer, denoted as \(\mathcal{O}_{TN}^{j}\), is transformed into a quantum state represented by \(|\psi(\mathcal{O}_{TN}^{j})\rangle\).
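As a minimal illustration of Equation 19, the following PyTorch sketch (our own; the tensor names are illustrative) computes the cross-entropy from the averaged per-qubit measurement outcomes:

```python
import torch

def tr_qnet_loss(f_bar: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # f_bar: averaged measurement outcome per qubit, shape (N_q,), values in (0, 1)
    # targets: binary target t^j per qubit, shape (N_q,)
    eps = 1e-9  # numerical guard against log(0)
    ll = targets * torch.log(f_bar + eps) + (1 - targets) * torch.log(1 - f_bar + eps)
    return -ll.sum()  # cross-entropy of Equation 19, minimized over the parameters
```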
In order to train the proposed Quan-TR model, the gradient of the loss function is evaluated as follows:

\[\frac{\delta\overline{f}(\mathcal{O}_{TN}^{j})}{\delta\theta^{j}}=\langle 0|\psi^{\dagger}(\mathcal{O}_{TN}^{j})\frac{\delta\mathcal{U}^{\dagger}(\theta)}{\delta\theta_{j}}y_{j}\mathcal{U}(\theta)\psi(\mathcal{O}_{TN}^{j})|0\rangle+\langle 0|\psi^{\dagger}(\mathcal{O}_{TN}^{j})\mathcal{U}^{\dagger}(\theta)y_{j}\frac{\delta\mathcal{U}(\theta)}{\delta\theta_{j}}\psi(\mathcal{O}_{TN}^{j})|0\rangle\]
\[=\langle 0|\psi^{\dagger}(\mathcal{O}_{TN}^{j})\mathcal{U}_{1}^{\dagger}(\theta_{1})\cdot\frac{\delta\mathcal{U}_{j}^{\dagger}(\theta_{j})}{\delta\theta_{j}}\cdot\mathcal{U}_{n}^{\dagger}(\theta_{n})y_{j}\mathcal{U}(\theta)\psi(\mathcal{O}_{TN}^{j})|0\rangle+ \tag{21}\]
\[\langle 0|\psi^{\dagger}(\mathcal{O}_{TN}^{j})\mathcal{U}^{\dagger}(\theta)y_{j}\mathcal{U}_{n}(\theta_{n})\cdot\frac{\delta\mathcal{U}_{j}(\theta_{j})}{\delta\theta_{j}}\cdot\mathcal{U}_{1}(\theta_{1})\psi(\mathcal{O}_{TN}^{j})|0\rangle\]

where \(\mathcal{U}_{j}(\theta_{j})=e^{-i\theta_{j}\psi(\theta^{j})}\). The global phase has no direct bearing on the results of the measurement, and hence we disregard it. The derivatives of the rotation gates can then be written as

\[\frac{\delta\psi(\theta,\mathcal{O}_{TN})}{\delta\theta^{j}}=\frac{1}{2}\psi(\theta+\frac{\pi}{2},\mathcal{O}_{TN}),\qquad\frac{\delta\psi^{\dagger}(\theta,\mathcal{O}_{TN})}{\delta\theta^{j}}=\frac{1}{2}\psi(\theta-\frac{\pi}{2},\mathcal{O}_{TN}) \tag{22}\]

Substituting Equation 22 into Equation 21, we obtain:

\[\frac{\delta\overline{f}(\mathcal{O}_{TN}^{j})}{\delta\theta^{j}}=\langle 0|\psi^{\dagger}(\mathcal{O}_{TN}^{j})\mathcal{U}_{1}^{\dagger}(\theta_{1})\cdot\frac{\delta\mathcal{U}_{j}^{\dagger}(\theta_{j})}{\delta\theta_{j}}\cdot\mathcal{U}_{n}^{\dagger}(\theta_{n})y_{j}\mathcal{U}(\theta)\psi(\mathcal{O}_{TN}^{j})|0\rangle+\langle 0|\psi^{\dagger}(\mathcal{O}_{TN}^{j})\mathcal{U}^{\dagger}(\theta)y_{j}\mathcal{U}_{n}(\theta_{n})\cdot\frac{\delta\mathcal{U}_{j}(\theta_{j})}{\delta\theta_{j}}\cdot\mathcal{U}_{1}(\theta_{1})\psi(\mathcal{O}_{TN}^{j})|0\rangle\]
\[=\frac{1}{2}\{-\langle 0|\psi^{\dagger}(\mathcal{O}_{TN}^{j})\mathcal{U}_{1}^{\dagger}(\theta_{1})\cdot\frac{\delta\mathcal{U}_{j}^{\dagger}(\theta_{j}-\frac{\pi}{2})}{\delta\theta_{j}}\cdot\mathcal{U}_{n}^{\dagger}(\theta_{n})y_{j}\mathcal{U}(\theta)\psi(\mathcal{O}_{TN}^{j})|0\rangle+\]
\[\langle 0|\psi^{\dagger}(\mathcal{O}_{TN}^{j})\mathcal{U}^{\dagger}(\theta)y_{j}\mathcal{U}_{n}(\theta_{n})\cdot\frac{\delta\mathcal{U}_{j}(\theta_{j}+\frac{\pi}{2})}{\delta\theta_{j}}\cdot\mathcal{U}_{1}(\theta_{1})\psi(\mathcal{O}_{TN}^{j})|0\rangle\}\]
\[=\frac{1}{2}\{\langle 0|\psi^{\dagger}(\mathcal{O}_{TN}^{j})\mathcal{U}_{-}^{\dagger}[i\psi_{j}]\mathcal{U}_{+}^{\dagger}y_{j}\mathcal{U}(\theta)\psi(\mathcal{O}_{TN}^{j})|0\rangle-\langle 0|\psi^{\dagger}(\mathcal{O}_{TN}^{j})\mathcal{U}^{\dagger}(\theta)y_{j}\mathcal{U}_{+}[-i\psi_{j}]\mathcal{U}_{-}\psi(\mathcal{O}_{TN}^{j})|0\rangle\}\]
\[=\frac{1}{2}\Psi_{\theta_{+}}(\psi(\mathcal{O}_{TN}^{j}))-\frac{1}{2}\Psi_{\theta_{-}}(\psi(\mathcal{O}_{TN}^{j})) \tag{23}\]

For the rotation gates \(\mathcal{R}_{y}(\omega_{y})\) and \(\mathcal{R}_{z}(\omega_{z})\) of Quan-TR in TR-QNet, the angles of rotation (the variational parameters \(\theta\)) are \(\omega_{y}\) and \(\omega_{z}\), respectively.
The rotation gates \(\mathcal{R}_{y}(\omega_{y})\) and \(\mathcal{R}_{z}(\omega_{z})\) of Quan-TR operate on the qubits \(|\psi_{y}\rangle\) and \(|\psi_{z}\rangle\) as follows.

\[|\psi_{y}(\iota+1)\rangle=\left(\begin{array}{cc}\cos\triangle\omega_{y}(\iota)&-\sin\triangle\omega_{y}(\iota)\\ \sin\triangle\omega_{y}(\iota)&\cos\triangle\omega_{y}(\iota)\end{array}\right)|\psi_{y}(\iota)\rangle \tag{24}\]

\[|\psi_{z}(\iota+1)\rangle=\left(\begin{array}{cc}\exp(-j\triangle\omega_{z}(\iota))&0\\ 0&\exp(j\triangle\omega_{z}(\iota))\end{array}\right)|\psi_{z}(\iota)\rangle \tag{25}\]

where

\[\omega_{y}(\iota+1)=\omega_{y}(\iota)+\triangle\omega_{y}(\iota) \tag{26}\]

and

\[\omega_{z}(\iota+1)=\omega_{z}(\iota)+\triangle\omega_{z}(\iota) \tag{27}\]

For the quantum layers in Quan-TR at epoch \(\iota\), Equations 26 and 27 capture the changes in the angles, \(\triangle\omega_{y}(\iota)\) and \(\triangle\omega_{z}(\iota)\), respectively. Let us consider

\[\mathcal{C}(\iota)=\omega_{y}(\iota)-\overline{\omega_{y}(\iota)} \tag{28}\]
\[\mathcal{D}(\iota)=\omega_{z}(\iota)-\overline{\omega_{z}(\iota)} \tag{29}\]
\[\mathcal{R}(\iota)=\omega_{y}(\iota+1)-\omega_{y}(\iota)=\mathcal{C}(\iota+1)-\mathcal{C}(\iota) \tag{30}\]
\[\mathcal{S}(\iota)=\omega_{z}(\iota+1)-\omega_{z}(\iota)=\mathcal{D}(\iota+1)-\mathcal{D}(\iota) \tag{31}\]

Here, \(\overline{\omega}_{y}(\iota)\) and \(\overline{\omega}_{z}(\iota)\) are the optimal angles for the rotation gates \(\mathcal{R}_{y}(\omega_{y})\) and \(\mathcal{R}_{z}(\omega_{z})\), respectively. In order to update the weights in the TNN, the gradient of the bond tensor \(\mathcal{B}^{j,j+1}\) with respect to the loss (\(\mathcal{L}_{\theta}\)) is obtained by defining \(\overline{f}(\mathcal{O}_{TN}^{j})=\mathcal{T}\mathcal{B}\), where \(\mathcal{T}\) represents the contraction of every tensor in the TNN other than the bond tensor \(\mathcal{B}\). When considering \(\mathcal{B}^{j}(\iota)\), \(\omega_{y}^{j}(\iota)\), and \(\omega_{z}^{j}(\iota)\), the loss function \(\mathcal{L}_{\theta}(\mathcal{B},\omega_{y},\omega_{z})\) is differentiated as follows:

\[\frac{\partial\mathcal{L}_{\theta}(\mathcal{B},\omega_{y},\omega_{z})}{\partial\mathcal{B}^{j}(\iota)}=\sum_{j=1}^{\mathcal{N}_{q}}\frac{\partial\overline{f}(\mathcal{O}_{TN}^{j})}{\partial\mathcal{B}^{j}(\iota)}\left[\frac{t^{j}}{\overline{f}\left(\mathcal{O}_{TN}^{j}\right)}-\frac{t^{j}-1}{1-\overline{f}(\mathcal{O}_{TN}^{j})}\right]=\sum_{j=1}^{\mathcal{N}_{q}}\mathcal{T}^{j}(\iota)\left[\frac{t^{j}}{\overline{f}\left(\mathcal{O}_{TN}^{j}\right)}-\frac{t^{j}-1}{1-\overline{f}(\mathcal{O}_{TN}^{j})}\right] \tag{32}\]

Hence, the change in the bond tensor, designated as \(\triangle\mathcal{B}^{j}(\iota)\), is evaluated as

\[\triangle\mathcal{B}^{j}(\iota)=-\gamma(\iota)\frac{\partial\mathcal{L}_{\theta}(\mathcal{B},\omega_{y},\omega_{z})}{\partial\mathcal{B}^{j}(\iota)} \tag{33}\]

Here, \(\gamma(\iota)\) is the learning rate in the gradient descent procedure for updating the bond tensors in the TN layers.
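The rotation-angle gradients that follow (Equations 36 and 37) apply the parameter-shift rule derived in Equation 23. As a minimal, framework-agnostic sketch of that rule (our own illustration; `expectation_fn` is a stand-in for the Quan-TR measurement routine):

```python
import numpy as np

def parameter_shift_grad(expectation_fn, omega: np.ndarray, j: int) -> float:
    # expectation_fn(omega) -> measured expectation value of the circuit
    # evaluated at the rotation angles `omega`.
    shift = np.zeros_like(omega)
    shift[j] = np.pi / 2
    # Half the difference of the two shifted evaluations gives the gradient
    # with respect to the j-th rotation angle.
    return 0.5 * (expectation_fn(omega + shift) - expectation_fn(omega - shift))
```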
\[\frac{\partial\mathcal{L}_{\theta}(\mathcal{B},\omega_{y},\omega_{z})}{\partial\omega_{y}^{j}(\iota)}=\sum_{j=1}^{\mathcal{N}_{q}}\frac{\partial\overline{f}(\mathcal{O}_{TN}^{j})}{\partial\omega_{y}^{j}(\iota)}\left[\frac{t^{j}}{\overline{f}\left(\mathcal{O}_{TN}^{j}\right)}-\frac{t^{j}-1}{1-\overline{f}(\mathcal{O}_{TN}^{j})}\right] \tag{34}\]

\[\frac{\partial\mathcal{L}_{\theta}(\mathcal{B},\omega_{y},\omega_{z})}{\partial\omega_{z}^{j}(\iota)}=\sum_{j=1}^{\mathcal{N}_{q}}\frac{\partial\overline{f}(\mathcal{O}_{TN}^{j})}{\partial\omega_{z}^{j}(\iota)}\left[\frac{t^{j}}{\overline{f}\left(\mathcal{O}_{TN}^{j}\right)}-\frac{t^{j}-1}{1-\overline{f}(\mathcal{O}_{TN}^{j})}\right] \tag{35}\]

Here, parameter-shift techniques are used to evaluate the gradients of the Quan-TR parameters \(\omega_{y}\) and \(\omega_{z}\) [64, 65, 50] as follows.

\[\frac{\partial\overline{f}(\mathcal{O}_{TN}^{j})}{\partial\omega_{y}^{j}(\iota)}=\frac{1}{2}\left[\Psi^{\iota}_{\omega_{y}+\frac{\pi}{2}}(\psi(\mathcal{O}_{TN}^{j}))-\Psi^{\iota}_{\omega_{y}-\frac{\pi}{2}}(\psi(\mathcal{O}_{TN}^{j}))\right] \tag{36}\]

and

\[\frac{\partial\overline{f}(\mathcal{O}_{TN}^{j})}{\partial\omega_{z}^{j}(\iota)}=\frac{1}{2}\left[\Psi^{\iota}_{\omega_{z}+\frac{\pi}{2}}(\psi(\mathcal{O}_{TN}^{j}))-\Psi^{\iota}_{\omega_{z}-\frac{\pi}{2}}(\psi(\mathcal{O}_{TN}^{j}))\right] \tag{37}\]

where \(\Psi^{\iota}_{\omega_{y}\pm\frac{\pi}{2}}(\psi(\mathcal{O}_{TN}^{j}))\) and \(\Psi^{\iota}_{\omega_{z}\pm\frac{\pi}{2}}(\psi(\mathcal{O}_{TN}^{j}))\) are the measurement outcomes of the qubit \(\psi(\mathcal{O}_{TN}^{j})\) with the rotation angles shifted about \(\omega_{y}^{j}(\iota)\) and \(\omega_{z}^{j}(\iota)\), respectively. The changes in the angles of the rotation gates used to update the qubits are designated as \(\triangle\omega_{y}^{j}(\iota)\) and \(\triangle\omega_{z}^{j}(\iota)\), and the rotation angles are then modified using the formulas below.

\[\triangle\omega_{y}^{j}(\iota)=-\nu(\iota)\left\{\frac{\partial\overline{f}(\mathcal{O}_{TN}^{j})}{\partial\omega_{y}^{j}(\iota)}\right\} \tag{38}\]

\[\triangle\omega_{z}^{j}(\iota)=-\mu(\iota)\left\{\frac{\partial\overline{f}(\mathcal{O}_{TN}^{j})}{\partial\omega_{z}^{j}(\iota)}\right\} \tag{39}\]

Here, \(\nu(\iota)\) and \(\mu(\iota)\) are the learning rates in the gradient descent procedure for updating the rotation angles.

## 6 Data availability

The Iris [53], MNIST [54], and CIFAR-10 [55] datasets are available at the following links: [https://archive.ics.uci.edu/dataset/53/iris](https://archive.ics.uci.edu/dataset/53/iris), [http://yann.lecun.com/exdb/mnist/](http://yann.lecun.com/exdb/mnist/), and [https://www.cs.toronto.edu/~kriz/cifar.html](https://www.cs.toronto.edu/~kriz/cifar.html), respectively.

## 7 Code Availability & Description

The PyTorch implementation of TR-QNet is available on GitHub: [https://github.com/konar1987/TR-QNet/](https://github.com/konar1987/TR-QNet/).

## 8 Acknowledgements

This work was partially supported by the Fulbright-Nehru Visiting Researcher Grant \(\#2858FNPDR/2022\).
2301.04338
Synthetic data generation method for data-free knowledge distillation in regression neural networks
Knowledge distillation is the technique of compressing a larger neural network, known as the teacher, into a smaller neural network, known as the student, while still trying to maintain the performance of the larger neural network as much as possible. Existing methods of knowledge distillation are mostly applicable to classification tasks. Many of them also require access to the data used to train the teacher model. To address the problem of knowledge distillation for regression tasks in the absence of original training data, previous work has proposed a data-free knowledge distillation method where synthetic data are generated using a generator model trained adversarially against the student model. These synthetic data and their labels predicted by the teacher model are then used to train the student model. In this study, we investigate the behavior of various synthetic data generation methods and propose a new synthetic data generation strategy that directly optimizes for a large but bounded difference between the student and teacher model. Our results on benchmark and case study experiments demonstrate that the proposed strategy allows the student model to learn better and emulate the performance of the teacher model more closely.
Tianxun Zhou, Keng-Hwee Chiam
2023-01-11T07:26:00Z
http://arxiv.org/abs/2301.04338v2
# Synthetic data generation method for data-free knowledge distillation in regression neural networks

###### Abstract

Knowledge distillation is the technique of compressing a larger neural network, known as the teacher, into a smaller neural network, known as the student, while still trying to maintain the performance of the larger neural network as much as possible. Existing methods of knowledge distillation are mostly applicable to classification tasks. Many of them also require access to the data used to train the teacher model. To address the problem of knowledge distillation for regression tasks in the absence of original training data, previous work has proposed a data-free knowledge distillation method where synthetic data are generated using a generator model trained adversarially against the student model. These synthetic data and their labels predicted by the teacher model are then used to train the student model. In this study, we investigate the behavior of various synthetic data generation methods and propose a new synthetic data generation strategy that directly optimizes for a large but bounded difference between the student and teacher model. Our results on benchmark and case study experiments demonstrate that the proposed strategy allows the student model to learn better and emulate the performance of the teacher model more closely.

## 1 Introduction

In the past decade, advances in algorithms, computational hardware, and data availability have enabled significant developments in artificial neural networks and deep learning (Lecun et al., 2015). Neural network models are now state-of-the-art in many fields of application, including computer vision (O'Mahony et al., 2020), natural language processing (Otter et al., 2021), and signal processing (Purwins et al., 2019). However, as models become increasingly larger in size, measured by the number of parameters, they also become computationally expensive to store and perform inference on. Large neural networks can be unusable in real-world deployment scenarios where hardware is limited, such as on mobile devices or microcontrollers, or when deployed as a service to a large number of users, such as web applications (Cheng et al., 2018; Deng et al., 2020). Knowledge distillation is a class of methods that addresses this problem by distilling the predictive capabilities of a larger neural network into a smaller neural network, allowing for faster inference and lower memory requirements (Gou et al., 2021). Several knowledge distillation methods have been proposed in the past, typically requiring the original data that was used to train the teacher model. However, in many real-world applications, the original data may not be available for performing knowledge distillation to student models for reasons such as data size and data privacy (Chen et al., 2019; Gou et al., 2021). To deal with such situations, data-free knowledge distillation methods have been proposed to allow distillation of knowledge without the original training data (Hu et al., 2020; Lopes et al., 2017; Micaelli and Storkey, 2019; Ye et al., 2020; Yoo et al., 2019). Data-free knowledge distillation works by generating synthetic data and training the student model with these data and the labels predicted for them by the teacher model. Much of the existing research on knowledge distillation has focused on classification tasks.
However, regression tasks are common in many engineering applications (Guo et al., 2021; Schweidtmann et al., 2021), and there are limited methods available for knowledge distillation of regression neural networks. Recently, (Kang and Kang, 2021) proposed the first data-free knowledge distillation method for regression, where a generator model is trained in an adversarial manner to generate synthetic data. Motivated by the need for data-free model distillation of regression models in real-world applications, in this work we investigate the behaviors of several synthetic data generation methods, including random sampling and an adversarial generator. Based on the insights gained from this investigation, we propose an improved method to generate synthetic data for data-free knowledge distillation of regression neural networks by optimizing a loss function defined directly on the student and teacher model predictions, rather than implicitly through an additional generator model. Compared to existing methods, synthetic data generated through this process can provide a large difference in prediction between the student and teacher models while mimicking real data better. We demonstrate that this method for synthetic data generation can provide better performance than existing methods through experiments on 7 standard regression datasets, as well as on the MNIST handwritten digit dataset adapted for regression, and a real-world bioinformatics case study of protein solubility prediction.

## 2 Related work

### Knowledge distillation

As neural networks become increasingly large in number of parameters, the deployment of such models faces a difficult challenge for applications such as mobile devices and embedded systems due to limitations in computational resources and memory (Cheng et al., 2018; Deng et al., 2020). To address such problems, model compression through knowledge distillation has become an active area of research in recent years. Knowledge distillation is the technique whereby knowledge learned by a larger teacher model is transferred to a smaller student model (Gou et al., 2021; Hinton et al., 2015). The main idea is that the student model mimics the teacher model to achieve a similar or even superior performance. Various methods of knowledge distillation define and focus on different forms of knowledge. Following the nomenclature in (Gou et al., 2021), these can be largely grouped into response-based knowledge, feature-based knowledge, and relation-based knowledge. For response-based knowledge, outputs of the teacher model are used to supervise the training of the student model. For example, (Hinton et al., 2015) uses soft targets from the logits output of the teacher model to train the student. For feature-based knowledge, outputs of intermediate layers, or feature maps learned by the teacher model, can be used to supervise the training of the student model. For example, (Romero et al., 2014) trains the student model to match the feature activations of the teacher model. For relation-based knowledge, the relationships between different layers or data samples are used. For example, (Yim et al., 2017) uses the inner products between features from two layers to represent the relationship between different layers, while (Chen et al., 2021) trains the student model to preserve the similarity of samples' feature embeddings in the intermediate layers of the teacher model.
### Data-free knowledge distillation

In some situations, access to the original data used to train the teacher model is not available due to issues such as privacy and legal reasons. Data-free knowledge distillation methods have been proposed to allow model distillation in the absence of original training data. This is achieved by generating synthetic data for training. Many methods achieve this by using generative adversarial networks (GANs) (Chen et al., 2019; Hu et al., 2020; Micaelli and Storkey, 2019; Ye et al., 2020; Yoo et al., 2019). For example, (Micaelli and Storkey, 2019) train a generator model to generate synthetic images that maximize the difference in prediction (measured by KL divergence) between the teacher and student models. The student model is then trained to minimize the difference on these synthetic images. Other methods such as (Lopes et al., 2017) make use of metadata collected during training of the teacher model, in the form of layer activation records, to reconstruct a dataset for training the student model.

### Knowledge distillation for regression

Most of the methods existing in the knowledge distillation literature deal with classification problems. These methods generally are not immediately applicable to regression problems, where the predictions are unbounded real values. For regression problems, (Chen et al., 2017) uses a teacher-bounded regression loss where the teacher's predictions serve as an upper bound for the student model instead of being used directly as a target. (Takamoto et al., 2020) uses a teacher outlier rejection loss that rejects outliers in training samples based on the teacher model predictions. (Kang and Kang, 2021) introduced the first work that addresses data-free knowledge distillation for regression, using a generator model that generates synthetic datapoints and is trained adversarially together with the student model.

## 3 Material and methods

### Overview of methods

Given a trained teacher model \(T\) and a student model \(S_{\theta}\) parameterized by \(\theta\), we generate synthetic data \(x\) via some data generation method. The student model is trained by minimizing the student loss \(L_{S}(x)\) defined in equation 1 using gradient descent. This generic method is illustrated in Figure 1.

\[L_{S}(x)=(T(x)-S_{\theta}(x))^{2} \tag{1}\]

The performance of the student model in mimicking the teacher depends on the representational capacity of the student model \(S_{\theta}\), the data \(x\) used to train it, and the optimization process that minimizes the student loss. Hence, for a fixed student model architecture and training process, the synthetic data generation process plays the key role in determining the performance of the student model.

### Synthetic data generation methods

Three types of synthetic data generation methods are investigated in this study: random sampling, generative model, and direct optimization.

#### 3.2.1 Random sampling

Synthetic data are generated by sampling randomly from an input distribution. Assuming the input has been standardized, random samples can be drawn from a Gaussian distribution \(\sim\mathcal{N}(0,I)\). Random samples can also be drawn through quasi-Monte Carlo methods such as Latin hypercube sampling and Halton sequences, which are designed to evenly cover the input space. Input space bounds may be defined using the maximum and minimum values of an available validation or test set, or based upon some prior knowledge.
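As a minimal sketch of these two sampling strategies (our own illustration using NumPy and SciPy; function names are not from the original work):

```python
import numpy as np
from scipy.stats import qmc

def gaussian_samples(n: int, d: int) -> np.ndarray:
    # Standardized inputs: draw synthetic points from N(0, I).
    return np.random.randn(n, d)

def latin_hypercube_samples(n: int, d: int, low, high) -> np.ndarray:
    # Quasi-Monte Carlo sampling that evenly covers a bounded input space,
    # e.g. with bounds taken from a validation set as described above.
    sampler = qmc.LatinHypercube(d=d)
    return qmc.scale(sampler.random(n), low, high)
```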
Figure 1: Generic data-free knowledge distillation method

#### 3.2.2 Generator model

A generator model for producing synthetic data was proposed for data-free knowledge distillation on regression tasks by (Kang and Kang, 2021), following similar methods for classification tasks (Micaelli and Storkey, 2019). In this method, a generator model \(G_{\phi}\) parameterized by \(\phi\) is trained to output samples that would result in a large difference between the student and teacher models' predictions. This generator model is trained in an adversarial manner against the student model during the distillation process by optimizing the generator loss function in equation 2.

\[L_{G}(z)=\mathbb{E}_{x_{g}\sim G_{\phi}(z)}[-(T(x_{g})-S_{\theta}(x_{g}))^{2}] \tag{2}\]

The student is trained using the student loss to minimize the difference between the teacher's predictions and its own. The two opposing learning objectives are optimized in a sequential adversarial manner, and the student model learns to match the predictions of the teacher model as training continues. This process is illustrated in Figure 2. In practice, regularization terms may be added to the generator loss to prevent complete deviation from the underlying data distribution, for example by adding the squared \(L_{2}\)-norms of \(x_{g}\) and \(S_{\theta}(x_{g})\), yielding:

\[L_{G}(z)=\mathbb{E}_{x_{g}\sim G_{\phi}(z)}[-(T(x_{g})-S_{\theta}(x_{g}))^{2}+\beta\|x_{g}\|^{2}+\gamma S_{\theta}(x_{g})^{2}] \tag{3}\]

Figure 2: Data-free distillation with generator method

#### 3.2.3 Direct optimization from random samples

The generator model approach attempts to train the generative model \(G_{\phi}\) to implicitly approximate the inverse function of the student loss, where the generative model predicts \(x\) given the objective of a high student loss. It is not immediately clear whether the generative model is able to learn this inverse function easily. Since the goal of the generative model approach is to generate samples that maximize the student loss, it is more straightforward to maximize the student loss directly, as formulated below.

\[\max_{x_{g}}\;(T(x_{g})-S_{\theta}(x_{g}))^{2}\]

Or, following conventions:

\[\min_{x_{g}}\;-(T(x_{g})-S_{\theta}(x_{g}))^{2} \tag{4}\]

In practice, following the generator method, we may add regularization terms as well, such as in equation 5.

\[\min_{x_{g}}\;-(T(x_{g})-S_{\theta}(x_{g}))^{2}+\beta\|x_{g}\|^{2}+\gamma S_{\theta}(x_{g})^{2} \tag{5}\]

As shown later in Sections 3.5 and 3.6, the methodology is flexible, and an arbitrary loss function may be used to incorporate loss terms designed to capture important properties of the data. This minimization can be done through various optimization algorithms. If both the student and teacher models are differentiable, gradient descent can be used. Black-box metaheuristic optimization methods such as genetic algorithms and simulated annealing may also be used, especially if the teacher model gradients are unavailable. The method is illustrated in Figure 3. When using direct optimization of the student loss with gradient descent, it is possible to derive theoretical guarantees for (a) generating samples that are better than random samples and (b) generating samples that are bounded in their deviation away from the underlying distribution.
The gradient descent update is:

\[x_{g,t+1}=x_{g,t}+\eta\frac{\partial}{\partial x_{g,t}}[T(x_{g,t})-S_{\theta}(x_{g,t})]^{2} \tag{6}\]

Assuming the neural networks are locally smooth (Lipschitz continuous), given some sufficiently small learning rate \(\eta\), \(x_{g,t+1}\) always improves upon \(x_{g,t}\), fulfilling guarantee (a). Given some learning rate \(\eta\) and number of gradient descent steps \(t_{max}\), \(x_{g,t_{max}}\) deviates from \(x_{g,0}\), randomly sampled from the underlying distribution, by a bounded amount, fulfilling guarantee (b). The proof for guarantee (a) is provided in [1] p.466, and the proof for guarantee (b) is provided in the supplementary materials. It is not obvious to us that the generator model method can fulfil guarantee (a), because \(x_{g}\) is generated from Gaussian noise \(z\) of an arbitrary dimension and is not related to random samples in the input space; and to fulfil guarantee (b), a bound on the deviation of \(x_{g}\) from 0 exists only if a regularization term is applied to \(x_{g}\). The proof for the bound on the magnitude of \(x_{g}\) for the generator method with \(L_{2}\) regularization is provided in the supplementary materials.

#### 3.2.4 Proposed method for knowledge distillation

The proposed data-free knowledge distillation method generates training data \(x_{g}\) through direct optimization of the student loss with gradient descent. In the synthetic data generation step, assuming inputs are standardized, a batch of random samples is drawn from a Gaussian distribution \(\sim\mathcal{N}(0,I)\). Gradient descent is used to perturb these random samples in the direction of maximizing their student loss values, obtaining \(x_{g}\). In the student training step, the student weights are updated to minimize the student loss with respect to the synthetic data \(x_{g}\).

Figure 3: Data-free model distillation with direct optimization method

Following the methods proposed in (Kang and Kang, 2021), the generated data are also supplemented with random samples \(x_{p}\) drawn from a Gaussian distribution \(\sim\mathcal{N}(0,I)\). The sample weights for the generated samples \(x_{g}\) and random samples \(x_{p}\) are controlled by a factor \(\alpha\), which can be a fixed value or follow a schedule based on the training epoch.

\[L_{S}=\alpha L_{S}(x_{g})+(1-\alpha)L_{S}(x_{p}) \tag{7}\]

Setting \(\alpha\) to \(0\) is equivalent to the random sampling strategy. Setting \(\alpha\) to \(1\) is a pure generative sampling strategy. Note that for both edge cases, since the loss of only one set of samples contributes to the training, the number of training samples in each epoch needs to be doubled for a fair comparison with cases where \(\alpha\) is between \(0\) and \(1\). We investigate a decreasing \(\alpha\) schedule as well as a pure \(x_{g}\) training strategy in the experiments. The training process is provided in Algorithms 1 and 2 below. In the main procedure, **Data-free model distillation**, where the data distillation training happens, the number of training epochs for the student model is defined as \(t_{max}\), and the number of batches per epoch is defined as \(n_{s}\). In the sub-procedure, **Optimize**, where direct optimization to generate synthetic data is done via gradient descent, the number of gradient descent steps is defined as \(\tau_{max}\).
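Complementing the formal pseudocode below, the following is a minimal PyTorch sketch of the two procedures; it is our own illustration under the assumptions of standardized tabular inputs and the hyper-parameter values listed in Section 3.4, and the function names are not from the original work.

```python
import torch

def optimize(z, teacher, student, eta=0.1, tau_max=2, beta=1e-5, gamma=1e-5):
    # Sub-procedure Optimize: perturb random samples toward a high student loss.
    x_g = z.clone().requires_grad_(True)
    opt = torch.optim.RMSprop([x_g], lr=eta)
    for _ in range(tau_max):
        loss = (-(teacher(x_g) - student(x_g)) ** 2          # maximize disagreement
                + beta * x_g.norm(dim=1, keepdim=True) ** 2  # stay near the origin
                + gamma * student(x_g) ** 2).mean()          # Equation 5 regularizers
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_g.detach()

def distill(teacher, student, d, alpha=0.5, t_max=2000, n_s=10, m=50):
    # Main procedure: train the student on x_g and random samples x_p (Equation 7).
    opt = torch.optim.RMSprop(student.parameters(), lr=1e-3, weight_decay=1e-5)
    for _ in range(t_max):
        for _ in range(n_s):
            x_g = optimize(torch.randn(m, d), teacher, student)
            x_p = torch.randn(m, d)
            with torch.no_grad():  # teacher predictions are fixed targets
                y_g, y_p = teacher(x_g), teacher(x_p)
            loss = (alpha * (y_g - student(x_g)) ** 2
                    + (1 - alpha) * (y_p - student(x_p)) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```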
```
Input: teacher model \(T\)
Output: student model \(S_{\theta}\)
1  for \(t=1\) to \(t_{max}\) do
2      for \(i=1\) to \(n_{s}\) do
3          \(z\sim\mathcal{N}(0,I)\)
4          \(x_{g}\leftarrow\textbf{Optimize}(z)\)
5          \(x_{p}\sim\mathcal{N}(0,I)\)
6          \(L\leftarrow\alpha L_{S}(x_{g})+(1-\alpha)L_{S}(x_{p})\)
7          Update \(S_{\theta}\) with gradient descent w.r.t. \(L\)
```
**Algorithm 1** Main procedure: **Data-free model distillation**

```
Input: \(z\)
Output: \(x_{g}\)
1  \(x_{g}\gets z\)
2  for \(\tau=1\) to \(\tau_{max}\) do
3      \(L_{S}\leftarrow-(T(x_{g})-S_{\theta}(x_{g}))^{2}+\beta\|x_{g}\|^{2}+\gamma S_{\theta}(x_{g})^{2}\)
4      \(x_{g}\gets x_{g}-\eta\frac{\partial}{\partial x_{g}}L_{S}\)
```
**Algorithm 2** Sub-procedure: **Optimize**

### Regression datasets for experiments

To facilitate comparison with the previous work by (Kang and Kang, 2021), the experiments were conducted on the same datasets. These 7 datasets are regression problem sets available from the UCI machine learning repository (Dheeru and Casey, 2019) and the KEEL dataset repository (Alcala-Fdez et al., 2010). 'longitude' was selected as the output variable for Indoorloc. Details of the datasets are provided in Table 1.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
**Dataset** & **Number of features** & **Number of samples** \\ \hline
Compactiv & 21 & 8192 \\
Cpusmall & 12 & 8192 \\
CTScan & 384 & 53500 \\
Indoorloc & 520 & 19337 \\
Mv & 10 & 40768 \\
Pole & 26 & 14998 \\
Puma32h & 32 & 8192 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: List of regression datasets

The data are split into training and test sets. The training set consists of 5000 samples for each dataset. 10% of the remaining samples are placed into the validation set, and the remaining 90% form the test set. The validation set is used to periodically evaluate the training of the student model. In the data processing step, all values were standardized to a mean of 0 and a standard deviation of 1. Two processing workflows were tested: one where the scaling factors were calculated on the training set only and then applied to the test set, and one where the scaling was done on the whole dataset prior to splitting into training and test data. No significant differences were observed between the two workflows, and the second workflow was used for the results for simplicity.

### Experiment setup for regression datasets

To facilitate comparison, we used the same experiment setup for the neural networks as was used in [10]. The teacher model is a fully connected feed-forward network containing 1 hidden layer of 500 units with the Tanh activation function. The student model is also a fully connected feed-forward network containing 1 hidden layer of either 25 or 50 units with the Tanh activation function. The teacher model is trained with the training data, while student models are trained without access to any real data from the training set. The RMSProp optimizer is used for gradient descent, with a learning rate of \(10^{-3}\) and weight decay regularization of \(10^{-5}\). The batch size \(m\) is set to 50, and the number of batches in each epoch, \(n_{s}\), is set to 10. \(\beta\) and \(\gamma\) are selected to be \(10^{-5}\). The number of epochs is selected as 2000. The model that performed best on the validation loss was used for evaluation on the test set. For the direct optimization method to generate synthetic data, an RMSProp optimizer with a learning rate of \(10^{-1}\) and 2 optimization steps was used; how these two hyperparameters were selected is elaborated in Section 4.1.

### Experiments on MNIST dataset

To further test the applicability of our method on different types of inputs, and on deeper and more complex neural network architectures, we designed an experiment for data-free knowledge distillation for regression on the MNIST handwritten digits dataset.
The MNIST dataset is originally intended for classification. Following the method presented in [23], we adapt it for regression by making the neural network predict a continuous number representing the class value of the digit label of the input image. The performance of the model is measured by the mean absolute error (MAE) between the predicted value and the actual value of the digit. For example, a perfect model should predict a value of 3.0 for an image of the handwritten digit 3; a prediction of 2.9 would result in an MAE of 0.1. The input image in MNIST is a single-channel image of size 28 by 28 pixels, with each pixel taking a value between 0 and 1. The mean \(\mu\) and standard deviation \(\sigma\) of each pixel position are calculated over the entire dataset and used to generate random datapoints from a normal distribution \(\mathcal{N}(\mu,\sigma)\) clipped to the range 0 to 1. As proposed by [23], we used a multi-layer convolutional neural network with the architecture specified in Table 2.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
**Name** & **Filters/units** & **Activation function** \\ \hline
Conv2D-1 & 3 x 3 x f & ReLU \\
Conv2D-2 & 3 x 3 x 2f & softplus \\
Maxpool2D-1 & 2 x 2 & \\
Conv2D-3 & 3 x 3 x 4f & softplus \\
Maxpool2D-2 & 2 x 2 & \\
Flatten & & \\
Fully connected-1 & 500 & softplus \\
Dropout-1 (0.5) & & \\
Fully connected-2 & 100 & softplus \\
Dropout-2 (0.25) & & \\
Fully connected-3 & 20 & softplus \\
Fully connected-4 & 1 & softplus \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Architecture of neural network for MNIST regression

The teacher and student networks follow the same architecture, except that the number of filters \(f\) for each convolutional layer in the teacher network is higher than that in the student network. \(f\) is chosen to be 10 for the teacher network and 5 for the student network. The log hyperbolic cosine (Log-Cosh) loss was used instead of mean squared error as the loss function to improve training [23].
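For reference, a minimal PyTorch sketch of the Log-Cosh objective just mentioned (our own illustration):

```python
import torch

def log_cosh_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Log-Cosh behaves like squared error for small residuals and like the
    # absolute error for large ones, making training less outlier-sensitive.
    return torch.log(torch.cosh(pred - target)).mean()
```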
For the generator network, the number of rounds for training the generator per epoch was also set to 20. ### Case study on protein solubility prediction A bioinformatics problem, predicting continuous protein solubility value with the constituent amino acids [10], was used as a case study to test the effectiveness of data-free knowledge distillation for regression on a real-world scientific problem. Predicting continuous solubility value is useful for _in-silico_ screening and design of proteins for industrial applications. We also want to test how the method can be used when the gradients of the teacher model are not available. For example, many bioinformatics tools such as protein solubility prediction are hosted on servers that allow users to query proteins and obtain predictions. However, both the model and data used to train the model are not available to the user. To recreate the model, data-free knowledge distillation without gradient access to the teacher model is required. If gradient information of the teacher model is unavailable, it is not possible to train the generative network as described in 3.2.2 directly. However, for direct optimization, it is possible to use metaheuristics optimization that does not require gradients instead of gradient descent. \begin{table} \begin{tabular}{l l l} \hline \hline **Name** & **Filters/units** & **Activation function** \\ \hline Conv2D-1 & 3 x 3 x f & ReLU \\ Conv2D-2 & 3 x 3 x 2f & softplus \\ Maxpool2D-1 & 2 x 2 & \\ Conv2D-3 & 3 x 3 x 4f & softplus \\ Maxpool2D-2 & 2 x 2 & \\ Flatten & & \\ Fully connected-1 & 500 & softplus \\ Dropout-1 (0.5) & & \\ Fully connected-1 & 100 & softplus \\ Dropout-2 (0.25) & & \\ Fully connected-1 & 20 & softplus \\ Fully connected-1 & 1 & softplus \\ \hline \hline \end{tabular} \end{table} Table 2: Architecture of neural network for MNIST regression The dataset used contains 3148 proteins with solubility represented as a continuous value between 0 - 1 from the eSol database (Niwa et al., 2009). The input features are the proportion of each of the 20 amino acids within the protein sequence. 2500 proteins are selected for the training set, and the remaining as test set. The teacher model used is a support vector machine, which represents the black-box teacher model that contains no gradient information and only output prediction value is available. As in the MNIST example, we introduce diversity in the predicted value by the teacher model on \(x_{g}\) with a penalty term on distance away from a random \(y\) value sampled for every batch. \[y_{rand}\sim\{n\in\mathbb{R}^{+}:0\leq n\leq 1\}\] \[L_{G}(x_{g})=-\epsilon~{}(T(x_{g})-S_{\theta}(x_{g}))^{2}+(1-\epsilon)(T(x_{g} )-y_{rand})^{2} \tag{9}\] The student model is made up of a fully connected Gaussian kernel radial basis function layer with output size of 100, followed by a fully connected linear layer that outputs the prediction. For training the student models in both baseline and direct optimization method, RMSProp optimizer is used for gradient descent, with a learning rate of \(10^{-3}\) and weight decay regularization of \(10^{-6}\). Batch size \(m\) is set to be 50 with decreasing \(\alpha\) schedule. Random sampling was used for training baseline model and providing initial points for direct optimization method. The mean \(\mu\) and standard deviation \(\sigma\) of each amino acid feature is calculated from the training dataset and is used to generate random datapoints with a normal distribution \(\mathcal{N}(\mu,\sigma)\) clipped between 0 - 1. 
The feature values are then normalized such that the value sums to 1. This is done as the features which are proportion of each of the 20 amino acids within the protein sequence must sum to 1. For the direct optimization method to generate synthetic data, differential evolution algorithm (Storn and Price, 1997) with 25 iterations of _best2bin_ strategy was used, with initial points generated with the random sampling just described. ## 4 Results ### Properties of synthetic data generated We first investigate the properties of the synthetic data generated by various methods, namely the student loss value of the synthetic data, and the distribution of the synthetic data. #### 4.1.1 Student loss value of synthetic data Intuitively, the goal of the synthetic data generation process is to generate data that gives large differences in student and teacher prediction (i.e. student loss \(L_{S}\) in equation 1) in the hope that by learning to correct these large mistakes, the student model is able to learn faster and better mimic the outputs of the teacher model. To verify the actual behavior of the various methods at achieving this goal, we compare the student loss values of synthetic data generated by the various methods at different stages of training a student model with random samples: when student is first randomly initialized at \(0^{\text{th}}\) epoch, during the middle stage of training at the \(50^{\text{th}}\) and \(100^{\text{th}}\) epoch, and when the student model has converged at the \(500^{\text{th}}\) epoch. The results shown in Figure 4 are for Indooroc dataset. As expected, it can be observed that the synthetic data generated by the generator method and the direct optimization method have higher student loss than random Gaussian samples at all stages of training. Compared to the direct optimization method, the generator method tends to generate data with smaller loss at the early stages of training, and larger loss at later stages of training. Directly optimizing with metaheuristics algorithms, in this case differential evolution, appears to also produce synthetic data with high loss. However, the running speed of metaheuristics algorithms is much slower than gradient descent and is not ideal practically unless gradient information is unavailable. #### 4.1.2 Distribution of synthetic data Synthetic data generated should reasonably overlap with the underlying distribution. Out of distribution data generated may either be not useful or even detrimental to model performance on test data. Ideally the synthetic data generated should also be well spread out from each other rather than clustered closely together to allow for better coverage of the data distribution. To verify the actual behavior of the generator and direct optimization methods at achieving this goal, we visualize the distribution of synthetic datapoints generated by the generator method and direct optimization (gradient descent) method at different stages of training using plots of 2D UMAP (Uniform Manifold Approximation and Projections) shown in Figure 5. It is observed that the synthetic data generated by the generator approach tends to converge around one or two tight clusters, leaving the rest of the input space untouched. Even though the direct optimization approach also tends to have some datapoints concentrated at a few clusters, the rest of the datapoints tends to be much better spread out in the input space, while still maintaining similarity with real data. 
This suggests that the greater diversity of synthetic data generated with direct optimization should be helpful for training the student model. It is also observed that at the later stage of training, many more datapoints generated by the generator method cluster at regions where there are no real datapoints, compared to datapoints generated by the direct optimization method. This may explain the larger student loss for the generator method than for the direct optimization method at the later stage of training.

Figure 4: Boxplot of student loss of synthetic data at different epochs

This suggests that the decreasing schedule for the sample weight parameter \(\alpha\), which controls how much the generated data \(x_{g}\) influence the training loss compared to the random samples \(x_{p}\), likely plays a much more important role when using the generator method: at the later stage of training, \(x_{g}\) generated by the generator method will likely deviate more from the underlying distribution and may lead to negative learning, which necessitates a smaller weight \(\alpha\). We have experimented and found that directly optimizing for 2 steps with a step size of \(10^{-1}\) generates synthetic data that do not deviate much from the underlying distribution while still providing a substantially higher student loss than random samples. Hence, these two hyperparameters were selected for the direct optimization method.

Figure 5: Distribution of synthetic datapoints at different epochs

### Comparison of different methods for data-free distillation on regression datasets

Tables 3 and 4 show the comparison of root mean squared error (RMSE) for 5 methods of data-free distillation using student sizes of 25 and 50, respectively. For the generator method and the direct optimization method, both a decreasing \(\alpha\) schedule and an \(\alpha\) value of 1 are tested. An \(\alpha\) value of 1 means that the training uses the generated synthetic data \(x_{g}\) entirely, without any randomly sampled datapoints. Comparing the results for student model sizes of 25 and 50 hidden units, it is observed that with an increase in student model size, the RMSE is lower for all datasets due to the greater representation power of the student model. For most of the datasets tested, the direct optimization method achieves the lowest RMSE and most closely matches the performance of the teacher model. Compared against random sampling, direct optimization with a decreasing \(\alpha\) achieves lower RMSE on 6 out of 7 datasets. Compared against the generator method with a decreasing \(\alpha\), direct optimization with a decreasing \(\alpha\) achieves lower RMSE on 5 out of 7 datasets. When \(\alpha\) is set to 1, we observe a substantial increase in RMSE for the generator method.
However, for the direct optimization method, setting \(\alpha\) to 1 generally does not lead to much worse performance.

\begin{table}
\begin{tabular}{p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt}}
\hline
**Dataset** & **Teacher Model** & **Random Sampling** & **Generator; decreasing \(\alpha\)** & **Generator; \(\alpha\) = 1** & **Direct optimizer; decreasing \(\alpha\)** & **Direct optimizer; \(\alpha\) = 1** \\ \hline
Compactiv & 0.1441 \(\pm\) 0.0039 & 0.1588 \(\pm\) 0.0050 & 0.1606 \(\pm\) 0.0061 & 0.1693 \(\pm\) 0.0069 & **0.1562 \(\pm\) 0.0043** & 0.1599 \(\pm\) 0.0067 \\
Cpusmall & 0.1672 \(\pm\) 0.0031 & 0.1840 \(\pm\) 0.0065 & 0.1875 \(\pm\) 0.0070 & 0.1918 \(\pm\) 0.0101 & **0.1817 \(\pm\) 0.0042** & 0.1822 \(\pm\) 0.0048 \\
CTScan & 0.1058 \(\pm\) 0.0060 & 0.2248 \(\pm\) 0.0170 & 0.1601 \(\pm\) 0.0044 & 0.2091 \(\pm\) 0.0090 & 0.1649 \(\pm\) 0.0058 & 0.1593 \(\pm\) 0.0054 \\
Indoorloc & 0.0847 \(\pm\) 0.0018 & 0.105 \(\pm\) 0.0051 & 0.1034 \(\pm\) 0.0034 & 0.1629 \(\pm\) 0.0134 & **0.0944 \(\pm\) 0.0015** & 0.0957 \(\pm\) 0.0035 \\
Mv & 0.0236 \(\pm\) 0.0022 & 0.0250 \(\pm\) 0.0019 & 0.0255 \(\pm\) 0.0016 & 0.00428 \(\pm\) 0.0045 & 0.0252 \(\pm\) 0.0016 & 0.0284 \(\pm\) 0.0017 \\
Pole & 0.1549 \(\pm\) 0.0064 & 0.2893 \(\pm\) 0.0141 & **0.2748 \(\pm\) 0.0034** & 0.2836 \(\pm\) 0.0304 & 0.3523 \(\pm\) 0. & \\ \hline
\end{tabular}
\end{table}
Table 3: RMSE results achieved with different methods for student model size of 25

\begin{table}
\begin{tabular}{p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt} p{42.7pt}}
\hline
**Dataset** & **Teacher Model** & **Random Sampling** & **Generator; decreasing \(\alpha\)** & **Generator; \(\alpha\) = 1** & **Direct optimizer; decreasing \(\alpha\)** & **Direct optimizer; \(\alpha\) = 1** \\ \hline
Compactiv & 0.1450 \(\pm\) 0.0062 & 0.15534 \(\pm\) 0.0077 & 0.1551 \(\pm\) 0.0060 & 0.1837 \(\pm\) 0.0124 & **0.1514 \(\pm\) 0.0068** & 0.1531 \(\pm\) 0.0066 \\
Cpusmall & 0.1663 \(\pm\) 0.0037 & 0.1760 \(\pm\) 0.0043 & 0.1744 \(\pm\) 0.0040 & 0.1842 \(\pm\) 0.0079 & **0.1737 \(\pm\) 0.0027** & **0.1737 \(\pm\) 0.0049** \\
CTScan & 0.1032 \(\pm\) 0.0048 & 0.1980 \(\pm\) 0.0111 & 0.1458 \(\pm\) 0.0058 & 0.2165 \(\pm\) 0.0092 & 0.1320 \(\pm\) 0.0047 & 0.1316 \(\pm\) 0.0050 \\
Indoorloc & 0.0844 \(\pm\) 0.0039 & 0.0965 \(\pm\) 0.0043 & 0.1020 \(\pm\) 0.0035 & 0.1549 \(\pm\) 0.0076 & **0.0890 \(\pm\) 0.0021** & 0.1599 \(\pm\) 0.0027 \\
Mv & 0.0226 \(\pm\) 0.0027 & 0.0237 \(\pm\) 0.0023 & 0.0023 \(\pm\) 0.0019 & 0.0059 \(\pm\) 0.0059 & **0.0021 \(\pm\) 0.0022** & 0.1320 \(\pm\) 0.0022 \\
Pole & 0.1539 \(\pm\) 0.0055 & 0.2163 \(\pm\) 0.0143 & **0.1964 \(\pm\) 0.0074** & 0.2094 \(\pm\) 0.0059 & 0.2092 \(\pm\) 0.0114 & 0.2092 \(\pm\) 0.0048 \\ \hline
\end{tabular}
\end{table}
Table 4: RMSE results achieved with different methods for student model size of 50

This matches our hypothesis that a decreasing \(\alpha\) schedule is much more important for the generator method, as the synthetic datapoints generated tend to deviate more from the underlying distribution at the later stage of training. Compared with the generator method with a decreasing \(\alpha\), direct optimization with \(\alpha=1\) (i.e., training on \(x_{g}\) only) achieves lower RMSE on 5 out of 7 datasets. Figure 6 shows the RMSE on the validation set over the course of training of the student model of size 50. The direct optimization method shows a faster decrease in RMSE and a generally more stable learning behavior than the generator method.
We also examine the student loss on \(x_{g}\) for the two models where \(\alpha=1\). As seen in Figure 7, the generator method often produces unexpectedly large losses during training that could result in negative learning for the model, whereas the direct optimization method generally produces a stable and consistently decreasing loss.

Figure 7: Student loss on synthetic data \(x_{g}\) against training epochs

### Comparison of different methods for data-free distillation on MNIST

We experimented with different settings of the \(\beta\), \(\gamma\) and \(\epsilon\) weights for the various components in the loss function (equation 8) and found that a low \(\beta\) value (set to \(10^{-6}\)), a high \(\gamma\) value (set to 1) and a low \(\epsilon\) value (set to \(10^{-6}\)) provide good regularization that encourages the synthetic data generated to be diverse and to resemble the real data distribution more closely. Table 5 below shows the comparison of mean absolute error (MAE) achieved by training the student model with synthetic data sampled randomly, generated by the generator method, and generated by the direct optimization method. Results are averaged over 5 runs.

\begin{table} \begin{tabular}{l l l l} \hline **Teacher Model** & **Random Sampling** & **Generator** & **Direct optimizer** \\ \hline 0.157 & 2.872 \(\pm\) 0.052 & 2.422 \(\pm\) 0.027 & **1.179 \(\pm\) 0.132** \\ \hline \end{tabular} \end{table} Table 5: MAE results achieved with different methods on MNIST regression

Note that the best performing random model, one that outputs a constant value of 4.5, would give a MAE of approximately 2.5 for a class-balanced test set. We can observe that randomly sampled synthetic data trains a student model that performs worse than a random prediction, while the generator method is only slightly better than random prediction. The direct optimization method was able to provide a substantial improvement in performance compared to the other methods. We examine samples of the synthetic data generated by each method in Figure 8. It is observed that the direct optimization method generates synthetic data closer to what appears to be handwritten digits compared to the other methods.

Figure 8: Samples of a synthetic image generated by different methods

We also examine the histograms of predicted values by the teacher network on a batch of 50 synthetically generated datapoints in Figure 9. It is observed that the direct optimization method generates samples with the most diversity while maintaining closeness to integer values.

Figure 9: Histograms of predicted values by teacher networks for synthetic data generated by different methods

The closer resemblance to the real data distribution is likely the reason the student model trained on those synthetic data distills more useful knowledge from the teacher model and outperforms the other methods. This experiment demonstrates that the direct optimization method can be easily and effectively adapted for data-free knowledge distillation for regression tasks on image inputs, for different types of student or generator loss functions, and for multilayer networks with non-MLP architectures, which potentially addresses the limitation raised in (Kang and Kang, 2021) on the poor applicability of the generator method for data-free knowledge distillation of multilayer networks for regression.

### Case study of data-free distillation for protein solubility predictions

We experimented with different settings of the \(\epsilon\) value and found that a value of 0.05 encourages the synthetic data generated to be diverse and provides the best training results. Table 6 shows the comparison of root mean squared error (RMSE) against the teacher model. The performance obtained for the teacher model is comparable with that obtained in the original study (Han et al., 2019) on a regression predictive model for protein solubility.
Results are averaged over 5 runs. Note that a random model that outputs values uniformly drawn from 0 to 1 will give a RMSE of approximately 0.43.

\begin{table} \begin{tabular}{l l l} \hline **Teacher Model (SVM)** & **Random Sampling** & **Direct optimizer** \\ \hline 0.250 & 0.287 \(\pm\) 0.001 & **0.267 \(\pm\) 0.005** \\ \hline \end{tabular} \end{table} Table 6: RMSE results achieved with different methods on protein solubility prediction

It is observed that the direct optimization method with differential evolution outperforms random sampling significantly and approaches the RMSE of the teacher model. This case study demonstrated that direct optimization can be easily and effectively applied to cases where gradient information from the teacher model is not available, or where the teacher model is not a differentiable neural network at all, such as the support vector machine teacher model in this case. This is achieved by simply swapping gradient descent for a metaheuristics algorithm in the direct optimization step. In contrast, using a conventional neural network generative model for synthetic data generation is not possible here, as training the generative model relies on gradients of both the teacher and student models. The limitation, however, is that metaheuristics optimization methods tend to be much slower than gradient-based optimization and thus incur a significant increase in runtime over the baseline method during training. This may be improved by using a faster metaheuristics algorithm, or one that has been optimized to run on GPU, but that is beyond the scope of this paper.
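As a sketch of this gradient-free variant, the inner gradient-ascent step can be replaced with SciPy's differential evolution; `teacher_predict` could be, e.g., a scikit-learn SVM's `predict`, and the \([0,1]\) bounds and the interface are our assumptions rather than details from the paper's code.

```python
import numpy as np
from scipy.optimize import differential_evolution

def generate_point_de(teacher_predict, student_predict, dim, maxiter=20):
    """Find one synthetic input maximizing student-teacher disagreement
    without any gradient information from the teacher.
    teacher_predict / student_predict map a (1, dim) array to a scalar."""
    def negative_disagreement(x):
        x = x.reshape(1, -1)
        # Negate so that minimizing => maximizing the squared gap.
        return -float((student_predict(x) - teacher_predict(x)) ** 2)

    bounds = [(0.0, 1.0)] * dim  # assumed feature scaling
    result = differential_evolution(negative_disagreement, bounds,
                                    maxiter=maxiter)
    return result.x
```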
## 5 Conclusion

In this study, we investigated the behavior of various synthetic data generation methods, including random sampling and using an adversarially trained generator. We propose a straightforward synthetic data generation strategy that optimizes the difference between the student and teacher model predictions directly, with additional flexibility to incorporate arbitrary regularization terms that capture properties of the data. We show that synthetic data generated by an adversarially trained generator tends not to represent the underlying data distribution well, requiring training to be supplemented with random samples and the loss contributions to be balanced. Our results demonstrate that the proposed strategy of direct optimization generates synthetic data with higher loss than random samples while deviating less from the underlying distribution than the generator method. This allows the student model to learn better and emulate the performance of the teacher model more closely. In the experiments, the proposed method achieves lower RMSE than the baseline and generator methods for most regression datasets tested. We also demonstrate the applicability and flexibility of the method applied to image inputs and deeper convolutional networks on the MNIST dataset, as well as performing distillation on a non-differentiable model in the case study for predicting protein solubility.

We hope that this study furthers the understanding of data-free distillation for regression and highlights the key role of the synthetic data generation process in allowing the student model to effectively distill the teacher model. All code and data used in this study are available at [https://github.com/zhoutianxun/data_free_KD_regression](https://github.com/zhoutianxun/data_free_KD_regression).
2306.09863
Transferability of Winning Lottery Tickets in Neural Network Differential Equation Solvers
Recent work has shown that renormalisation group theory is a useful framework with which to describe the process of pruning neural networks via iterative magnitude pruning. This report formally describes the link between RG theory and IMP and extends previous results around the Lottery Ticket Hypothesis and Elastic Lottery Hypothesis to Hamiltonian Neural Networks for solving differential equations. We find lottery tickets for two Hamiltonian Neural Networks and demonstrate transferability between the two systems, with accuracy being dependent on integration times. The universality of the two systems is then analysed using tools from an RG perspective.
Edward Prideaux-Ghee
2023-06-16T14:18:47Z
http://arxiv.org/abs/2306.09863v1
# Transferability of Winning Lottery Tickets in Neural Network Differential Equation Solvers ###### Abstract Recent work has shown that renormalisation group theory is a useful framework with which to describe the process of pruning neural networks via iterative magnitude pruning. This report formally describes the link between RG theory and IMP and extends previous results around the Lottery Ticket Hypothesis and Elastic Lottery Hypothesis to Hamiltonian Neural Networks for solving differential equations. We find lottery tickets for two Hamiltonian Neural Networks and demonstrate transferability between the two systems, with accuracy being dependent on integration times. The universality of the two systems is then analysed using tools from an RG perspective. ###### Contents * 1 Introduction * 2 Renormalisation Group and Scaling * 2.1 Ising Model and Block Spins * 2.2 RG * 2.3 Fixed points and relevant parameters * 2.4 Scaling of Variables * 2.5 Universality Classes * 3 DNNs and Iterative Magnitude Pruning (IMP) * 4 IMP as an RG * 4.1 Proof IMP is an RG scheme * 4.2 IMP Flow * 5 Neural Network DE Solvers * 5.1 Nonlinear Oscillator * 5.2 IMP Flow * 5.3 Chaotic Hénon-Heiles * 5.4 Universality * 6 Conclusions

## 1 Introduction

Within the past few decades, machine learning has developed into an ever more popular and successful method for solving a variety of problems in fields such as computer vision and natural language processing (NLP). A subset of these methods is deep neural networks (DNNs), in which potentially billions of parameters are trained to find an optimal solution to a problem [1]. The large size of these networks means they demand extensive computational power as well as a lot of memory. However, many DNNs are over-parameterised [2], which means that there is unnecessary computing power being used. This has led to research into the Lottery Ticket Hypothesis [3] - a form of transfer learning - which states: **Lottery Ticket Hypothesis.**_A dense neural network contains a sparse subnetwork that, when trained under the same initialisation, can perform with similar or even better accuracy than the original network when trained for the same number of epochs._ One current method for finding sparse subnetworks with potentially similar performance to the full network is through a process of pruning: removing weights from the network. There are many different types of pruning procedures, such as magnitude-based, random and single-shot [4],[5],[6],[7],[8],[9], with very little consensus on which methods should be used in different situations. In this report, we will focus on one common technique - Iterative Magnitude Pruning (IMP) - as it has been shown to give a "state-of-the-art sparsity-accuracy trade-off" [10]. Winning lottery tickets have been found in several areas such as computer vision [3],[11],[12],[13] and natural language processing (NLP) [14]; however, the methods used to find them are very computationally expensive. Therefore, research has been done into the universality of winning lottery tickets across different tasks. The existence of universal lottery tickets has been proven [15], suggesting that they could be a general occurrence across several fields. In fact, transferability of winning lottery tickets has been shown in several fields [16],[17],[18], including across networks with different architectures [19].
Until recently, there was very little theory around the transferability of lottery tickets, and therefore transfer experiments were the only indication as to whether it would be successful or not. However, work by Redman et al.[20] has provided a link between sparsifying neural networks and renormalisation group (RG) theory. Specifically, it has been shown that IMP is a renormalisation group scheme and therefore tools from RG theory can be used to study the LTH. RG theory has historically been used predominantly in statistical mechanics, explaining universal behaviour in thermodynamic systems during a phase transition [21]. One important parallel between RG theory and pruning DNNs is the concept of power-law scaling. During a phase transition, variables scale via a power-law relationship with a critical exponent. The same behaviour has been observed in IMP [22], with error scaling via the following relationship within a critical region of parameter density \[\epsilon\sim cd^{-\gamma} \tag{1}\] where \(\epsilon\) is error, \(d\) is density and \(\gamma\) is a constant. One area where the LTH and universality have been less explored is that of scientific solvers. These address a range of problems, such as handling high dimensionality [23],[24],[25],[26], and have been applied to a wide range of situations [27],[28],[29],[30],[31],[32]. Specifically, we will be looking into neural network differential equation (DE) solvers. These aim to solve an optimisation problem to provide an analytic, closed-form differentiable function as a solution [33]. These types of solvers have been studied individually [34],[35],[36],[37], but there exists very little knowledge of universality between solvers. As an example, we take Hamiltonian Neural Networks [36], as they provide useful test cases which should hopefully give an insight into the wider nature of scientific solvers.

## 2 Renormalisation Group and Scaling

In this section, we aim to formally define a renormalisation group (RG) so that we may later use its related tools for our case of understanding IMP and the LTH. First, we use the example of block spins in the 2D Ising model from thermodynamics to introduce the concept of power-law scaling of macro-observables near a phase transition [38]. This idea is then developed further to give the concept of the renormalisation group and universality. The development of this theory will follow the work of Goldenfeld (Lectures on Phase Transitions and the Renormalization Group, chapter 9, p229-p256) [21], which views the renormalisation group from a thermodynamic phase-transition perspective. For a quantum field theory perspective see [39]. Conceptually, the procedure of the renormalisation group as conceived by Kadanoff contains three steps. The first step is coarse-graining, which involves reducing the resolution of the system by increasing the smallest length scale from \(a\) to \(la>a\) [40]. Then, in order to restore the original resolution, the system is rescaled by reducing all length scales by a factor \(l\). The final stage is to renormalise the system so that variables vary on the same scale as originally. ### Ising Model and Block Spins Consider a system of N spins arranged on a 2-dimensional square lattice with spacing \(a\). For now, we make the assumptions of locality and rotational and translational symmetry in the Hamiltonian. That is, each spin only interacts with an external magnetic field and its nearest neighbours, with couplings determined by the constants \(k_{1}\) and \(k_{2}\) respectively.
In this case, we have the Hamiltonian of the system [21]: \[\mathcal{H}(\textbf{s,k})=-\sum_{i}k_{1}s_{i}-\sum_{\langle i,j\rangle}k_{2}s_{i}s_{j} \tag{2}\] Where \(s_{i}\) are the spins \(s_{i}\in\{-1,+1\}\), \(k_{1}\) and \(k_{2}\) are the strengths of the coupling constants and \(\langle\cdot,\cdot\rangle\) denotes the nearest neighbours in the lattice. This is the simplest model except for the trivial case where atoms only interact with themselves. As we are interested in behaviour near critical points and phase transitions for our analysis of IMP, we denote the singular part of the free energy of the system near the phase transition as \(f_{s}(t,h)\) [21]. Within the vicinity of the phase transition, we must consider quantities on the length scale of the correlation length \(\xi\), which describes the length scale on which variables are correlated. The system can then be investigated at different scales by using a process of "coarse-graining" to view a block of multiple spins as one single spin with properties aggregated from the constituent sub-spins [21]. If we take a block of spins of side length \(al\), we then have a system of \(Nl^{-2}\) blocks, each containing \(l^{2}\) individual spins. We define this block-spin transformation in the following way: \[s^{\prime}=\frac{1}{|\tilde{m}_{l}|}\frac{1}{l^{2}}\sum_{i\in I}s_{i} \tag{3}\] where \(I\) is the set of indices of spins in the new block and \(\tilde{m}_{l}\) is the average magnetisation of the block, defined by \[\tilde{m}_{l}=\frac{1}{l^{2}}\sum_{i\in I}\langle s_{i}\rangle. \tag{4}\] Re-normalising the block spins like this ensures that they can take the same values as the original individual spins (\(\pm 1\)). Now we assume that, similarly to the original system, each block spin interacts only with its nearest neighbouring blocks and the external field. Hence we can define new coupling constants \(k^{\prime}_{i}\) which determine the nature of these interactions. This assumption leads us to finding the new effective Hamiltonian of the system to be: \[\mathcal{H}(\textbf{s',k'})=-k^{\prime}_{1}\sum_{i}s^{\prime}_{i}-k^{\prime}_{2}\sum_{\langle i,j\rangle}s^{\prime}_{i}s^{\prime}_{j} \tag{5}\] which is of the same form as the original Hamiltonian. Considering the correlation length \(\xi\), initially we had \(\xi_{1}\) measured on a scale of the lattice spacing \(a\). After performing the coarse-graining, we now have \(\xi_{l}\), which is measured on a scale of the new spacing between blocks \(la\). This therefore gives the relation \(\xi=\xi_{l}la=\xi_{1}a\), which gives: \[\xi_{l}=\xi_{1}/l \tag{6}\] One key property near the critical point during a phase transition is the divergence of the correlation length \(\xi\rightarrow\infty\). As \(l>1\), \(\xi_{l}<\xi_{1}\), which means the new system is further from the critical point than the original system, and hence the new system is at an effective reduced temperature \(t^{\prime}\) and magnetisation \(h^{\prime}\) [21]. Combining these results, the form of the free energy of the effective system \(f_{s}(t^{\prime},h^{\prime})\) is the same as that of the original system, but with new reduced temperature and magnetisation variables. \[f_{s}(t^{\prime},h^{\prime})=l^{2}f_{s}(t,h) \tag{7}\] Therefore, the process of coarse-graining has retained the structure of the system, although with scaled parameters.
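As a toy illustration of this block-spin step, the following sketch coarse-grains a random 2D Ising configuration with the majority rule (the sign-rule projection that reappears in equation (19) below); the block size and lattice size here are arbitrary choices of ours.

```python
import numpy as np

def block_spin(spins, b=3):
    """Majority-rule coarse-graining of a 2D Ising configuration:
    each b x b block is replaced by the sign of its total spin
    (odd b guarantees no ties)."""
    n = spins.shape[0] // b
    blocks = spins[:n * b, :n * b].reshape(n, b, n, b)
    sums = blocks.sum(axis=(1, 3))       # total spin of each block
    return np.where(sums >= 0, 1, -1)

rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(81, 81))  # high-temperature random lattice
coarse = block_spin(lattice, b=3)             # 27 x 27 lattice of block spins
```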
So, denoting this block-spin transformation as \(\mathcal{R}\), we can describe the effect on the Hamiltonian as: \[\mathcal{R}\mathcal{H}(\textbf{s,k})=\mathcal{H}(\textbf{s}^{\prime},\mathcal{T}\textbf{k})=\mathcal{H}(\textbf{s',k'}) \tag{8}\] Where \(s^{\prime},k^{\prime}\) are the new spins and coupling constants respectively, and the mapping of the coupling constants is denoted by \(\mathcal{T}:\mathbb{R}^{K}\rightarrow\mathbb{R}^{K}\). As \(\mathcal{R}\) is repeatedly applied, we get a flow of coupling constants, known as the RG flow, which is determined by the eigenvectors of \(\mathcal{T}\) when linearised near a fixed point. The corresponding eigenvalues are classified into three classes: relevant if \(\lambda_{i}>1\); irrelevant if \(\lambda_{i}<1\); or marginal if \(\lambda_{i}=1\). As our interest is in understanding the power-law scaling, we further assume that the temperature and magnetism in the critical region transform under coarse-graining by: \[t^{\prime}=tl^{y_{t}}\qquad y_{t}>0 \tag{9}\] \[h^{\prime}=hl^{y_{h}}\qquad y_{h}>0\] Inputting this into (7) gives us \[f_{s}(t,h)=l^{-2}f_{s}(tl^{y_{t}},hl^{y_{h}}) \tag{10}\] As there are no restrictions on the level of scaling being implemented (\(l\)), taking \(l\) such that \(|t|l^{y_{t}}=1\) allows for the convenient form: \[f_{s}(t,h)=|t|^{d/y_{t}}f_{s}(1,h|t|^{-y_{h}/y_{t}}) \tag{11}\] This is solely a function of \(h|t|^{-y_{h}/y_{t}}\) and hence can be written \[f_{s}(t,h)=|t|^{d/y_{t}}F_{f}(h|t|^{-y_{h}/y_{t}}) \tag{12}\] where \(F_{f}(x)=f_{s}(1,x)\). This can therefore be written in the form of the static scaling hypothesis by denoting \(2-\alpha=\frac{d}{y_{t}}\) and \(\Delta=\frac{y_{h}}{y_{t}}\): \[f_{s}(t,h)=|t|^{2-\alpha}F_{f}(h|t|^{-\Delta}) \tag{13}\] This is a specific example of how a coarse-graining procedure can lead to the observed scaling of macro-observables by discounting the irrelevant couplings between degrees of freedom on a smaller scale. ### RG The above example of block spins in the 2D Ising model demonstrates how it makes sense to take this form of the scaling for certain variables when near the critical point. However, the assumptions made cannot make sense when extended to different situations. For example, consider the case where spins do not interact with their neighbours but do interact with spins further away. In this case, it would not make sense for block spins, comprised of multiple spins, not to interact with their neighbours. Therefore, the concepts of renormalisation group theory are now developed for a more general Hamiltonian. In order to concretely define a renormalisation group, the coarse-graining process used above must be formally analysed, and the origin of singular behaviour must be understood. As in the block-spin case, the fundamental concept is that, given a system with interactions on a scale \(a\), we take blocks of length \(la\); then rescaling and normalising the resulting system gives one similar to the original system with respect to the degrees of freedom. This gives us the characteristic that slightly different systems (e.g. interactions between nearest neighbours/next-nearest neighbours) can have the same effective Hamiltonians after coarse-graining, which is important in the context of universality classes. For example, in the block-spin case where individual spins do not interact with their nearest neighbours, the resulting system after coarse-graining could be similar to one where individual spins do.
Now we consider Hamiltonians of the general form \[\mathcal{H}=\sum_{n}K_{n}\Theta_{n}\{S\} \tag{14}\] with coupling constants \(K_{n}\), and \(\Theta_{n}\) local operators as functions of the degrees of freedom \(\{S\}\) [21]. Now we perform coarse-graining similarly to before, collecting degrees of freedom together in a linear block of length \(la\). We call this general transformation a "renormalisation group transformation" \(R_{l}\). As before, we are free to choose the level of coarse-graining \(l\), so we can denote the transformation of coupling constants: \[[\mathbf{k}^{\prime}]=\mathcal{R}_{l}[\mathbf{k}] \tag{15}\] The renormalisation transformations \(R_{l}\) in fact form a semi-group, where successive transformations with coarse-graining scales \(l_{1},l_{2}\) are equivalent to a single transformation with scale \(l_{1}\cdot l_{2}\) [21]: \[R_{l_{1}l_{2}}[K]=R_{l_{2}}\circ R_{l_{1}}[K] \tag{16}\] In order to calculate \(R_{l}\), we now properly define the coarse-graining procedure. First we will need to utilise the partition function \(Z_{N}\) and the quantity \(g\) relating to the free energy per degree of freedom. \[Z_{N}[K]=Tr\ e^{\mathcal{H}}\qquad g[K]=\frac{1}{N}\log Z_{N}[K] \tag{17}\] In order to perform the coarse-graining process and reduce the number of degrees of freedom, we perform a partial trace over the degrees of freedom. \[e^{\mathcal{H}^{\prime}_{N^{\prime}}\{[K^{\prime}],S^{\prime}_{I}\}}=Tr^{\prime}_{\{S_{i}\}}e^{\mathcal{H}_{N}\{[K],S_{i}\}}=Tr_{\{S_{i}\}}\mathcal{P}(S_{i},S^{\prime}_{I})e^{\mathcal{H}_{N}\{[K],S_{i}\}} \tag{18}\] Where \(Tr_{\{S_{i}\}}\) is the trace operator over the values that the \(S_{i}\) can take, \(\{\pm 1\}\). Here, to allow the trace to be unrestricted, a projection operator \(P(S_{i},S^{\prime}_{I})\) is used. This is constructed in such a way that the block degrees of freedom \(S^{\prime}_{I}\) take the same values as the original degrees of freedom, thus maintaining the characteristics of the original system. As an example of how the projection operator is constructed, we use the case of the 2D Ising system explained previously. Having Ising spins on a 2D lattice, we use an RG transformation with blocks of length \((2l+1)a\) so that there is an odd number of spins \((2l+1)^{2}\) in each block. In order for the block spins to have the same values as the original degrees of freedom, we define \[S^{\prime}_{I}=sign(\sum_{i\in I}S_{i})=\pm 1 \tag{19}\] Therefore, there is the associated projection operator \[P(S_{i},S^{\prime}_{I})=\Pi_{I}\delta(S^{\prime}_{I}-sign(\Sigma_{i\in I}S_{i})) \tag{20}\] where \(\delta\) is a Kronecker delta function. Clearly this is not the only RG transformation that ensures \(S^{\prime}_{I}=\pm 1\). However, the projection operator must satisfy the following three properties [21]: * \(P(s_{i},s^{\prime}_{I})\geq 0\) * \(P(s_{i},s^{\prime}_{I})\) respects the symmetries of the system * \(\sum_{\{s^{\prime}_{I}\}}P(s_{i},s^{\prime}_{I})=1\) Condition (i) is necessary to ensure that the exponential of the effective Hamiltonian satisfies \(e^{\mathcal{H}^{\prime}_{N^{\prime}}\{[K^{\prime}],S^{\prime}_{I}\}}\geq 0\), and thus \(\mathcal{H}^{\prime}\) is well defined as the effective block-spin Hamiltonian. The second condition (ii) guarantees that there exist no new forms of couplings or symmetries in the new system that were not possible in the original system, i.e. the effective Hamiltonian can be written in the same form as originally, but with different values of the coefficients. That is:
Given a system of N degrees of freedom described by the Hamiltonian: \[\mathcal{H}_{N}=NK_{0}+h\sum_{i}S_{i}+K_{1}\sum_{i,j}S_{i}S_{j}+... \tag{21}\] the effective Hamiltonian can be written in an equivalent way: \[\mathcal{H}^{\prime}_{N^{\prime}}=N^{\prime}K^{\prime}_{0}+h^{\prime}\sum_{I}S^{\prime}_{I}+K^{\prime}_{1}\sum_{I,J}S^{\prime}_{I}S^{\prime}_{J}+... \tag{22}\] This can include cases where certain coefficients in the original system are zero but are not zero in the effective system. For example, in a spin system where every pair of spins interacts but there are no interactions of spin triples (\(K_{2}\sum_{I,J,K}S_{I}S_{J}S_{K},\ K_{2}=0\)), the effective system could have interactions of this type (\(K^{\prime}_{2}\neq 0\)). Finally, condition (iii) ensures that there is a unique, well-defined mapping of old degrees of freedom to the new ones. Furthermore, in the probabilistic case, this condition ensures the mapping is well defined by ensuring the sum of all the probabilities of transformed variables is 1. Furthermore, (iii) means that the partition function \(Z_{N}\) is invariant under the RG transformation [21]: \[\begin{split} Z^{\prime}_{N^{\prime}}[K^{\prime}]=& Tr_{\{S^{\prime}_{I}\}}\ e^{\mathcal{H}^{\prime}_{N^{\prime}}\{[K^{\prime}],S^{\prime}_{I}\}}\\ =& Tr_{\{S^{\prime}_{I}\}}Tr_{\{S_{i}\}}\ P(S_{i},S^{\prime}_{I})e^{\mathcal{H}_{N}\{[K],S_{i}\}}\\ =& Tr_{\{S_{i}\}}\ e^{\mathcal{H}_{N}\{[K],S_{i}\}}\cdot 1\\ =& Z_{N}[K]\end{split} \tag{23}\] As this is invariant, we can clearly see that the previously defined quantity \(g\) for the free energy follows the relation: \[\begin{split} g[K^{\prime}]=&\frac{1}{N^{\prime}}\log Z_{N^{\prime}}[K^{\prime}]\\ =&\frac{1}{l^{-d}N}\log Z_{N}[K]\\ =& l^{d}g[K]\end{split} \tag{24}\] This conserves the total free energy, as the free energy per degree of freedom is scaled by the factor \(l^{d}\) while the number of degrees of freedom is scaled by \(l^{-d}\). One benefit of RG theory is that it is easier to establish approximate parameters \([K^{\prime}]\) than to calculate the partition function. ### Fixed points and relevant parameters Now, in order to analyse the behaviour of systems after repeated applications of the RG transformations, we consider the "flow" of parameters \([K^{(n)}]\). The collection of these trajectories from all sets of initial parameters \([K^{(0)}]\) is known as the renormalisation group flow. It has been noted [21] that the trajectories of parameters are usually attracted to certain fixed points, near to which the systems demonstrate characteristic scaling behaviour. Denote a fixed point of the RG transformation \(R_{l}[K]\) in parameter space as \([K^{*}]\). It therefore has the property \[[K^{*}]=R_{l}[K^{*}] \tag{25}\] Looking at the correlation length at the fixed point, relation (6) gives \[\xi[K^{*}]=\xi[K^{*}]/l \tag{26}\] which implies \(\xi\) is either \(\infty\) or \(0\). These two cases are denoted as "critical" and "trivial" fixed points respectively. Within a vicinity of the fixed point, the parameters and Hamiltonian can be written \[K_{n}=K^{*}_{n}+\delta\hat{K}_{n},\qquad\quad\mathcal{H}=\mathcal{H}^{*}+\delta\hat{\mathcal{H}} \tag{27}\] where \(\delta\) is a small parameter.
Performing the RG transformation \(R_{l}\) gives \[R_{l}[K_{n}]=K^{\prime}_{n}=K^{*}_{n}+\delta\hat{K}^{\prime}_{n} \tag{28}\] where \(K^{\prime}_{n}\) can be represented by the Taylor expansion: \[K^{\prime}_{n}=K^{*}_{n}+\sum_{m}\frac{\partial K^{\prime}_{n}}{\partial K_{m}}\Big{|}_{K=K^{*}}\cdot\delta K_{m}+O((\delta K)^{2}) \tag{29}\] so \(\delta\hat{K}^{\prime}_{n}=\sum_{m}M_{nm}\delta K_{m}\), where \(M_{nm}=\frac{\partial K^{\prime}_{n}}{\partial K_{m}}\big{|}_{K=K^{*}}\) is the linearisation of the RG transformation near the fixed point. To analyse the RG flow in the vicinity of the fixed point \(K^{*}\), the eigenvalues and eigenvectors of this linearisation are required. Denote the eigenvalues of \(M^{(l)}\) (associated with transformation scale \(l\)) as \(\lambda^{(l)}_{i}\) with corresponding eigenvectors \(\mathbf{v}_{i}^{(l)}\). Due to the associativity of the RG transformation, we have the following property: \[\begin{split} M^{(l)}M^{(l^{\prime})}=M^{(ll^{\prime})}\\ \implies\lambda_{i}^{(l)}\lambda_{i}^{(l^{\prime})}=\lambda_{i}^{(ll^{\prime})}\end{split} \tag{30}\] Clearly we can see that \(\lambda_{i}^{(1)}=1\), as \(\lambda_{i}^{(l)}\lambda_{i}^{(1)}=\lambda_{i}^{(l)}\). Therefore, differentiating with respect to \(l^{\prime}\) and setting \(l^{\prime}=1\), we get the differential equation: \[\begin{split} l\frac{d\lambda^{(l)}}{dl}=y_{i}\lambda^{(l)}\\ \rightarrow&\lambda^{(l)}=l^{y_{i}}\end{split} \tag{31}\] where \(y_{i}\) is independent of \(l\) and is to be determined. The different values of \(y_{i}\) for each eigenvector are important, as they describe in which directions the components of \(\delta K\) grow or shrink. 1. \(y_{i}>0\) (\(|\lambda_{i}|>1\)) implies \(\delta K\) grows in the \(\mathbf{v}_{i}\) direction. 2. \(y_{i}=0\) (\(|\lambda_{i}|=1\)) implies \(\delta K\) is invariant in the \(\mathbf{v}_{i}\) direction. 3. \(y_{i}<0\) (\(|\lambda_{i}|<1\)) implies \(\delta K\) shrinks in the \(\mathbf{v}_{i}\) direction. In case (1) they are called "relevant" eigenvalues/eigenvectors; in case (2) they are called "marginal" eigenvalues/eigenvectors; and in case (3) they are called "irrelevant" eigenvalues/eigenvectors. Therefore, the relevant and irrelevant directions determine the flow of parameters near a fixed point, and hence the critical behaviour. When the initial point is slightly away from the critical manifold of a fixed point (in parameter space), it will begin to travel towards the fixed point until it gets repelled according to the relevant eigenvectors. This gives one part of the idea of universality, as different initial conditions in parameter space can develop the same critical behaviour when \(\mathcal{R}\) is repeatedly applied.
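This classification can also be carried out numerically: given any RG recursion in a finite parameter space and a fixed point found beforehand, the linearisation \(M_{nm}\) and its eigenvalues follow from finite differences. A sketch, assuming a toy callable map `R` acting on NumPy arrays (the map itself is a stand-in here, not a specific physical recursion):

```python
import numpy as np

def classify_rg_eigenvalues(R, K_star, eps=1e-6):
    """Linearize the RG map R around the fixed point K_star by central
    finite differences and classify each eigenvalue of M_nm = dR_n/dK_m."""
    n = len(K_star)
    M = np.zeros((n, n))
    for m in range(n):
        dK = np.zeros(n)
        dK[m] = eps
        M[:, m] = (R(K_star + dK) - R(K_star - dK)) / (2 * eps)
    for lam in np.linalg.eigvals(M):
        kind = ("relevant" if abs(lam) > 1 + 1e-9
                else "marginal" if abs(lam) > 1 - 1e-9
                else "irrelevant")
        print(f"|lambda| = {abs(lam):.4f} -> {kind}")
    return M
```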
### Scaling of Variables Now that the critical behaviour has been explained using RG theory, it can be seen how RG leads to the scaling behaviour around criticality. In the example of the Ising model, taking temperature and magnetism as \(T\) and \(H\), there are two relevant directions to consider: \(t\) and \(h\). As stated earlier, the singular part of the free energy density obeys the relation: \[f_{s}(t,h)=l^{-d}f_{s}(t^{\prime},h^{\prime})\] Additionally, \(T\) and \(H\) both undergo individual coarse-graining processes \(\mathcal{R}_{l}^{T}\) and \(\mathcal{R}_{l}^{H}\). Following the RG analysis, using the translated variables \[\begin{split}\Delta T=& T-T^{*}\\ \Delta H=& H-H^{*}\end{split} \tag{32}\] we find the linearisation of the RG transformation \[\begin{pmatrix}\Delta T^{\prime}\\ \Delta H^{\prime}\end{pmatrix}=M\begin{pmatrix}\Delta T\\ \Delta H\end{pmatrix}\] where \[M=\begin{pmatrix}\frac{\partial\mathcal{R}_{l}^{T}}{\partial T}&\frac{\partial\mathcal{R}_{l}^{T}}{\partial H}\\ \frac{\partial\mathcal{R}_{l}^{H}}{\partial T}&\frac{\partial\mathcal{R}_{l}^{H}}{\partial H}\end{pmatrix}_{T=T^{*},H=H^{*}}.\] Then, writing the eigenvalues as before (\(\lambda_{l}^{t}=l^{y_{t}}\), \(\lambda_{l}^{h}=l^{y_{h}}\)), the linearised RG transformation becomes \[\begin{pmatrix}t^{\prime}\\ h^{\prime}\end{pmatrix}=\begin{pmatrix}\lambda_{l}^{t}&0\\ 0&\lambda_{l}^{h}\end{pmatrix}\begin{pmatrix}t\\ h\end{pmatrix} \tag{33}\] Now considering the free energy and correlation length, it can be seen that the free energy evolves as \[f_{s}(t,h)=l^{-nd}f_{s}(t^{(n)},h^{(n)})=l^{-nd}f_{s}(l^{ny_{t}}t,l^{ny_{h}}h) \tag{34}\] which is of the same form as in the block-spin case (10). The correlation length evolves according to (6), hence after n iterations of the renormalisation group transformation \[\xi(t,h)=l^{n}\xi(l^{ny_{t}}t,l^{ny_{h}}h) \tag{35}\] As the only restriction on \(l\) is that \(l>0\), taking \(l^{n}=bt^{-1/y_{t}}\) for some arbitrary value \(b\) gives what is known as the static scaling hypothesis: \[f_{s}(t,h)=t^{d/y_{t}}b^{-d}f_{s}(b,h/t^{y_{h}/y_{t}}) \tag{36}\] The static scaling hypothesis is usually written in the form \[f_{s}(t,h)=t^{2-\alpha}F_{f}(\frac{h}{t^{\Delta}}) \tag{37}\] where \(F_{f}(x)=f_{s}(1,x)\), and \(\alpha,\Delta\) are critical exponents [21]. Therefore, RG has provided a method to approximate the critical exponents from the eigenvalues of the linearised RG transformation: \[\begin{split} 2-\alpha=&\frac{d}{y_{t}}\\ \Delta=&\frac{y_{h}}{y_{t}}\end{split} \tag{38}\] For example, for the exactly solved 2D Ising model one has \(y_{t}=1\) and \(y_{h}=15/8\) with \(d=2\), giving \(\alpha=0\) and \(\Delta=15/8\). ### Universality Classes As described in the LTH, we are very interested in the idea of transferability across multiple different systems in order to improve training time. Therefore, the concept of universality classes is now briefly discussed in more detail. It has been seen that iterating the renormalisation group transformation causes the coupling constants to flow towards a fixed point, irrespective of initial conditions. However, assuming the starting point was off the critical manifold, eventually the irrelevant directions will have been "iterated out" [41], leaving only the relevant directions controlling the macro-behaviour of the system. Therefore, different systems that share the same critical exponents will have the same relevant directions, and therefore demonstrate the same behaviour in this limit. ## 3 DNNs and Iterative Magnitude Pruning (IMP) Now that the surrounding RG theory has been introduced, it can be applied to the context of DNNs and pruning for winning lottery tickets. Therefore, the structures of neural networks and the mechanisms of pruning are now set up. The case that will be considered is that of a feed-forward DNN being pruned via iterative magnitude pruning (IMP). A feed-forward DNN consists of layers of neurons/units, with neurons in different layers being interconnected by weighted links. When an input is given to the first layer (input layer) of neurons, the following layers have values dependent on the values in the previous layer and the weighted connections.
Denoting the weight connecting unit \(i\) in layer \(l\) to unit \(j\) in layer \(l+1\) as \(w_{ij}\), the value of neuron \(j\) in layer \(l+1\) can be written \(a_{j}=h(\sum_{i}w_{ij}a_{i}+b)\), where \(b\) is an added bias term and \(h\) is a non-linear activation function (e.g. ReLU, \(\tanh\), ...). Once this process has continued through the network and reached the final/output layer, a final function can be applied to perform a task, such as data classification (e.g. a sigmoid). In cases where supervised learning is used, the performance of the network can be measured via a loss function comparing the output to a known result (e.g. the percentage of images correctly classified). However, in cases of unsupervised learning, the output accuracy can be quantified by comparing the performance to a desired result. This is controlled via a loss function \(\mathcal{L}\). Gradient descent with respect to the loss is then performed on the weights in the network to minimise the loss. This stage is called backpropagation. Iterating this process (if hyperparameters are carefully selected) leads to improved performance at the task, as the parameters are optimised to minimise the loss. As previously stated, training DNNs is a computationally expensive task, which has motivated research into a variety of methods to increase the efficiency of training. This includes recent work in the areas of transfer learning [42], pruning [4], and utilising Koopman Operator Theory [43]. We are most interested in reducing the over-parameterisation of DNNs and finding sparse subnetworks with similar performance to the original fully connected network, and hence will be using pruning. IMP is one such method of doing this, removing the weights with the smallest magnitude after t iterations of training the network. For more detailed analysis of the theory behind IMP, see [44],[45]. An algorithm for IMP, removing one parameter from the network each iteration of pruning, can be written as follows [44]: **Algorithm 1: IMP** For loss function \(\mathcal{L}:\mathbb{R}^{p}\rightarrow\mathbb{R}\), training time \(T\in\mathbb{R}_{+}\), initial weights \(w^{init}\in\mathbb{R}^{p}\) and \(q<p\) iterations of pruning: * Set \(M=\mathbf{1}_{P}\) * For \(k=0\) to \(q\): Initialise \(w^{(k)}(0)=Mw^{init}\) Train \(\dot{w}^{(k)}(t)=-M\nabla L(w^{(k)}(t))\) for \(t\in[0,T]\) Set \(i=\operatorname*{argmin}_{j\in[p]}\{|w^{(k)}_{j}|:M_{jj}=1\}\) Set \(M_{ii}=0\) * Return \(w^{(q)}(T)\) When applying the IMP algorithm to a DNN, it can either be applied across the whole network or to individual layers, removing a specified proportion of non-zero weights each iteration. In practice, this is a computationally expensive process, so to reduce the number of times that the network must be trained, a percentage of the remaining weights is removed each iteration. This percentage must be large enough to reduce the number of iterations, yet small enough not to over-prune. Work by Frankle and Carbin [3] shows the importance of resetting the weights to their original values before retraining: when searching for a potential lottery ticket, the initialisation is crucial to the efficiency of training the subnetwork, and random re-initialisation results in decreased accuracy. The mask is specific to the initialisation and will therefore only yield a sparsified subnetwork with similar efficiency if the initial point is the same, as stated in the LTH.
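A minimal sketch of Algorithm 1 in PyTorch follows; here pruning removes a fraction of the surviving weights per iteration rather than a single parameter, and `train_fn(model, masks)` is assumed to train the network fully while multiplying each weight gradient by its mask so that pruned weights stay at zero. The interface is ours for illustration.

```python
import torch

def imp(model, train_fn, prune_frac=0.05, iterations=20):
    """Iterative magnitude pruning with rewinding to the original
    initialisation (the winning ticket = masks + that initialisation).
    Only weight tensors are masked; biases are left unpruned."""
    init_state = {k: v.clone() for k, v in model.state_dict().items()}
    masks = {k: torch.ones_like(v) for k, v in model.state_dict().items()
             if "weight" in k}
    for _ in range(iterations):
        # Rewind surviving weights to their initial values.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for k, m in masks.items():
                model.state_dict()[k].mul_(m)
        train_fn(model, masks)  # assumed to zero masked gradients
        # Global magnitude criterion over the surviving weights.
        alive = torch.cat([model.state_dict()[k][m.bool()].abs().flatten()
                           for k, m in masks.items()])
        threshold = torch.quantile(alive, prune_frac)
        for k, m in masks.items():
            m[model.state_dict()[k].abs() < threshold] = 0.0
    return masks
```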
Furthermore, once pruning is complete, the sparsified network must be fully trained again from the initial point in order to give accurate results. This is because the just-pruned network is not fully optimised due to certain connections being set to zero, even if they were small. The importance of the initialisation in terms of the LTH means that a winning ticket is comprised of both the mask \(M\) and the weight initialisation \(w(0)\). ## 4 IMP as an RG ### Proof IMP is an RG scheme In order to use the RG tools defined in the previous section for the purposes of analysing IMP, it must be shown that IMP can be seen as a renormalisation group transformation. To do this, comparisons are made between DNNs with IMP and the 2D Ising model in order to motivate the construction of a coarse-graining process similar to that of the block spins. It can in fact be shown that they are both systems of an equivalent form. Although some features of RG theory, such as the correlation length, are not yet fully understood in this setting, following the work of Redman et al.[20] we can prove that IMP is a renormalisation group operator and hence gather useful tools from this link. On a conceptual basis, it is clear to see how a DNN under IMP can be visualised as an Ising model with coarse-graining. A DNN can be visualised as a lattice structure of nodes, with each node characterised by the activation value \(a_{j}=h(\sum_{i\in\mathbf{n}}w_{ij}a_{i}+b)\). Then each node is connected by a weighted connection to the nodes in the neighbouring layer. Taking a more formal approach, we can immediately see the similarities between the effects of the IMP operator \(\mathcal{I}\) and the \(\mathcal{R}\) operator from the block-spin system. Consider a DNN with parameters \(\boldsymbol{\theta}\), activations \(\mathbf{a}\), and loss function \(\mathcal{L}\). Applying the IMP operator \(\mathcal{I}\) gives: \[\mathcal{I}\mathcal{L}(\mathbf{a},\boldsymbol{\theta})=\mathcal{L}(\mathbf{a}^{\prime},\boldsymbol{\theta}^{\prime})=\mathcal{L}(\mathbf{a}^{\prime},\mathcal{T}\boldsymbol{\theta}) \tag{39}\] This is analogous to the \(\mathcal{R}\) operator acting on the Hamiltonian in the classical spin system, taking \(\mathcal{L}\) as analogous to \(\mathcal{H}\), \(\mathbf{a}\) for \(\mathbf{s}\) and \(\boldsymbol{\theta}\) for \(\mathbf{K}\). Here the operator \(\mathcal{T}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\) (N being the total number of parameters in the network) is a composition of a masking operator \(\mathcal{M}\) and a training operator \(\mathcal{F}\), thus only retraining the non-pruned parameters. \(\mathcal{M}\) is defined by the chosen method of pruning, which in our case is magnitude pruning, and \(\mathcal{F}\) is defined by the optimiser used and by whether the weights are rewound to their initialisation. Hence the full process of IMP can be written: \[\mathcal{I}^{n}\mathcal{L}(\mathbf{a}^{0},\boldsymbol{\theta}^{0})=\mathcal{L}(\mathbf{a}^{\prime n-1},\boldsymbol{\theta}^{\prime n-1})=\mathcal{L}(\mathbf{a}^{\prime n-1},\mathcal{T}^{n}\boldsymbol{\theta}^{0}) \tag{40}\] This sequence of evolving parameters \(\{\boldsymbol{\theta}^{i}\}_{i}\), determined by the eigenvectors of \(\mathcal{T}\), gives an equivalent to the RG flow described before. In the context of IMP, we call this sequence the IMP flow. In order to show that this is an RG scheme, we consider elements of the system analogous to those of the block-spin model.
Analogous to the spins \(\mathbf{s}\) are the activations \(\mathbf{a}\) of each unit in the model; the couplings between spins are analogous to the model parameters \(\boldsymbol{\theta}\); and the Hamiltonian \(\mathcal{H}\) is analogous to the loss function \(\mathcal{L}(\mathbf{a},\boldsymbol{\theta})\) [20]. For a general DNN, we define the activation \(a_{j}^{(i)}\) of the j-th unit in layer i as: \[a_{j}^{(i)}=h[\sum_{k}g_{k}(\mathbf{a},\boldsymbol{\theta})] \tag{41}\] where \(g_{k}(\mathbf{a},\boldsymbol{\theta})\) are the functions that determine the effect of other parameters and activations on \(a_{j}^{(i)}\) and \(h\) is the selected activation function. For example, in a feed-forward DNN, each activation is the sum of the products of the previous layer's activations and the weights connecting them, with an additional bias term added to each activation. Thus, there are two functions: 1. Bias: \(g_{0}=\theta_{j}^{(i)}\) 2. Weighted input from the previous layer: \(g_{1}=\sum_{k=1}^{N^{i-1}}\theta_{jk}^{(i)}a_{k}^{(i-1)}\) Where \(\theta_{j}^{(i)}\) are the biases for unit \(j\) in layer \(i\), \(\theta_{jk}^{(i)}\) are the weights connecting \(a_{k}^{(i-1)}\) to \(a_{j}^{(i)}\), and \(N^{i-1}\) is the number of activations in layer \((i\)-1). Therefore, after applying the IMP operator \(\mathcal{I}\), the parameters \(\boldsymbol{\theta}\) have been transformed, hence giving: \[a_{j}^{\prime(i)}=h[\sum_{k}g_{k}(\mathbf{a}^{\prime},\mathcal{T}\boldsymbol{\theta})]=h[\sum_{k}g_{k}(\mathbf{a}^{\prime},\mathcal{F}\circ\mathcal{M}\boldsymbol{\theta})] \tag{42}\] Therefore, we can define the associated projection operator for \(\mathcal{I}\) in the same way as we did for the block-spin case. \[P(a_{j}^{(i)},a_{j}^{\prime(i)})=\prod_{I}\delta(a_{j}^{\prime(i)}-h[\sum_{k}g_{k}(\mathbf{a}^{\prime},\boldsymbol{\theta}^{\prime})]) \tag{43}\] To ensure this is an RG projection operator, it can be verified that it meets the previously defined requirements of a projection operator (2.2) [20]: 1. As \(P()\) is a product of Kronecker deltas, it clearly satisfies (i) \(P(s_{i},s_{I}^{\prime})\geq 0\) 2. Property (ii) is met as the process of IMP only removes parameters from the model. Therefore, until the point where every weight in a layer has been pruned (layer collapse), the activations \(a_{j}^{(i)}\) still have the same form, and thus the loss function is still of the same form. 3. Property (iii) is met when the masking and training operators \(\mathcal{M}\) and \(\mathcal{F}\) are deterministic. This requires fixing the seed so that the sampling order of test and train data (in the case of supervised learning) is fixed for every epoch. This ensures that \(P\) is a unique projection operator. As the constructed projection operator fulfils the requirements of a renormalisation group projection operator, it has been shown that \(\mathcal{I}\) is an RG operator. ### IMP Flow Now that working in an RG framework has been justified, we can use the tools available in this theory to analyse IMP behaviour. Clearly, understanding the critical manifold is of great importance, as it can give a clear indication of which parameters are relevant and irrelevant during pruning. The parameters \(\theta_{i}\) are transformed by the transformation \(\mathcal{T}\). Therefore, by finding eigenfunctions of \(\mathcal{T}\), we can find the relevant and irrelevant eigenvalues/eigenvectors [20]. This is important in the context of the LTH, as different models which have the same relevant eigenvectors \(\mathbf{v}_{i}\) with \(\lambda_{i}>1\) should share the same subset of parameters which remain after pruning. One proposed eigenfunction of \(\mathcal{T}\) by [20] which will be considered relates to the proportion of the total magnitude of parameters remaining in each layer after n iterations of IMP. Consider the function \[M_{i}(n)=\frac{\sum_{j}|\theta_{j}^{(i)}(n)|}{\sum_{i}\sum_{j}|\theta_{j}^{(i)}(n)|} \tag{44}\] where \(\theta_{j}^{(i)}(n)\) is the jth weight in layer i after n iterations of IMP. In order to find the eigenvalues associated with this eigenfunction, we use the following relation \[M_{i}(n+1)=\lambda_{i}M_{i}(n) \tag{45}\] Therefore, to find \(\lambda_{i}\) we invert this to get \(\lambda_{i}=\frac{M_{i}(n+1)}{M_{i}(n)}\). As in the process of defining renormalisation groups, in order to have a meaningful value for the relevance of each layer, we write \(\lambda_{i}=l^{\sigma_{i}}\), where \(\sigma_{i}\) is invariant of the coarse-graining \(l\). Therefore, \(\sigma_{i}\) may be used to compare behaviour across different models which may use a different pruning rate \(l\).
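Computing the \(\sigma_{i}\) from a pruning run only requires the per-layer weight magnitudes at consecutive IMP iterations. A sketch follows, assuming PyTorch tensors; the precise definition of the scale \(l\) in terms of the pruning rate follows [20] and is passed in as a given constant here.

```python
import math

def layer_sigmas(theta_before, theta_after, l):
    """sigma_i = log(lambda_i)/log(l) with lambda_i = M_i(n+1)/M_i(n),
    where theta_before/theta_after are lists of per-layer weight tensors
    at IMP iterations n and n+1, and l is the coarse-graining scale."""
    def magnitude_fractions(layers):
        totals = [w.abs().sum().item() for w in layers]
        grand = sum(totals)
        return [t / grand for t in totals]

    M_n = magnitude_fractions(theta_before)
    M_n1 = magnitude_fractions(theta_after)
    return [math.log(m1 / m0) / math.log(l) for m0, m1 in zip(M_n, M_n1)]
```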
## 5 Neural Network DE Solvers To support the use of RG tools in finding universal lottery tickets, we apply this theory to the case of neural networks for solving differential equations. This is an important field as, unlike traditional solvers which generate a solution at a range of individual points, DNNs can provide a differentiable solution in a closed analytic form [33]. Another advantage is that they are very capable at solving high-dimensional problems while exploiting parallel computing. They have been found to be applicable to a wide range of PDEs and dynamical systems [46],[35],[33],[28],[24],[47],[48], so RG theory can allow us to find "similar" systems amongst these to help improve training times by utilising transferable lottery tickets. A useful example to consider is Hamiltonian Neural Networks (HNNs). These are designed to use unsupervised learning to solve problems while conserving physical properties - such as energy - and are applicable to a wide range of problems of different complexities in Hamiltonian mechanics [49]. Even though these networks are less complex than some solvers, they provide a useful insight into how RG tools can be used to analyse IMP and universality in scientific solvers for well-posed problems. As an example, we perform IMP on an existing neural network. The codebase for this can be found here [50]. Work by Mattheakis et al.[35] explores the behaviour of Hamiltonian neural networks being used to solve Hamilton's equations in order to obtain equations of motion for two different dynamical systems - a non-linear oscillator and a Hénon-Heiles system. For more analysis of these models see [35, 51, 52]. ### Nonlinear Oscillator The first case considered is a neural network designed to solve the nonlinear one-dimensional anharmonic oscillator described by the Hamiltonian \[\mathcal{H}=\frac{p^{2}}{2}+\frac{x^{2}}{2}+\frac{x^{4}}{4} \tag{46}\] where mass and natural frequency are taken to be 1. The Hamiltonian \(\mathcal{H}\) represents the total energy \(E\) of the system, while the associated Hamilton's equations for this system, given by \(\dot{x}=\frac{\partial\mathcal{H}}{\partial p}\) and \(\dot{p}=-\frac{\partial\mathcal{H}}{\partial x}\), are: \[\dot{x}=p,\qquad\dot{p}=-(x+x^{3}) \tag{47}\] These are solved by the neural network by minimising the loss function \[L=\frac{1}{K}\sum_{n=1}^{K}\left[\left(\dot{\hat{x}}^{(n)}-\hat{p}^{(n)}\right)^{2}+\left(\dot{\hat{p}}^{(n)}+\hat{x}^{(n)}+(\hat{x}^{(n)})^{3}\right)^{2}\right] \tag{48}\] which is the mean square error from Hamilton's equations.
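A sketch of this loss in PyTorch, differentiating the network outputs with respect to time via autograd (we omit the reparameterisation that [35] uses to enforce initial conditions exactly; `model` is assumed to map a column of times to \((x,p)\) pairs):

```python
import torch

def oscillator_loss(model, t):
    """Mean squared residual of Hamilton's equations x' = p and
    p' = -(x + x^3) for a network t -> (x(t), p(t))."""
    t = t.clone().requires_grad_(True)
    out = model(t)
    x, p = out[:, 0], out[:, 1]
    # d/dt of each output via automatic differentiation.
    x_dot = torch.autograd.grad(x.sum(), t, create_graph=True)[0].squeeze()
    p_dot = torch.autograd.grad(p.sum(), t, create_graph=True)[0].squeeze()
    return ((x_dot - p) ** 2 + (p_dot + x + x ** 3) ** 2).mean()
```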
The neural network used to solve this optimisation problem has 2 hidden layers, each with 50 neurons, and an output layer of 2 neurons (corresponding to the two degrees of freedom in the problem). Therefore, there are a total of (\(50+50\times 50+50\times 2=\)) 2650 weights in the neural network and 102 biases. The hyperparameters were set to the same values as used by Mattheakis et al. (lr = \(8\cdot 10^{-3}\), \(5\cdot 10^{4}\) epochs, ...), as these were shown to achieve a high-accuracy solution, which we aim to maintain during pruning. For now, we will ignore the effect of pruning biases directly, as much of that behaviour will be implicitly included when pruning weights. This is because when the weights connecting to a neuron are pruned, the bias at that neuron becomes less significant. In order to obtain an effective theory of IMP in neural networks for solving DEs, different pruning procedures are performed in order to understand the different scalings that exist in the model. First, the effect of pruning individual layers is explored. Therefore, to carry out the previously explained algorithm in this case, we only apply the masking operator to parameters in the desired layer - by which the process for one iteration of pruning on layer \(l\) becomes: * Fully train the neural network * Prune \(p\%\) of the remaining weights in layer \(l\) * Reset the remaining weights to their initial values * Fully train the remaining parameters To ensure IMP is implemented correctly, the weights are reset to their initial random values after each pruning iteration. Furthermore, to ensure that the pruned weights are no longer trained, the corresponding gradient is set to zero during the backpropagation stage. We expect to find similar behaviour to that found by Rosenfeld et al.[22], namely that error scales with a power-law relationship \(\epsilon=cd^{-\gamma}\) within a critical region of pruning. This would support the use of RG theory to analyse IMP. For each layer, numerical experiments have been performed, pruning at 1%, 5% and 10% each iteration to gain an understanding of how the pruning rate can affect the efficiency of IMP. Work by Vandersmissen et al.[12] has suggested that the lottery ticket hypothesis is not affected by pruning rate, so we expect to see similar behaviour but at different resolutions. Additionally, during each experiment, the layer was only pruned until 10% of weights remained, to prevent full layer collapse. Below are the results for these experiments:

Figure 1: Pruning of layer 1 with an initial 50 weights

Figure 2: Pruning of layer 2 with an initial 2500 weights

Here we can clearly begin to see the expected behaviour in loss. As more of the parameters are pruned, we see that the network is less able to learn how to solve the equations accurately.
Figure 3: Pruning of layer 3 with an initial 100 weights

From the finest level of pruning (1%) we can see how there is a critical percentage of weights that can be pruned in each layer before we see the power-law relationship between layer density and the error in the solution (e.g. layer 1, approximately \(50\%\)). This exemplifies the LTH, as there are clearly redundant weights which can be removed without negatively impacting accuracy. This is because other weights in the network are able to account for the lack of weights in other layers. This explains why, when pruning layers individually, the deeper layers can withstand less pruning before an increase in error. When weights are removed from the top of the network, there are many layers below which can account for this. However, when weights are removed from deeper in the network, information is being lost from the layers above which cannot be accounted for below. Using the gradient of the graphs in the interval where the power-law relationship is visible, the critical exponents corresponding to the error when pruning each layer individually can be observed. The case of \(1\%\) pruning is used, as this gives the finest resolution of the behaviour in each layer.

\begin{table} \begin{tabular}{|c|c|} \hline layer & \(\gamma\) \\ \hline layer 1 & 1.36 \\ \hline layer 2 & 0.44 \\ \hline layer 3 & 1.22 \\ \hline \end{tabular} \end{table} Table 1: Critical exponents relating to error as a function of density when pruning an individual layer

This gives an idea of each layer's sensitivity to pruning. For example, pruning either the input or output layer has the most significant impact on the accuracy of the model. Furthermore, comparing the different pruning rates, we see that each rate shows the same behaviour but at a different resolution. Therefore, IMP should be able to find a winning ticket for a range of pruning rates, so long as the rate is not too high. This corroborates findings by Vandersmissen et al.[12]. So far, only the "macro" effect of pruning one layer at a time on the network's error has been considered. The next experiment to be considered is that where all layers are pruned together. This gives a more detailed picture of how the coupling in the network develops during pruning, and therefore relates strongly to the RG theory previously explained. Therefore, comparing critical exponents from this experiment will give some idea of transferability. For this experiment, at each pruning stage, the \(5\%\) of smallest weights from the entire network are set to \(0\). This is the most useful case in finding potential lottery tickets, as one aims to remove as many parameters from the network as possible without negatively affecting accuracy. Similarly to the previous experiments, each layer is no longer pruned after reaching \(5\%\) density, to prevent layer collapse. As described earlier in section 4, this scenario directly compares to a 2D Ising model where we considered a lattice of activations coupled by weights in the network. The results we find for this case give a fuller picture of the potential of IMP in finding winning tickets and of the power-law scaling of error during pruning.

Figure 4: Error of NN solving the non-linear oscillator system during IMP pruning at 5%

In these results, the behaviour of power-law scaling within a certain critical region can clearly be observed, which helped motivate the use of RG theory to describe IMP. Furthermore, this corroborates findings from Rosenfeld et al. (2021) [53], which show that different machine learning models have three different periods during pruning. The first of these is the low-error plateau, where error is similar to that of the fully-connected network. The end of this region is the point of interest when trying to find a winning lottery ticket. The next stage is where the power-law scaling is observed. As stated earlier, this is modelled by \(\epsilon\approx cd^{-\gamma}\). The final segment is where the error levels off at a high value, as the network is no longer able to learn anything significant. This is called the high-error plateau \(\epsilon^{\dagger}\) [53]. Extracting the gradient of the log-log graph for the power-law scaling section of the pruning process gives the critical exponent for the error as a function of density. So, for this system, the critical exponent \(\gamma=9.61\).
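Extracting \(\gamma\) amounts to a least-squares fit on the log-log plot; a sketch follows, assuming the `density`/`error` arrays have already been restricted to the power-law segment between the two plateaus.

```python
import numpy as np

def fit_critical_exponent(density, error):
    """Fit error ~ c * density**(-gamma):
    log(error) = log(c) - gamma * log(density)."""
    slope, intercept = np.polyfit(np.log(density), np.log(error), 1)
    return -slope, np.exp(intercept)  # (gamma, c)
```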
### IMP Flow For the case where parameters were pruned from any layer in the model, we use the procedure previously defined in (44) to obtain a value of \(\sigma_{i}\) for each layer of the DNN, to gather a representation of the significance of each layer in the pruning process. For the non-linear oscillator, the \(\sigma_{i}\) from each eigenvalue for each hidden layer are as follows. These results were collected while pruning 5% of the network's weights during each iteration of IMP; however, they are independent of the pruning rate.

\begin{table} \begin{tabular}{|c|c|} \hline \(\sigma_{1}\) & 0.313 \\ \hline \(\sigma_{2}\) & -0.010 \\ \hline \(\sigma_{3}\) & 0.736 \\ \hline \end{tabular} \end{table} Table 2: Critical exponents corresponding to the magnitude remaining in each layer of the non-linear oscillator system throughout pruning

Clearly, we see that this supports the data from the previous section: layers 1 and 3 are relevant, whereas layer 2 is not relevant (using the definition of relevant and irrelevant eigenvalues from earlier). ### Chaotic Hénon-Heiles As previously stated, one of the most useful links between the LTH and RG theory is the concept of universality. In application, once one successful mask has been found to give efficient results on one network, it may be transferred to another network without having to re-perform the computationally expensive pruning process. Work by Morcos et al.[18] has already shown that for certain tasks, such as computer vision, winning tickets can be transferred across different tasks/models. Furthermore, work by Redman et al.[20] has suggested that in the cases where this is possible, the blocks/layers have similar relevance (\(\sigma_{i}\)) and can therefore be considered in the same universality class. Therefore, in an attempt to find another model in the same universality class as the non-linear oscillator solver analysed, we analyse a new neural network DE solver for a Hénon-Heiles oscillator system [54]. The Hénon-Heiles system is a chaotic system describing the non-linear planar trajectory of a body around a galactic centre. The degrees of freedom in the system are \((x,y,p_{x},p_{y})\), representing position and momentum respectively.
### IMP Flow

For the case where parameters were pruned from any layer in the model, we use the procedure previously defined in (44) to obtain a value of \(\sigma_{i}\) for each layer of the DNN, giving a representation of the significance of each layer in the pruning process. For the non-linear oscillator, the \(\sigma_{i}\) from each eigenvalue for each hidden layer are as follows. These results were collected by pruning 5% of the network's weights during each iteration of IMP; however, they are independent of the pruning rate.

\begin{table} \begin{tabular}{|c|c|} \hline \(\sigma_{1}\) & 0.313 \\ \hline \(\sigma_{2}\) & -0.010 \\ \hline \(\sigma_{3}\) & 0.736 \\ \hline \end{tabular} \end{table} Table 2: Critical exponents corresponding to the magnitude remaining in each layer of the non-linear oscillator system throughout pruning.

Figure 4: Error of NN solving the non-linear oscillator system during IMP pruning at 5%.

Clearly we see that this supports the data from the previous section: layers 1 and 3 are relevant whereas layer 2 is not (using the definitions of relevant and irrelevant eigenvalues given previously).

### Chaotic Hénon-Heiles

As previously stated, one of the most useful links between the LTH and RG theory is the concept of universality. In application, once one successful mask has been found to give efficient results on one network, it may be transferred to another network without having to re-perform the computationally expensive pruning process. Work by Morcos et al. [18] has already shown that for certain tasks, such as computer vision, winning tickets can be transferred across different tasks/models. Furthermore, work by Redman et al. [20] has suggested that in the cases where this is possible, the blocks/layers have similar relevance (\(\sigma_{i}\)) and can therefore be considered to be in the same universality class. Therefore, in an attempt to find another model in the same universality class as the non-linear oscillator solver analysed, we analyse a new neural network DE solver for the Hénon-Heiles oscillator system [54]. The Hénon-Heiles system is a chaotic system describing the non-linear planar trajectory of a body around a galactic centre. The degrees of freedom in the system are \((x,y,p_{x},p_{y})\), representing position and momentum respectively. The total energy of the system is given by the following Hamiltonian [35]: \[\mathcal{H}=\frac{1}{2}(p_{x}^{2}+p_{y}^{2})+\frac{1}{2}(x^{2}+y^{2})+(x^{2}y-\frac{1}{3}y^{3}) \tag{49}\] Therefore we can extract Hamilton's equations: \[\begin{split}\dot{x}=\frac{\partial\mathcal{H}}{\partial p_{x}}=p_{x}\\ \dot{y}=\frac{\partial\mathcal{H}}{\partial p_{y}}=p_{y}\\ \dot{p_{x}}=-\frac{\partial\mathcal{H}}{\partial x}=-(x+2xy)\\ \dot{p_{y}}=-\frac{\partial\mathcal{H}}{\partial y}=-(y+x^{2}-y^{2})\end{split} \tag{50}\] As in the one-dimensional case, the loss function is taken to be the mean squared error from Hamilton's equations in order to conserve the Hamiltonian: \[L=\frac{1}{K}\sum_{n=0}^{K}[(\dot{\hat{x}}^{(n)}-\hat{p}_{x}^{(n)})^{2}+(\dot{\hat{y}}^{(n)}-\hat{p}_{y}^{(n)})^{2}+(\dot{\hat{p}}_{x}^{(n)}+\hat{x}^{(n)}+2\hat{x}^{(n)}\hat{y}^{(n)})^{2}+(\dot{\hat{p}}_{y}^{(n)}+\hat{y}^{(n)}+(\hat{x}^{(n)})^{2}-(\hat{y}^{(n)})^{2})^{2}] \tag{51}\]

The neural network used to solve this problem has a structure very similar to that of the non-linear oscillator, with some slight differences. As before, two hidden layers of 50 neurons are used, but the output layer now has 4 nodes instead of 2, to account for the extra dimension in the problem. In order to keep the rest of the system similar, the same activation function (\(\sin(\cdot)\)) and optimiser (Adam) are used. With both structural similarities and similarities in the problems being solved, it is hoped that the two systems considered are in the same universality class. Repeating the procedure used for the non-linear oscillator system, the IMP flow of parameters in the new system is found. The \(\sigma_{i}\) corresponding to each eigenvalue is found to be as follows:

\begin{table} \begin{tabular}{|c|c|c|} \hline & Non-linear Oscillator & Hénon-Heiles Oscillator \\ \hline \(\sigma_{1}\) & 0.313 & 0.375 \\ \hline \(\sigma_{2}\) & -0.010 & -0.012 \\ \hline \(\sigma_{3}\) & 0.736 & 0.630 \\ \hline \end{tabular} \end{table} Table 3: Critical exponents corresponding to the magnitude remaining in each layer of both systems.

Figure 5: Error of NN solving the Hénon-Heiles system during IMP at a 5% pruning rate.

Clearly we can see that the two systems have the same relevant layers and similar values of \(\sigma_{i}\) in each layer. This suggests that a winning ticket can be transferred between the two models. Plotting the error as the model is pruned, we see that it follows a behaviour similar to that of the non-linear oscillator, in that the periods of low-error plateau, power-law scaling, and high-error plateau fall within similar regions of density. Furthermore, the two systems have similar values of the critical exponent \(\gamma\), from equation (1), for the macro-observable error. Therefore, according to RG theory, the two systems should exhibit the same behaviour under the IMP operator.
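The loss in equation (51) is straightforward to implement with automatic differentiation. Below is a minimal sketch (our own, not the paper's code), assuming a network `net` that maps a batch of times \(t\) of shape (K, 1) to \((\hat{x},\hat{y},\hat{p}_{x},\hat{p}_{y})\):

```python
import torch

def henon_heiles_loss(net, t):
    """Mean-squared residual of Hamilton's equations (eq. 51) for a
    network mapping time t -> (x, y, px, py)."""
    t = t.requires_grad_(True)
    out = net(t)                                  # shape (K, 4)
    x, y, px, py = out.unbind(dim=1)

    def d_dt(u):
        # Time derivative of each output via autograd.
        return torch.autograd.grad(u.sum(), t, create_graph=True)[0].squeeze(-1)

    xd, yd, pxd, pyd = d_dt(x), d_dt(y), d_dt(px), d_dt(py)
    res = ((xd - px) ** 2 + (yd - py) ** 2
           + (pxd + x + 2 * x * y) ** 2
           + (pyd + y + x ** 2 - y ** 2) ** 2)
    return res.mean()
```

Training then minimises this residual over the sampled time points, exactly as for the one-dimensional oscillator.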
### Universality

So far, it has been observed that these two similar systems have the same relevant layers and similar \(\sigma_{i}\) and \(\gamma\) values. This implies that a winning ticket found in one case can be transferred between the two systems and still provide improved performance. However, the difference in architecture means that the mask and initialisation cannot simply be carried between the two networks. Therefore, we utilise a strategy suggested by experiments on the Elastic Lottery Hypothesis (ELH) [19]. The Elastic Lottery Hypothesis is an extension of the LTH: it suggests that a winning ticket for a certain system can be transferred to a system with a different architecture while still maintaining performance and improving training time. In order to do this, winning tickets must be "stretched" or "squeezed" into the new architecture. For the case considered here, the winning ticket for the non-linear oscillator must be stretched into the wider Hénon-Heiles architecture. The suggested method for doing this, which has shown success in other types of neural network [19], is to duplicate a block/layer of the winning ticket to extend it. Therefore, a winning ticket for the non-linear oscillator must have its third layer duplicated in order to be applied to the Hénon-Heiles system. Using this method to stretch the winning ticket, the masks found while pruning the non-linear oscillator are applied to the Hénon-Heiles network. Because the two systems differ, the effect of certain hyperparameters must also be investigated. Although hyperparameters such as width, depth and learning rate are easily kept the same, the time span over which the system is solved is also important. In the non-linear oscillator case, the network solves Hamilton's equations for 200 equally spaced points on the time interval \([0,4\pi]\). Therefore, it would be useful to know whether the lottery ticket for this model is applicable to solutions over multiple time intervals. Plotting the results below, we fit a curve to capture the general trend for each system. What can be seen is that a lottery ticket is most effective when applied to a Hénon-Heiles system solving over a similar time period. This is because the complexity and hyperparameters of the original system must be capable of solving the new system effectively. Solving over a longer integration time requires a network of greater complexity and size; hence the periods of low-error plateau, power-law scaling, and high-error plateau are much less visible. The optimal integration time depends on the system being considered: the best candidate is one which yields a similar error for the fully-connected network. Comparing the results for \(t=4\pi\) with existing experiments in the field of computer vision, it is seen that transferability is not as strong in neural network DE solvers. One potential reason for this is that the neural networks in this case are far smaller than in convolutional networks. For example, ResNet-50 models contain several million trainable parameters, whereas simple DE solvers contain fewer than 10,000. Therefore, there is a more clearly defined region for the low-error plateau, as each trainable parameter is far less significant in the entire network, and hence has less effect when removed. Repeating this process in the other direction, we take masks from throughout the pruning process of the Hénon-Heiles network and apply them to the non-linear oscillator system. Due to the differing architectures of the systems, the ticket must now be "squeezed", as opposed to stretched, into the new network. This involves removing blocks from the lottery ticket. In this case, half of the final layer of weights is removed so that it outputs 2 values as opposed to 4. Work by Chen et al. [19] suggests that systems are not sensitive to whether consecutive or non-consecutive blocks are removed; a sketch of both operations follows below.
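A minimal sketch of the two mask transformations (our illustration; the shapes follow the output layers described above, and the function names are ours):

```python
import torch

def stretch_mask(mask, factor=2):
    """'Stretch' a final-layer mask by duplicating its output rows,
    e.g. the (2, 50) oscillator output mask becomes the (4, 50) mask
    needed by the wider Hénon-Heiles network."""
    return mask.repeat(factor, 1)

def squeeze_mask(mask, keep_rows=(0, 2)):
    """'Squeeze' a ticket by removing output rows; dropping rows 1 and
    3 (the second-dimension position and momentum) turns a (4, 50)
    mask into a (2, 50) mask for the 1-D oscillator."""
    return mask[list(keep_rows)]

oscillator_mask = (torch.rand(2, 50) > 0.5).float()
hh_mask = stretch_mask(oscillator_mask)   # (4, 50)
back = squeeze_mask(hh_mask)              # (2, 50)
```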
Guided by this, we remove the second and fourth blocks, as these correspond to position and momentum in the second dimension.

Figure 6: Transferability of winning ticket to Hénon-Heiles system solving over different integration times.

As we can see, the transferability works in both directions, with the systems showing similar behaviour at similar densities. Again, the value of the error is dependent on the integration time. As transferability works in both directions, this is strong evidence that the two systems are in fact in the same universality class.

## 6 Conclusions

By comparing DNNs under IMP with extensively studied thermodynamic systems, we have been able to motivate the use of RG theory to explain neural network behaviour during pruning. Following the work of Redman et al. [20], we were able to formally show that IMP is a renormalisation group transformation. As renormalisation group theory has been the primary method of explaining universality across different systems sharing the same critical exponents during a phase transition, we aimed to extend existing results in the field of computer vision to neural network DE solvers. In the context of the winning lottery ticket hypothesis, this involved transferring lottery tickets from a Hamiltonian Neural Network solving Hamilton's equations in a non-linear oscillator system to one solving Hamilton's equations in the Hénon-Heiles system. Firstly, by pruning individual layers of the non-linear oscillator system, we saw that the system is in fact over-parameterised, and therefore a winning ticket does exist that improves training time without negatively impacting accuracy. Then, pruning both the non-linear oscillator and the Hénon-Heiles system showed that they both have the same relevant layers (shown by the \(\sigma_{i}\) values). Hence, the work by Redman et al. suggests that these two systems will show transferability of winning lottery tickets. Experiments for this showed that transferring lottery tickets preserves the pruning behaviour in the new system. However, the success of this depends on the integration time used in each system.

Figure 7: Transferability of winning ticket to non-linear oscillator system.

As winning lottery tickets only enable a neural network to achieve the accuracy of the fully-connected network, the fully-connected network must be capable of reaching a high enough accuracy. Therefore, transferring winning lottery tickets is most successful when the new system is similarly over-parameterised and has a similar accuracy to the original system. To solve for larger integration times, larger networks with more complex architectures must be used. The issue when doing this is that either the winning ticket must be stretched to fit the new architecture, or the system must be pruned again to find a new winning ticket. The experiments performed provide further supporting evidence for the results found by Chen et al. [19] on the Elastic Lottery Hypothesis: duplicating winning tickets into wider network architectures can provide high accuracy during transfer.

### Future Directions

There are many steps that can be taken to further the knowledge of the transferability of winning lottery tickets between different neural network models for solving DEs. Firstly, one could study the behaviour of deeper networks and whether this leads to more disparities in layer relevance.
Currently, we have only looked at simple networks with 2 hidden layers, but more complex systems and longer integration times will require more complex architectures, in which pruning will be even more important to reduce computation time. This also enables research into transferability between models of different depths, extending knowledge around the Elastic Lottery Hypothesis. Additionally, considering more systems will allow them to be categorised by their critical exponent \(\gamma\) to find different classes of "similar" solvers. As using RG theory to analyse IMP has shown promising results, filling in the gaps in the connection between the two, such as correlation functions, could give new insights into the pruning process.
2304.10593
DeepReShape: Redesigning Neural Networks for Efficient Private Inference
Prior work on Private Inference (PI) -- inferences performed directly on encrypted input -- has focused on minimizing a network's ReLUs, which have been assumed to dominate PI latency rather than FLOPs. Recent work has shown that FLOPs for PI can no longer be ignored and incur high latency penalties. In this paper, we develop DeepReShape, a technique that optimizes neural network architectures under PI's constraints, optimizing for both ReLUs and FLOPs for the first time. The key insight is that strategically allocating channels to position the network's ReLUs in order of their criticality to network accuracy simultaneously optimizes ReLU and FLOPs efficiency. DeepReShape automates network development with an efficient process, and we call the generated networks HybReNets. We evaluate DeepReShape using standard PI benchmarks and demonstrate a 2.1% accuracy gain with a 5.2$\times$ runtime improvement at iso-ReLU on CIFAR-100 and an 8.7$\times$ runtime improvement at iso-accuracy on TinyImageNet. Furthermore, we investigate the significance of network selection in prior ReLU optimizations and shed light on the key network attributes for superior PI performance.
Nandan Kumar Jha, Brandon Reagen
2023-04-20T18:27:02Z
http://arxiv.org/abs/2304.10593v4
# DeepReShape: Redesigning Neural Networks for Efficient Private Inference

###### Abstract

Prior work on Private Inference (PI), where inferences are performed directly on encrypted input, has focused on minimizing a network's ReLUs, which have been assumed to dominate PI latency rather than FLOPs. Recent work has shown that FLOPs for PI can no longer be ignored and also incur high latency penalties. In this paper, we develop DeepReShape: a network redesign technique that tailors architectures to PI constraints, optimizing for both ReLUs and FLOPs for the first time. The key insight is that a strategic allocation of channels, such that the network's ReLUs are distributed in their criticality order, simultaneously optimizes both ReLU and FLOPs efficiency. DeepReShape automates network development with an efficient process, and we call the generated networks HybReNets. We evaluate DeepReShape using standard PI benchmarks and demonstrate a 2.1% accuracy gain with a 5.2\(\times\) runtime improvement at iso-ReLU on CIFAR-100, and an 8.7\(\times\) runtime improvement at iso-accuracy on TinyImageNet. Finally, we show that prior PI-specific network optimizations are complementary, and apply them to HybReNets for further benefit.

## 1 Introduction

As machine learning inferences are increasingly performed in the cloud, privacy concerns have emerged, resulting in the development of private inference (PI), where a client sends encrypted input to the cloud service provider, enabling inferences without exposing their data. While effective, the usage of complex cryptographic primitives [1; 2; 3; 4; 5; 6; 7] in PI results in substantially higher computational and storage overheads [8; 9; 10; 11; 12; 13]. Prior work on PI-specific network optimization [14; 15; 16; 17; 18; 19; 20; 21] has primarily focused on mitigating the overheads associated with non-linear (ReLU) operations, assuming FLOPs are free. Specifically, methods such as CryptoNAS [14] and Sphynx [19] use neural architecture search to optimize ReLU efficiency and disregard the FLOP implications. However, recent work [13] has challenged this assumption, emphasizing that FLOPs do carry significant latency penalties. Improving PI efficiency is further constrained by the limitations of current ReLU-optimization techniques. Their effectiveness largely depends on the selection of the input network, resulting in significant performance disparities that cannot be solely attributed to the FLOP count or accuracy of the input networks. Moreover, techniques such as DeepReDuce [17] encounter scalability issues and require considerable manual effort. While fine-grained ReLU optimization [20; 21] shows potential, its effectiveness is _confined_ to networks with specific ReLU distributions and tends to _underperform_ in networks with higher ReLU counts or an altered ReLU distribution. Another major challenge that persists in this domain is identifying the network attributes that enhance PI performance. Current ReLU optimization methods [17; 20; 21] offer limited insight into the network features contributing to improved PI performance. Moreover, it remains unclear whether a network with specific features can consistently outperform across various ReLU counts, or whether the targeted ReLU counts determine the desired network attributes.
Addressing these issues, we introduce a novel design principle, "ReLU-equalization," which builds on our main _insight_: by expanding the network's width while distributing its ReLUs according to their criticality order, we can control the FLOPs growth in deeper layers without sacrificing ReLU efficiency, thus achieving ReLU and FLOPs efficiency simultaneously. Our _key observation_, termed the "Capacity-Criticality-Tradeoff," demonstrates that different network features are desirable for superior performance at different ReLU counts. We find that wider networks are beneficial only at higher ReLU counts, while the proportion of non-critical ReLUs is crucial at lower ReLU counts. This observation _challenges the commonly held belief_ that a network's overall ReLU count is the key determinant of superior performance at lower ReLU counts. By utilizing this, we achieve a significant reduction, up to **45\(\times\)**, in FLOPs when targeting lower ReLU counts. Leveraging the above insights, we develop "DeepReShape" to redesign classical networks and synthesize PI-efficient networks, "HybReNets," with a computational complexity of \(\mathcal{O}(1)\). Our approach results in a substantial FLOPs reduction with fewer ReLUs, outperforming the state-of-the-art in PI. Specifically, compared to SENet [21], we achieve a 2.3\(\times\) ReLU and 3.4\(\times\) FLOPs reduction at iso-accuracy, and a 2.1% accuracy gain with a **12.5\(\times\)** FLOPs reduction at iso-ReLU on CIFAR-100. On TinyImageNet our approach saves **12.4\(\times\)** FLOPs at iso-accuracy. Our key contributions are summarized as follows.

1. We perform an exhaustive characterization to understand the essential network characteristics, architecture and ReLU distribution, for PI efficiency, and demonstrate their generalizability.
2. We propose _ReLU-equalization_, a novel design principle for distributing the network's ReLUs in their criticality order, and design a family of networks, _HybReNet_, tailored to the PI constraints.
3. We propose _ReLU-reuse_, a channel-wise ReLU dropping technique to systematically reduce the network's ReLU count by up to **16\(\times\)**.

## 2 Preliminary

**Private inference protocols and threat model:** We use the Delphi [9] protocols, as also used in [17; 20], for private inference. In particular, for linear layers, Delphi performs compute-heavy homomorphic operations [1; 2; 3] in the offline phase (preprocessing) and additive secret sharing [22] in the online phase, once the client's input is available. For nonlinear (ReLU) layers, it uses garbled circuits [23; 24]. Further, similar to [25; 8; 9], we assume an honest-but-curious adversary, where parties follow the protocols and learn nothing beyond their output shares.

**Architectural building blocks:** Figure 2 illustrates a schematic view of a standard four-stage network with its design hyperparameters. Similar to ResNet [26], it has a stem cell (to increase the channel count from 3 to \(m\)), followed by the network's main body (composed of linear and nonlinear layers, performing most of the computation), followed by a head (a fully connected layer) yielding the scores for the output classes. The network's main body is composed of a sequence of four stages; the spatial dimensions of the feature maps (\(d_{k}\times d_{k}\)) are progressively reduced by \(2\times\) in each stage (except Stage1), and the feature dimensions remain constant within a stage.
We keep the structure of the stem cell and head fixed and change the structure of the network's body using the design hyperparameters.

Figure 1: HybReNet outperforms state-of-the-art (SOTA) ReLU-optimization methods [21; 20; 17], achieving higher accuracy (CIFAR-100) and a significant reduction in FLOPs while using fewer ReLUs.

**Definitions and design hyperparameters:** Each stage is composed of identical blocks¹ repeated \(\phi_{1}\), \(\phi_{2}\), \(\phi_{3}\), and \(\phi_{4}\) times in Stage1, Stage2, Stage3, and Stage4 (respectively), known as the _stage compute ratios_. The number of output channels in the stem cell (\(m\)) is known as the _base channels_, and the number of channels progressively increases by a factor of \(\alpha\), \(\beta\), and \(\gamma\) in Stage2, Stage3, and Stage4 (respectively), which we term the _stagewise channel multiplication factors_. These width and depth hyperparameters primarily determine the distribution of FLOPs, ReLUs, and parameters in the network. When we widen the network (1) by augmenting \(m\), which increases the #channels in each layer by the same factor, we denote this network as **BaseCh** (e.g., from \(m\)=64 to \(m\)=128); and (2) by homogeneously augmenting (\(\alpha\), \(\beta\), \(\gamma\)), we denote this network as **StageCh** (e.g., from (\(\alpha\), \(\beta\), \(\gamma\)) = (2, 2, 2) to (\(\alpha\), \(\beta\), \(\gamma\)) = (3, 3, 3)).

Footnote 1: Except the first block (in all but Stage1), which downsamples the feature maps by \(2\times\).

## 3 Essential Network Attributes for Efficient Private Inference

We present our key observations highlighting the influence of network architecture and ReLU distribution on the efficacy of private inference.

**Observation 1: A ReLU criticality-aware network widening approach can address the FLOPs-ReLU-Accuracy imbalance, a key shortcoming of existing network widening methods.** Despite a line of seminal work on network width expansion [27; 28; 29; 30], approaches that leverage the potential benefits of increased width without incurring a FLOPs-ReLU-Accuracy imbalance remain elusive. The prevailing network widening method (BaseCh), including WideResNet [27], offers _limited_ ReLU efficiency due to the conservative (\(\alpha\), \(\beta\), \(\gamma\)) = (2, 2, 2) values², which constrain channel growth in subsequent stages and restrict the network's complexity per ReLU unit (see Table 1). Since prior ReLU optimization methods rely on classical networks, this limitation _prevents_ the full realization of the potential benefits associated with increased network width.

Footnote 2: Even the state-of-the-art vision models RegNets [31] have \(1.5\leq(\alpha,\beta,\gamma)\leq 3\), for FLOPs-efficiency reasons.

In contrast, StageCh networks significantly improve ReLU efficiency compared to BaseCh networks by removing the constraint on (\(\alpha\), \(\beta\), \(\gamma\)) and requiring fewer ReLUs for a given complexity (Figure 3(a)). However, the superiority of StageCh networks remains evident only until accuracy saturation is reached, which varies with the network configuration. In particular, as shown in Figure 3(b), accuracy saturation for the StageCh networks of the ResNet18, ResNet20, ResNet32, and ResNet56 models begins at (\(\alpha\), \(\beta\), \(\gamma\)) = (4, 4, 4), (5, 5, 5), (5, 5, 5), and (6, 6, 6), respectively, suggesting that deeper StageCh networks plateau at higher (\(\alpha\), \(\beta\), \(\gamma\)) values and that accurately predicting the saturation point is challenging.
These observations challenge the assertion made in [14] that model capacity per ReLU peaks at (\(\alpha\), \(\beta\), \(\gamma\)) = (4, 4, 4). Consequently, _the extent to which a network can benefit from increased width for superior ReLU efficiency remains an open question_. Our proposed ReLU criticality-aware network widening approach effectively optimizes both ReLU and FLOPs efficiency, which a StageCh network fails to achieve, as distinct homogeneous sets of (\(\alpha,\beta,\gamma\)) are required for FLOPs and ReLU efficiency. We leverage the _insight_ that ReLU-efficient StageCh networks suffer from rapidly increasing FLOPs in deeper layers, while our approach limits excessive FLOPs in deeper layers without compromising ReLU efficiency. Consequently, our approach attains ReLU efficiency on par with StageCh networks while significantly lowering FLOPs. For instance, Figure 3(c,d) demonstrates that the ReLU criticality-aware ResNet18 network 5x5x3x maintains similar ReLU efficiency with a \(2\times\) reduction in FLOPs count compared to the StageCh network 5x5x5x. This FLOPs reduction is consistently attained across the entire spectrum of ReLU counts, employing both fine-grained and coarse-grained ReLU optimization.

Table 1: Network complexity (FLOPs and Params) per unit of nonlinearity varies with the network's width and is independent of the network's depth; per stage, complexity per ReLU grows with the channel count (\(m\), \(\alpha m\), \(\alpha\beta m\), \(\alpha\beta\gamma m\)) and the kernel area \(f^{2}\). _Wider networks require fewer ReLUs for a given complexity._ (\(f\times f\) is the spatial size of the kernel, e.g., \(3\times 3\).)

Figure 2: Depiction of architectural hyperparameters and feature dimensions in a four-stage network.

**Observation 2: The performance of ReLU optimization methods is strongly correlated with the choice of input network, leading to substantial performance disparities.** Table 2 lists the input networks used in previous ReLU optimization methods with their relevant characteristics, while Figure 4 demonstrates how different input networks impact DeepReDuce [17] and SNL [20]. For DeepReDuce, accuracy differences of **12.9%** and **11.6%** at higher and lower iso-ReLU counts are observed. _These differences cannot be ascribed to the FLOPs or accuracy of the baseline network alone._ For instance, ResNet18 outperforms WideResNet22x8 despite having 4.4\(\times\) fewer FLOPs and a lower baseline accuracy, and ResNet32 outperforms VGG16 even though the latter has 4.76\(\times\) more FLOPs and a higher baseline accuracy.
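To make the width hyperparameters concrete, here is a small sketch (our own illustration, not the paper's code) of how \((m,\alpha,\beta,\gamma)\) determine the per-stage widths; the three variants it prints are exactly the iso-ReLU ResNet18 configurations studied in the next observation.

```python
def stage_channels(m, alpha, beta, gamma):
    """Per-stage channel widths implied by the design hyperparameters:
    the stem outputs m channels, and Stage2/3/4 widths grow by the
    stagewise multiplication factors (alpha, beta, gamma)."""
    return [m, alpha * m, alpha * beta * m, alpha * beta * gamma * m]

print(stage_channels(32, 2, 2, 2))  # 2x2x2x(m=32) -> [32, 64, 128, 256]
print(stage_channels(16, 4, 4, 4))  # 4x4x4x(m=16) -> [16, 64, 256, 1024]
print(stage_channels(16, 3, 7, 2))  # 3x7x2x(m=16) -> [16, 48, 336, 672]
```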
**Observation 3: Distinct network attributes are required for superior PI efficiency at higher and lower ReLU counts (Capacity-Criticality-Tradeoff).** To examine the essential network characteristics for PI efficiency across a broad spectrum of ReLU counts, we select three iso-ReLU ResNet18 variants with distinct ReLU distributions and FLOPs counts, achieved by altering the channel allocation per stage (Table 3). In particular, 2x2x2x(\(m\)=32), 4x4x4x(\(m\)=16), and 3x7x2x(\(m\)=16) have stagewise channel allocations of [32, 64, 128, 256], [16, 64, 256, 1024], and [16, 48, 336, 672], respectively. We apply DeepReDuce and SNL to these networks, and the results are shown in Figure 5. We observe that wider networks are superior _only_ at higher ReLU counts, while networks with a higher proportion of non-critical ReLUs excel at lower ReLU counts. This trend is consistent for both DeepReDuce and SNL. Specifically, the wider models 4x4x4x(\(m\)=16) and 3x7x2x(\(m\)=16) outperform 2x2x2x(\(m\)=32) at higher ReLU counts; however, 2x2x2x(\(m\)=32) excels at lower ReLU counts despite having \(\approx 4\times\) fewer FLOPs. This performance is attributed to the higher fraction (58.82%) of non-critical Stage1 ReLUs in the 2x2x2x(\(m\)=32) model, as ReLU optimization methods primarily target these ReLUs when aiming for low counts. Consequently, networks with more Stage1 ReLUs need to eliminate fewer critical ReLUs, resulting in less accuracy degradation. For a thorough validation and explanation of the above observations, refer to Appendices C.2 and C.3. The above findings offer insight into the accuracy trends for SNL in Figure 4(b): the higher the Stage1 ReLU proportion (58.8% for ResNet18, 47.7% for WRN22x4, and 43.9% for WRN16x8), the higher the accuracy at lower ReLU counts. Moreover, they elucidate the rationale for choosing WRN22x8 (48.2% Stage1 ReLU proportion) for higher ReLU counts and ResNet18 for lower ReLU counts in the ReLU-accuracy Pareto fronts of SNL and SENet [21].

**Observation 4: ReLU-Thinning improves the efficacy of fine-grained ReLU optimization, predominantly for networks possessing higher ReLU counts.** We employ a _hybrid_ ReLU optimization approach and incorporate ReLU-Thinning, a coarse-grained ReLU optimization step used in DeepReDuce, before SNL optimization. Interestingly, _even when the baseline Thinned models are less accurate_, a significant boost (up to **3%** at iso-ReLUs) in accuracy is observed, more pronounced for networks with higher #ReLUs (ResNet34 and WRN22x8, in Table 4). Since ReLU-Thinning drops the ReLUs from the network's alternate layers, _irrespective of their criticality_, its integration into existing ReLU optimization methodologies does not impact their overall computational complexity and remains effective for reducing the search space required to identify critical ReLUs.

**Observation 5: Altering the network's ReLU distribution causes suboptimal performance in fine-grained ReLU optimization, and employing ReLU-Thinning reduces the performance gap.** We perform a contrastive analysis of fine-grained ReLU optimization (SNL) against DeepReDuce on the PI-amenable wider models 4x4x4x(\(m\)=16) and 3x7x2x(\(m\)=16), listed in Table 3. As shown in Figures 6(a) and 6(b), DeepReDuce outperforms SNL by a significant margin (up to 3%-4%); however, performing ReLU-Thinning before SNL optimization reduces this accuracy gap.
This suggests that the benefit of fine-grained over coarse-grained ReLU optimization is _limited_ to a specific ReLU distribution, and it diminishes in networks with a lower proportion of the network's ReLUs in Stage1. This constraint applies to ReLU criticality-aware networks as well, as shown in Figure 18.

## 4 DeepReShape

**Intuition for ReLU criticality-aware network widening:** We first examine the impact of the existing network widening approaches (BaseCh and StageCh) on the network's ReLU distribution. Interestingly, we observe that increasing the network's width by augmenting \(\alpha\), \(\beta\), and \(\gamma\) in StageCh networks results in a distinctive ReLU distribution, unlike BaseCh networks, where all layers undergo the same scaling of #ReLUs. In particular, as illustrated in Figure 8, the proportion of Stage1 ReLUs decreases while that of the other stages increases, and at higher (\(\alpha\), \(\beta\), \(\gamma\)), where accuracy starts saturating, the proportion in Stage4 dominates. This implies that the proportion of non-critical ReLUs is decreasing, while the distribution of ReLUs among the other stages does not strictly adhere to their criticality order (see the criticality evaluation in Table 8). This leads us to propose a network widening approach that increases the width until the ReLU distribution follows the criticality order, where the _most-critical stage dominates the distribution_. We find that widening beyond the point where the network's ReLUs align with their criticality order does not significantly alter their relative distribution (Figure 8(c)).

Table 4: A significant accuracy boost (on CIFAR-100) is achieved when ReLU-Thinning is employed prior to SNL, even when the baseline ReLU-Thinned models are less accurate. \(\Delta=\) Acc(w/ Th.) \(-\) Acc(Vanilla).

Table 3: A case study to investigate the capacity-criticality-tradeoff: three iso-ReLU ResNet18 models having distinct FLOPs counts and ReLU distributions, achieved by altering the channel allocation per stage.

Figure panels: (a) DeepReDuce at iso-ReLU; (b)-(d) SNL at iso-ReLU.

**ReLU equalization and formation of HybReNet:** As illustrated in Figure 7, the ReLU-equalization step redistributes the network's ReLUs in their criticality order, i.e., the most (least) critical stage has the highest (lowest) fraction of the network's ReLUs.
This is performed using an iterative process, as outlined in Algorithm 1, where in each iteration the relative distribution of ReLUs in two stages is rearranged into their criticality order by altering the network design hyperparameters. We select the sets of minimum (\(\alpha\), \(\beta\), \(\gamma\)) values required for ReLU equalization and obtain four distinct HRN networks: HRN-5x5x3x, HRN-5x7x2x, HRN-6x6x2x, and HRN-7x5x2x (see Appendix A). Figure 7 shows the channel allocation, after ReLU-equalization, in the successive stages of HRN-5x5x3x: fewer channels are allocated to the initial stages and more to the deeper ones, compared to the input network.

**Network design for efficient PI at lower ReLU counts:** To enhance PI efficiency at lower ReLU counts, Stage1 needs to dominate the distribution of ReLUs (see Observation 3); however, ReLU equalization in HRNs leaves Stage1 with the lowest proportion of the network's ReLUs. To address this, the \(\alpha\) value in HRNs is reduced to 2, as a decreasing \(\alpha\) value results in an increased proportion of Stage1 ReLUs. Consequently, HRN-2x5x3x, HRN-2x5x2x, HRN-2x6x2x, and HRN-2x7x2x have Stage1 dominating the network's ReLU distribution (see Table 9), and the distribution of ReLUs in all but Stage1 follows their criticality order. Figure 7 shows the channel reallocation in HRN-2x5x3x after considering the Capacity-Criticality-Tradeoff: fewer channels are allocated even in the deeper stages, which results in a significant FLOPs reduction of up to \(\sim\)**45\(\times\)**.

**ReLU-reuse (Re2):** We propose _ReLU-reuse_ to drop ReLUs selectively from all but a contiguous fraction of the feature maps in a layer. Inspired by [32, 33], the feature maps of the layer are divided into \(N\) groups and ReLUs are applied only to the last group (Figure 9). Empirically, increasing the value of \(N\) results in a significant accuracy loss, despite a \(1\times 1\) convolution being employed for cross-channel interaction. This is likely due to the loss of cross-channel information arising from a greater number of divisions in the feature maps (see our ablation study in Table 14, Appendix G).

Figure 8: Impact of the network's width expansion on the ReLU distribution: for ResNet18-based BaseCh (a), StageCh (b), and the proposed HRN network (c). Once the HRN network's ReLUs align in their criticality order (at 5,5,3), the relative distribution of ReLUs remains stable with increasing \(\alpha\) values.

Figure 7: The DeepReShape network redesigning pipeline. ReLU criticality-aware strategic allocation of channels (gray boxes) outputs FLOPs-balanced, ReLU-efficient baseline networks for various ReLU counts (blue boxes). Numbers in green denote the criticality order (Stage3 is most critical).

To address this issue, we devise a mechanism that decouples the number of divisions in the feature maps from the ReLU reduction factor \(N\). Precisely, one-fourth of the channels are utilized for feature reuse, an \(N\)th fraction of the feature maps is activated using ReLUs, and the remaining feature maps are processed solely with convolution operations, resulting in only three groups. It is important to note that using the ReLUs in the last group of feature maps increases the effective receptive field, as those neurons can take into account a greater portion of the input feature maps through the skip connections.
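A small sketch of our reading of this three-group design (an illustration under stated assumptions, not the authors' released code; the class name, group ordering, and 3x3 convolutions are our choices):

```python
import torch
import torch.nn as nn

class ReLUReuse(nn.Module):
    """Three-group ReLU-reuse block: of C channels, one quarter is
    passed through for feature reuse, a 1/N fraction is convolved and
    ReLU-activated, and the rest are convolved without activation."""

    def __init__(self, channels, n=4):
        super().__init__()
        self.c_reuse = channels // 4              # identity / feature-reuse group
        self.c_relu = channels // n               # the only ReLU-activated group
        self.c_lin = channels - self.c_reuse - self.c_relu
        self.conv_relu = nn.Conv2d(self.c_relu, self.c_relu, 3, padding=1)
        self.conv_lin = nn.Conv2d(self.c_lin, self.c_lin, 3, padding=1)

    def forward(self, x):
        x_reuse, x_lin, x_relu = torch.split(
            x, [self.c_reuse, self.c_lin, self.c_relu], dim=1)
        out_relu = torch.relu(self.conv_relu(x_relu))  # ReLUs on 1/N of the maps
        out_lin = self.conv_lin(x_lin)                 # convolution only, no ReLU
        return torch.cat([x_reuse, out_lin, out_relu], dim=1)

x = torch.randn(1, 64, 8, 8)
y = ReLUReuse(64, n=8)(x)   # only 64 // 8 = 8 channels incur ReLU cost
```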
## 5 Experimental Results

**Analysis of HybReNets Pareto points:** Figure 1 shows that HybReNet advances the ReLU-accuracy Pareto front with a substantial reduction in FLOPs count, a factor overlooked in prior PI-specific network optimization. We now present a detailed analysis of the network configurations and ReLU optimization steps, and quantify their benefits for ReLU and FLOPs reduction. We use the ResNet18-based HRN-5x5x3x for the ReLU-accuracy comparison with SOTA PI methods in Figure 1, as its FLOPs efficiency is superior to the other HRNs (see Table 16). The key takeaway from Table 5 is that tailoring the network features to the PI constraints significantly reduces FLOPs along with ReLUs. Specifically, lowering the \(\alpha\) value and base channel count leads to **23.6\(\times\)** fewer FLOPs in HRN-2x5x3x(\(m\)=8) compared to HRN-5x5x3x(\(m\)=16). Moreover, the criticality-aware ReLU distribution in HRNs simplifies the complexity of the ReLU optimization steps to \(\mathcal{O}(1)\), as opposed to the iterative steps in DeepReDuce that lead to \(\mathcal{O}(D)\) complexity for a \(D\)-stage network. In particular, we apply ReLU Culling only to Stage1 if it dominates the network's ReLU distribution, as in HRNs with \(\alpha\)=2, and employ ReLU-Thinning for the remaining stages. A further reduction in ReLU count is achieved by implementing ReLU-reuse with a suitable reduction factor, as shown in Table 5. Finally, an accuracy boost is achieved by employing DKD [35], as the ReLU-reduced models greatly benefit from decoupling the target and non-target class distillation. Note that we exclusively employ coarse-grained ReLU optimization steps for HRNs, based on the observation that fine-grained ReLU optimization techniques underperform when the ReLU distribution is altered in classical networks (see Figure 6). Therefore, fine-grained ReLU optimization fails to take advantage of the increased network complexity per ReLU unit in HRNs and remains subpar compared to its performance on classical networks. For an in-depth discussion, see Appendix C.4.

**HybReNets outperform the state-of-the-art in private inference:** Table 6 presents competing design points for SENet [21] and SNL [20], and we select HybReNet points offering both accuracy and latency benefits for a fair comparison. The runtime breakdown is presented as homomorphic (HE) latency [1; 2; 3], arising from linear operations (convolution and fully-connected layers), and garbled-circuit (GC) latency [36; 24], resulting from non-linear (ReLU) operations [9; 10; 13]. On CIFAR-100, SENet requires 300K ReLUs and 2461M FLOPs to reach 80.54% accuracy, whereas HRN-5x5x3x achieves 80.86% accuracy with only 163K ReLUs and 1055M FLOPs, providing a 1.8\(\times\) ReLU and 2.3\(\times\) FLOPs saving. Similarly, at 25K ReLUs, our approach achieves a 2.1% accuracy gain with a 12.5\(\times\) FLOPs reduction, thereby saving 5.2\(\times\) runtime. Even at an extremely low ReLU count of 13K, HRN is 1.7% more accurate and achieves a 2.2\(\times\) runtime saving compared to SNL. On TinyImageNet, HybReNets outperform SENet at both 300K and 142K ReLUs, improving runtime by 1.7\(\times\) and 8.7\(\times\), respectively. Compared to SNL at 489K ReLUs, HybReNets are 3.2% (1.7%) more accurate with a 1.8\(\times\) (2.8\(\times\)) reduction in runtime.
At lower ReLU counts of 100K and 59K, HybReNets match the accuracy of SNL and achieve a 12.4\(\times\) and 3.1\(\times\) FLOPs reduction, which results in an 8.7\(\times\) and 2.8\(\times\) runtime improvement, respectively.

\begin{table} \begin{tabular}{c c|c c c|c c|c c|c} \hline HybReNet & \(m\) & Culled & Thinned & Re2 & \#ReLU & \#FLOPs & KD [34] & DKD [35] & Acc./ReLU \\ \hline 5x5x3x & 16 & NA & S1+S2+S3+S4 & NA & 163.3K & 1055.4M & 79.34 & 80.86 & 0.50 \\ 2x5x3x & 32 & S1 & S2+S3+S4 & NA & 104.4K & 714.1M & 77.63 & 79.96 & 0.77 \\ 2x5x3x & 16 & S1 & S2+S3+S4 & NA & 52.2K & 178.5M & 74.98 & 77.14 & 1.48 \\ 2x5x3x & 8 & S1 & S2+S3+S4 & NA & 26.1K & 44.6M & 70.36 & 72.65 & 2.78 \\ 2x5x3x & 16 & S1 & S2+S3+S4 & 4 & 13.1K & 121.6M & 67.30 & 68.25 & 5.23 \\ 2x5x3x & 16 & S1 & S2+S3+S4 & 8 & 6.5K & 130.5M & 62.68 & 63.29 & 9.70 \\ 2x5x3x & 16 & S1 & S2+S3+S4 & 16 & 3.2K & 137.2M & 56.24 & 56.33 & 17.26 \\ \hline \end{tabular} \end{table} Table 5: Network configurations and ReLU optimization steps used for the Pareto points in Figure 1, on CIFAR-100. Re2 denotes ReLU-reuse, used for achieving very low ReLU counts; the accuracy columns report top-1 accuracy (%) with KD [34] and DKD [35].

Our primary insight from Table 6 is that a FLOPs reduction does not inherently guarantee a proportional reduction in HE latency, whereas a direct correlation exists between ReLU reduction and GC latency savings. In particular, a \(\sim\)12.5\(\times\) FLOPs reduction translates to a 5.2\(\times\) and 8.7\(\times\) latency reduction on CIFAR-100 and TinyImageNet, respectively. This is due to the fact that HE latency has an intricate dependency on the input/output packing [8; 37], rotational complexity [38; 39; 40], and slot utilization [41]. We refer the reader to [8; 10] for details.

**Generality case study on ResNet34:** We select ResNet34 for the DeepReShape generality study for two key reasons: (1) its consistent use as a case study in prior PI-specific network optimization studies [17; 20; 21], and (2) its stage compute ratio (\(\phi_{1}\)=3, \(\phi_{2}\)=4, \(\phi_{3}\)=6, and \(\phi_{4}\)=3) distinguishes it from ResNet18 and results in different sets of HRN networks, HRN-4x6x3x and HRN-4x9x2x, upon applying Algorithm 1. We use HRN-4x6x3x for the comparison with SOTA in Table 7. HybReNet advances the ReLU-accuracy Pareto front on both CIFAR-100 and TinyImageNet, as shown in Figures 10(a, b). Table 7 quantifies the FLOPs-ReLU-Accuracy benefits along with the runtime savings. On CIFAR-100, compared to SOTA, HybReNet improves runtime by 3.1\(\times\) with a significant gain in accuracy: 9.8%, 7.2%, 5.9%, and 2.1% at 15K, 25K, 30K, and 50K ReLUs (respectively). Further, on TinyImageNet, SNL requires 300K ReLUs and 4646M FLOPs to reach 64% accuracy, whereas HybReNet matches this accuracy with 8.8\(\times\) fewer FLOPs, leading to a runtime improvement of 6.3\(\times\). Conclusively, this highlights the effectiveness of DeepReShape and validates its generality.

**HybReNets outperform SOTA vision models ConvNeXt and RegNet:** We select the SOTA vision models ConvNeXt-V2 [42] and RegNet [31] for our comparative analysis with HybReNets, as these models possess distinctive depth/width hyperparameters compared to ResNet (Appendix E.4). First, we compare the baseline RegNet-X models with (ResNet18-based) HybReNets on CIFAR-100, without applying any ReLU-optimization steps.
Results are shown in Figure 10(c), where the HRNs are evaluated with \(m\in\{16,32,64\}\). All HRNs achieve a substantial reduction in ReLU count at iso-accuracy. For instance, to achieve accuracies of 78.26% and 80.63%, the RegNet-X models require 1460K and 6544K ReLUs, respectively, while HRN-5x5x3x requires only 343K and 1372K ReLUs, a 4.3\(\times\) and 4.7\(\times\) ReLU reduction, respectively.

Table 6: Comparison of HybReNet with the state-of-the-art in private inference, SENet [21] and SNL [20]. HybReNet exhibits superior ReLU and FLOPs efficiency and achieves a substantial reduction in latency. #Re and #FL denote the ReLU and FLOPs counts; Acc. is top-1 accuracy; Lat. is the runtime for one private inference, including the homomorphic (HE) and garbled-circuit (GC) components.
Now, we compare the ConvNeXt-V2 models with HybReNets on TinyImageNet, after employing the ReLU optimization steps. The ReLU-accuracy Pareto front is shown in Figure 10(b), with a detailed comparison outlined in Table 7. The competing HRNs achieve 1.3\(\times\) to 1.7\(\times\) ReLU savings and a 1.4\(\times\) to 2.5\(\times\) FLOPs reduction, which results in 1.3\(\times\) to 2.3\(\times\) runtime improvements.

**Sensitivity study and analysis of networks produced by DeepReShape:** We analyze the impact of each stagewise channel multiplication factor (\(\alpha\), \(\beta\), \(\gamma\)) on the network's ReLU and FLOPs efficiency using a sensitivity analysis. With ResNet18 configured with \(m\)=16, we systematically vary one factor at a time, starting from 2, while the other factors are held constant at 2. We observe that augmenting the \(\alpha\) and \(\beta\) values improves ReLU efficiency; notably, the latter improves performance marginally more than the former until a saturation point is reached. On the other hand, FLOPs efficiency is most effectively improved by augmenting \(\alpha\), outperforming \(\beta\) enhancements, while augmenting the \(\gamma\) values yields the worst FLOPs efficiency. This suggests that the FLOPs growth in the deeper layers of StageCh networks is inconsequential. Conclusively, higher \(\alpha\) and \(\beta\) values with a _restrictive_ \(\gamma\) value are desirable for the FLOPs-ReLU-Accuracy balance. We find that ReLU equalization in HRNs confines the \(\gamma\) values (see Appendix A). Precisely, all four HRNs produced by the DeepReShape method (HRN-5x5x3x, HRN-5x7x2x, HRN-6x6x2x, and HRN-7x5x2x) possess higher \(\alpha\) and \(\beta\) values, while the \(\gamma\) values are restricted to \(\gamma<4\). Thus, the higher \(\alpha\) and \(\beta\) values in HRNs boost ReLU efficiency, and a lower \(\gamma\) value restricts the FLOPs growth in deeper layers, promoting FLOPs efficiency.

## 6 Related Work

**PI-specific network optimization:** Delphi [9] and SAFENet [15] substitute ReLUs with low-degree polynomials, while DeepReDuce [17] is a coarse-grained ReLU optimization that drops ReLUs layerwise. SNL [20] and SENet [21] are fine-grained ReLU optimizations that drop ReLUs pixelwise. CryptoNAS [14] and Sphynx [19] use neural architecture search and employ a constant number of ReLUs per layer to design ReLU-efficient networks, disregarding the FLOPs implications. In contrast, our approach achieves ReLU and FLOPs efficiency simultaneously. We refer the reader to [43] for detailed HE- and GC-specific optimizations for private inference, and to Appendix I for additional related work.
Table 7: ResNet34-based HybReNets outperform the SOTA in PI.

## 7 Conclusion

In this work, we develop the DeepReShape method to achieve ReLU and FLOPs efficiency simultaneously, and design a novel family of networks named HybReNet. We further study the essential network attributes enabling PI efficiency over a wide range of ReLU counts. Our findings demonstrate that distinct network attributes are required for efficient PI at higher and lower ReLU counts.

## 8 Checklist

**Broader impact:** Privacy-preserving computation often necessitates significant resources, such as memory and compute time.
For instance, the use of the garbled-circuit technique alone can require hundreds of gigabytes of memory, and homomorphic computations can take hours for a single private inference. This demand for resources highlights the necessity of efficient optimization strategies. Previously, a line of research on specialized hardware architectures and protocol optimizations has been put forward to address these overheads. However, these approaches come with their own limitations: the former presents sustainability issues [44], and the latter can introduce new security vulnerabilities and lacks backward compatibility. Contrastingly, algorithmic enhancements can be effectively deployed across a variety of hardware platforms and security protocols. In our research, we demonstrate that a substantial reduction in runtime, \(\sim\)(5\(\times\) to 10\(\times\)), can be achieved simply by strategically allocating channels in existing classical networks and employing straightforward ReLU optimization steps. Importantly, this decrease does not depend on specific network architectures or specialized hardware, thus broadening the potential impact of our algorithmic optimization work.

**Limitations:** Achieving a specific ReLU count with HRNs is challenging due to the use of coarse-grained ReLU optimization steps, which are influenced by the base channel counts and the stagewise channel multiplication factors. The difference between fine-grained and coarse-grained ReLU optimization lies in their performance and their configurability to a target ReLU count. Fine-grained ReLU optimization allows an independent target ReLU count as input, while coarse-grained ReLU optimization adjusts based on the network's ReLU count. However, when applied to HRNs, fine-grained ReLU optimization has been found to underperform compared to the coarse-grained approach.

**Reproducibility statement:** We have provided all the implementation-specific details in Appendix H. Further, the details of the network architectures can be found in Appendix J. Also, the details of the ReLU optimization steps are presented in Table 5, Table 11, and Table 12.

## Acknowledgment

We would like to thank Karthik Garimella for his assistance in computing the runtime (HE and GC latency) for private inference. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA), under the Data Protection in Virtual Environments (DPRIVE) program, contract HR0011-21-9-0003. The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
2310.01267
Cooperative Graph Neural Networks
Graph neural networks are popular architectures for graph machine learning, based on iterative computation of node representations of an input graph through a series of invariant transformations. A large class of graph neural networks follow a standard message-passing paradigm: at every layer, each node state is updated based on an aggregate of messages from its neighborhood. In this work, we propose a novel framework for training graph neural networks, where every node is viewed as a player that can choose to either 'listen', 'broadcast', 'listen and broadcast', or to 'isolate'. The standard message propagation scheme can then be viewed as a special case of this framework where every node 'listens and broadcasts' to all neighbors. Our approach offers a more flexible and dynamic message-passing paradigm, where each node can determine its own strategy based on their state, effectively exploring the graph topology while learning. We provide a theoretical analysis of the new message-passing scheme which is further supported by an extensive empirical analysis on a synthetic dataset and on real-world datasets.
Ben Finkelshtein, Xingyue Huang, Michael Bronstein, İsmail İlkan Ceylan
2023-10-02T15:08:52Z
http://arxiv.org/abs/2310.01267v2
# Cooperative Graph Neural Networks ###### Abstract Graph neural networks are popular architectures for graph machine learning, based on iterative computation of node representations of an input graph through a series of invariant transformations. A large class of graph neural networks follow a standard message-passing paradigm: at every layer, each node state is updated based on an aggregate of messages from its neighborhood. In this work, we propose a novel framework for training graph neural networks, where every node is viewed as a _player_ that can choose to either 'listen', 'broadcast', 'listen and broadcast', or 'isolate'. The standard message propagation scheme can then be viewed as a special case of this framework where every node 'listens and broadcasts' to all neighbors. Our approach offers a more flexible and dynamic message-passing paradigm, where each node can determine its own strategy based on its state, effectively exploring the graph topology while learning. We provide a theoretical analysis of the new message-passing scheme which is further supported by an extensive empirical analysis on a synthetic dataset and on real-world datasets. ## 1 Introduction Graph neural networks (GNNs) (Scarselli et al., 2009; Gori et al., 2005) are a class of deep learning architectures for learning on graph-structured data. Their success in various graph machine learning tasks (Shlomi et al., 2021; Duvenaud et al., 2015; Zitnik et al., 2018) has led to a surge of different architectures (Kipf and Welling, 2017; Xu et al., 2019; Velickovic et al., 2018; Hamilton et al., 2017; Li et al., 2016). GNNs are based on an iterative computation of node representations of an input graph through a series of invariant transformations. Gilmer et al. (2017) showed that the vast majority of GNNs can be implemented through _message-passing_, where the fundamental idea is to update each node's representation based on an aggregate of messages flowing from the node's neighbors. The message-passing paradigm has been very influential in graph ML, but it also comes with well-known limitations related to the information flow on a graph, pertaining to _long-range_ dependencies (Dwivedi et al., 2022). In order to receive information from \(k\)-hop neighbors, a message-passing neural network needs at least \(k\) layers. In many types of graphs, this typically implies an exponential growth of a node's receptive field. The growing amount of information needs then to be compressed into fixed-sized node embeddings, possibly leading to information loss, referred to as _over-squashing_ (Alon and Yahav, 2021). **Motivation.** Our goal is to generalize the message-passing scheme by allowing each node to decide how to propagate information _from_ or _to_ its neighbors, thus enabling a more flexible flow of information. As a motivating example, consider the graph depicted in Figure 1 and suppose that, at every layer, the black node \(w\) needs information _only_ from the neighbors which have a yellow neighbor (node \(v\)), and hence only information from a subgraph (highlighted with gray color). Such a scenario falls outside the ability of standard message-passing schemes because there is no mechanism to condition the propagation of information on the two-hop information. The information flow becomes more complex if the nodes have different choices across layers, since we can no longer view the process as merely focusing on a subgraph. This kind of message-passing is then _dynamic_ across layers.
Figure 1: Node \(w\) listens only to \(v\). Applying the same choice iteratively, \(w\) receives information from a subgraph marked in gray. Consider the example depicted in Figure 2: the top row shows the information flow relative to the node \(u\) across three layers, and the bottom row shows the information flow relative to the node \(v\) across three layers. Node \(u\) listens to every neighbor in the first layer, only to \(v\) in the second layer, and to nodes \(s\) and \(r\) in the last layer. On the other hand, node \(v\) listens to node \(w\) for the first two layers, and to node \(u\) in the last layer. **Main idea.** In the above examples, for each node, we need to learn whether or not to listen to a particular node in the neighborhood. To achieve this, we regard each node as a _player_ that can take the following actions in each layer:

* Standard (S): Broadcast to neighbors that listen _and_ listen to neighbors that broadcast.
* Listen (L): Listen to neighbors that broadcast.
* Broadcast (B): Broadcast to neighbors that listen.
* Isolate (I): Neither listen nor broadcast, effectively isolating the node.

The case where all nodes perform the action Standard corresponds to the standard message-passing scheme used in GNNs. Conversely, having all the nodes Isolate corresponds to removing all the edges from the graph, implying node-wise predictions. The interplay between these actions and the ability to change them _locally_ and _dynamically_ makes the overall approach richer and allows us to decouple the input graph from the computational one and incorporate directionality into message-passing: a node can only listen to those neighbors that are currently broadcasting, and vice versa. We can emulate the example from Figure 2 by making \(u\) choose the actions \(\langle\mathrm{L},\mathrm{L},\mathrm{S}\rangle\), \(v\) and \(w\) the actions \(\langle\mathrm{S},\mathrm{S},\mathrm{L}\rangle\), and \(s\) and \(r\) the actions \(\langle\mathrm{S},\mathrm{I},\mathrm{S}\rangle\). **Contributions.** In this paper, we develop a new class of architectures, dubbed _cooperative graph neural networks_ (Co-GNNs), where every node in the graph is viewed as a player that can perform one of the aforementioned actions. Co-GNNs comprise two jointly trained cooperating message-passing neural networks: an _environment network_ \(\eta\) (for solving the given task), and an _action network_ \(\pi\) (for choosing the best actions). Our contributions can be summarized as follows:

* We propose a novel, flexible message-passing mechanism for graph neural networks, which leads to Co-GNN architectures that effectively explore the graph topology while learning (Section 4).
* We provide a detailed discussion on the properties of Co-GNNs (Section 5.1) and show that they are more expressive than the 1-dimensional Weisfeiler-Leman algorithm (Section 5.2), and, more importantly, better suited for long-range tasks due to their adaptive nature (Section 5.3).
* Empirically, we focus on Co-GNNs with basic action and environment networks to carefully assess the virtue of the new message-passing paradigm. We first experimentally validate the strength of our approach on a synthetic task (Section 6.1). Afterwards, we conduct experiments on real-world datasets, and observe that Co-GNNs always improve compared to their baseline models, and further yield multiple state-of-the-art results (Sections 6.2 and 6.3).

We complement these with further experiments reported in the Appendix.
Importantly, we illustrate the dynamic and adaptive nature of Co-GNNs (Appendix B) and also provide experiments to evaluate Co-GNNs on long-range tasks (Appendix D). Proofs of the technical results and additional experimental details can be found in the Appendix. Figure 2: Example information flow for two nodes \(u,v\). **Top**: information flow relative to \(u\) across three layers. Node \(u\) listens to every neighbor in the first layer, but only to \(v\) in the second layer, and only to \(s\) and \(r\) in the last layer. **Bottom**: information flow relative to \(v\) across three layers. The node \(v\) listens only to \(w\) in the first two layers, and only to \(u\) in the last layer. ## 2 Background **Graph neural networks.** We consider simple, undirected attributed graphs \(G=(V,E,\mathbf{X})\), where \(\mathbf{X}\in\mathbb{R}^{|V|\times d}\) is a matrix of (input) node features, and \(\mathbf{x}_{v}\in\mathbb{R}^{d}\) denotes the feature of a node \(v\in V\). We focus on _message-passing neural networks (MPNNs)_ (Gilmer et al., 2017) that encapsulate the vast majority of GNNs. An MPNN updates the initial node representations \(\mathbf{h}_{v}^{(0)}=\mathbf{x}_{v}\) of each node \(v\) for \(0\leq\ell\leq L-1\) iterations based on its own state and the state of its neighbors \(\mathcal{N}_{v}\) as: \[\mathbf{h}_{v}^{(\ell+1)}=\phi^{(\ell)}\left(\mathbf{h}_{v}^{(\ell)},\psi^{(\ell)}\left(\mathbf{h}_{v}^{(\ell)},\{\!\!\{\mathbf{h}_{u}^{(\ell)}\mid u\in\mathcal{N}_{v}\}\!\!\}\right)\right),\] where \(\{\!\!\{\cdot\}\!\!\}\) denotes a multiset and \(\phi^{(\ell)}\) and \(\psi^{(\ell)}\) are differentiable _update_ and _aggregation_ functions, respectively. We denote by \(d^{(\ell)}\) the dimension of the node embeddings at iteration (layer) \(\ell\). The final representations \(\mathbf{h}_{v}^{(L)}\) of each node \(v\) can be used for predicting node-level properties or they can be pooled to form a graph embedding vector \(\mathbf{z}_{G}^{(L)}\), which can be used for predicting graph-level properties. The pooling often takes the form of simple averaging, summation, or element-wise maximum. In this paper, we largely focus on the basic MPNNs of the following form: \[\mathbf{h}_{v}^{(\ell+1)}=\sigma\left(\mathbf{W}_{s}^{(\ell)}\mathbf{h}_{v}^{(\ell)}+\mathbf{W}_{n}^{(\ell)}\psi\left(\{\!\!\{\mathbf{h}_{u}^{(\ell)}\mid u\in\mathcal{N}_{v}\}\!\!\}\right)\right),\] where \(\mathbf{W}_{s}^{(\ell)}\) and \(\mathbf{W}_{n}^{(\ell)}\) are \(d^{(\ell)}\times d^{(\ell+1)}\) learnable parameter matrices acting on the node's self-representation and on the aggregated representation of its neighbors, respectively, \(\sigma\) is a non-linearity, and \(\psi\) is either the _mean_ or _sum_ aggregation function. We refer to the architecture with mean aggregation as MeanGNNs and to the architecture with sum aggregation as SumGNNs (Hamilton, 2020). We also consider prominent models such as GCN (Kipf and Welling, 2017) and GIN (Xu et al., 2019). **Straight-through Gumbel-softmax estimator.** In our approach, we rely on an action network for predicting categorical actions for the nodes in the graph, which is not differentiable and poses a challenge for gradient-based optimization. One prominent approach to address this is given by the Gumbel-softmax estimator (Jang et al., 2017; Maddison et al., 2017) which effectively provides a differentiable, continuous approximation of discrete action sampling. Consider a finite set \(\Omega\) of actions.
We are interested in learning a categorical distribution over \(\Omega\), which can be represented in terms of a probability vector \(\mathbf{p}\in\mathbb{R}^{|\Omega|}\), whose elements store the probabilities of different actions. Let us denote by \(\mathbf{p}(a)\) the probability of an action \(a\in\Omega\). Gumbel-softmax is a special reparametrization trick that estimates the categorical distribution \(\mathbf{p}\in\mathbb{R}^{|\Omega|}\) with the help of a Gumbel-distributed vector \(\mathbf{g}\in\mathbb{R}^{|\Omega|}\), which stores an i.i.d. sample \(\mathbf{g}(a)\sim\textsc{Gumbel}(0,1)\) for each action \(a\). Given a categorical distribution \(\mathbf{p}\) and a temperature parameter \(\tau\), Gumbel-softmax scores can be computed as follows: \[\mathrm{Gumbel-softmax}\left(\mathbf{p};\tau\right)=\frac{\exp\left((\log(\mathbf{p})+\mathbf{g})/\tau\right)}{\sum_{a\in\Omega}\exp\left((\log(\mathbf{p}(a))+\mathbf{g}(a))/\tau\right)}\] As the softmax temperature \(\tau\) decreases, the resulting vector tends to a _one-hot_ vector. The straight-through Gumbel-softmax estimator utilizes the Gumbel-softmax estimator during the backward pass only (for a differentiable update), while during the forward pass, it employs ordinary sampling. ## 3 Related work Most GNNs used in practice are instances of MPNNs (Gilmer et al., 2017) based on the message-passing approach, which has roots in classical GNN architectures (Scarselli et al., 2009; Gori et al., 2005) and their modern variations (Kipf and Welling, 2017; Xu et al., 2019; Velickovic et al., 2018; Hamilton et al., 2017; Li et al., 2016). Despite their success, MPNNs have some known limitations. First of all, their expressive power is upper bounded by the 1-dimensional Weisfeiler-Leman graph isomorphism test (1-WL) (Xu et al., 2019; Morris et al., 2019) in that MPNNs cannot distinguish any pair of graphs which cannot be distinguished by 1-WL. This drawback has motivated the study of more expressive architectures, based on higher-order graph neural networks (Morris et al., 2019; Maron et al., 2019; Keriven and Peyre, 2019), subgraph sampling approaches (Bevilacqua et al., 2022; Thiede et al., 2021), lifting graphs to higher-dimensional topological spaces (Bodnar et al., 2021), enriching the node features with unique identifiers (Bouritsas et al., 2022; Loukas, 2020; You et al., 2021) or random features (Abboud et al., 2021; Sato et al., 2021). Second, MPNNs generally perform poorly on long-range tasks due to their information propagation bottlenecks (Li et al., 2018; Alon and Yahav, 2021). This motivated approaches based on rewiring the input graph (Klicpera et al., 2019; Topping et al., 2022; Karhadkar et al., 2023) by connecting relevant nodes and shortening propagation distances to minimize bottlenecks, or designing new message-passing architectures that act on distant nodes directly, e.g., using shortest-path distances (Abboud et al., 2022; Ying et al., 2021). Lately, there has been a surging interest in the advancement of Transformer-based approaches for graphs (Ma et al., 2023; Ying et al., 2021; Yun et al., 2019; Kreuzer et al., 2021; Dwivedi and Bresson, 2021), which can encompass complete node connectivity beyond the local information classical MPNNs capture, which in return, allows for more effective modeling of long-range interactions. Finally, classical message passing updates the nodes in a synchronous manner, which does not allow the nodes to react to messages from their neighbors individually.
This has been recently argued as yet another limitation of classical message passing from the perspective of algorithmic alignment (Faber and Wattenhofer, 2022). Through a new message-passing scheme, our work presents new perspectives on these limitations by dynamically changing the information flow depending on the task, resulting in more flexible architectures than the classical MPNNs, while also allowing asynchronous message passing across nodes. Our work is related to the earlier work of Lai et al. (2020), where the goal is to update each node using a potentially different number of layers, which is achieved by learning the optimal aggregation depth for each node through a reinforcement learning approach. Co-GNNs are orthogonal to this study both in terms of the objectives and the approach (as detailed in Section 5). ## 4 Cooperative graph neural networks Co-GNNs view each node in a graph as a _player_ of a multiplayer environment, where the state of each player is given in terms of the representation (or _state_) of its corresponding node. In this environment, every node is updated following a two-stage process. In the first stage, each node chooses an action from the set of actions given its current state and the states of its neighboring nodes. In the second stage, every node state gets updated based on its current state and the states of a _subset_ of the neighboring nodes, as determined by the actions in the first stage. As a result, every node can determine how to propagate information from or to its neighbors. Formally, a Co-GNN \((\pi,\eta)\) architecture is given in terms of two cooperating GNNs: (i) an action network \(\pi\) for choosing the best actions, and (ii) an environment network \(\eta\) for updating the node representations. A Co-GNN layer updates the representations \(\mathbf{h}_{v}^{(\ell)}\) of each node \(v\) as follows. First, an action network \(\pi\) predicts, for each node \(v\), a probability distribution \(\mathbf{p}_{v}^{(\ell)}\in\mathbb{R}^{4}\) over the actions \(\{\mathrm{S},\mathrm{L},\mathrm{B},\mathrm{I}\}\) that \(v\) can take, given its state and the state of its neighbors \(\mathcal{N}_{v}\), as follows: \[\mathbf{p}_{v}^{(\ell)}=\pi\left(\mathbf{h}_{v}^{(\ell)},\{\!\!\{\mathbf{h}_{u}^{(\ell)}\mid u\in\mathcal{N}_{v}\}\!\!\}\right). \tag{1}\] Then, for each node \(v\), an action is sampled \(a_{v}^{(\ell)}\sim\mathbf{p}_{v}^{(\ell)}\) using the straight-through Gumbel-softmax estimator, and an environment network \(\eta\) is utilized to update the state of each node in accordance with the sampled actions, as follows: \[\mathbf{h}_{v}^{(\ell+1)}=\begin{cases}\eta^{(\ell)}\big{(}\mathbf{h}_{v}^{(\ell)},\{\!\!\{\}\!\!\}\big{)},&a_{v}^{(\ell)}=\mathrm{I}\lor\mathrm{B}\\ \eta^{(\ell)}\big{(}\mathbf{h}_{v}^{(\ell)},\{\!\!\{\mathbf{h}_{u}^{(\ell)}\mid u\in\mathcal{N}_{v},a_{u}^{(\ell)}=\mathrm{S}\lor\mathrm{B}\}\!\!\}\big{)},&a_{v}^{(\ell)}=\mathrm{L}\lor\mathrm{S}.\end{cases} \tag{2}\] This corresponds to a single layer update, and, as usual, by stacking \(L\geq 1\) layers, we obtain the representations \(\mathbf{h}_{v}^{(L)}\) for each node \(v\). In its full generality, a Co-GNN \((\pi,\eta)\) architecture can use any GNN architecture in place of the action network \(\pi\) and the environment network \(\eta\).
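To make the two-stage update concrete, the following is a minimal PyTorch sketch of a single Co-GNN layer over a dense adjacency matrix. The class and helper names are ours (not from the paper's released code), and both networks are instantiated as basic mean-aggregation MPNNs: the action network produces per-node logits over \(\{\mathrm{S},\mathrm{L},\mathrm{B},\mathrm{I}\}\), straight-through Gumbel-softmax draws hard actions, and the adjacency is masked so that an edge \(u\to v\) survives only if \(v\) listens and \(u\) broadcasts, mirroring Equation (2).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanGNNLayer(nn.Module):
    """Basic MPNN layer: h'_v = act(W_s h_v + W_n mean_{u in N(v)} h_u)."""
    def __init__(self, d_in, d_out, act=torch.relu):
        super().__init__()
        self.w_self = nn.Linear(d_in, d_out)
        self.w_neigh = nn.Linear(d_in, d_out)
        self.act = act

    def forward(self, h, adj):                    # adj[v, u] = 1 iff edge u -> v
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        agg = (adj @ h) / deg                     # mean over incoming neighbours
        return self.act(self.w_self(h) + self.w_neigh(agg))

class CoGNNLayer(nn.Module):
    S, L, B, I = 0, 1, 2, 3                       # Standard, Listen, Broadcast, Isolate

    def __init__(self, d, tau=0.1):
        super().__init__()
        self.action = MeanGNNLayer(d, 4, act=lambda x: x)  # pi: action logits, Eq. (1)
        self.env = MeanGNNLayer(d, d)                      # eta: environment network
        self.tau = tau

    def forward(self, h, adj):
        logits = self.action(h, adj)
        # Straight-through Gumbel-softmax: hard one-hot forward, soft backward.
        a = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        listens = a[:, self.S] + a[:, self.L]     # node receives messages
        broadcasts = a[:, self.S] + a[:, self.B]  # node sends messages
        # Edge u -> v survives iff v listens and u broadcasts, per Eq. (2).
        adj_eff = adj * listens.unsqueeze(1) * broadcasts.unsqueeze(0)
        return self.env(h, adj_eff)

h = torch.randn(6, 16)
adj = (torch.rand(6, 6) < 0.4).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)   # undirected input graph
out = CoGNNLayer(16)(h, adj)                          # shape: (6, 16)
```

Because the forward pass uses hard one-hot actions while gradients flow through the soft relaxation, the layer remains end-to-end differentiable; annealing \(\tau\) toward 0 makes the sampled actions increasingly decisive.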
For the sake of our study, we focus on simple models such as SumGNNs, MeanGNNs, GCN, and GIN, which are respectively denoted as \(\sum\), \(\mu\), \(*\) and \(\epsilon\). For example, we write Co-GNN\((\Sigma,\mu)\) to denote a Co-GNN architecture which uses SumGNN as its action network and MeanGNN as its environment network. Fundamentally, Co-GNNs update the node states in a fine-grained manner as formalized in Equation (2): if a node \(v\) chooses to Isolate or to Broadcast then it gets updated only based on its previous state, which corresponds to a node-wise update function. On the other hand, if a node \(v\) chooses the action Listen or Standard then it gets updated based on its previous state as well as the state of its neighbors which perform the actions Broadcast or Standard at this layer. ## 5 Model properties In this section, we provide a detailed analysis of Co-GNNs, focusing on their conceptual novelty, their expressive power, and their suitability for long-range tasks. ### Conceptual properties of the learnable message-passing paradigm **Task-specific**: Standard message-passing updates nodes based on their local neighborhood, which is completely task-agnostic. By allowing each node to listen to the information only from 'relevant' neighbors, Co-GNNs can determine a computation graph which is best suited for the target task. For example, if the task requires information only from the neighbors with a certain degree then the action network can learn to listen only to these nodes, as we experimentally validate in Section 6.1. **Directed**: The outcome of the actions that the nodes can take amounts to a special form of _'directed rewiring'_ of the input graph: an edge can be _dropped_ (e.g., if two neighbors listen without broadcasting); an edge can remain _undirected_ (e.g., if both neighbors apply the standard action); or, an edge can _become directed_ implying directional information flow (e.g., if one neighbor listens while its neighbor broadcasts). Taking this perspective, the proposed message-passing can be seen as operating on a potentially different directed graph induced by the choice of actions at every layer. Formally, given a graph \(G=(V,E)\), let us denote by \(G^{(\ell)}=(V,E^{(\ell)})\) the directed computational graphs induced by the actions chosen at layer \(\ell\), where \(E^{(\ell)}\) is the set of directed edges at layer \(\ell\). We can rewrite the update given in Equation (2) concisely as follows: \[\mathbf{h}_{v}^{(\ell+1)}=\eta^{(\ell)}\left(\mathbf{h}_{v}^{(\ell)},\{\!\!\{\mathbf{h}_{u}^{(\ell)}\mid(u,v)\in E^{(\ell)}\}\!\!\}\right).\] Consider the input graph \(H\) from Figure 3: \(u\) gets messages from \(v\) only in the first two layers, and \(v\) gets messages from \(u\) only in the last layer, illustrating a directional message-passing between these nodes. This abstraction allows for a direct implementation of Co-GNNs by simply considering the induced graph adjacency matrix at every layer. **Dynamic**: In this setup, each node learns to interact with the 'relevant' neighbors and does so only as long as they remain relevant: Co-GNNs do not operate on a pre-fixed computational graph, but rather on a learned computational graph, which is dynamic across layers. In our running example, observe that the computational graph is a different one at every layer (depicted in Figure 4). This brings advantages for the information flow as we outline in Section 5.3.
**Feature and structure based**: Standard message-passing is completely determined by the structure of the graph: two nodes with the same neighborhood get the same aggregated message. This is not necessarily the case in our setup, since the action network can learn different actions for two nodes with different node features, e.g., by choosing different actions for a _red_ node and a _blue_ node. This enables different messages for different nodes even if their neighborhoods are identical. Figure 4: The computational graph of \(H\). Figure 3: The input graph \(H\) and its computation graphs \(H^{(0)}\), \(H^{(1)}\), \(H^{(2)}\) at the respective layers. The computation graphs are a result of applying the following actions: \(\langle\operatorname{L},\operatorname{L},\operatorname{S}\rangle\) for the node \(u\); \(\langle\operatorname{S},\operatorname{S},\operatorname{L}\rangle\) for the nodes \(v\) and \(w\); \(\langle\operatorname{S},\operatorname{I},\operatorname{S}\rangle\) for the nodes \(s\) and \(r\); \(\langle\operatorname{S},\operatorname{S},\operatorname{S}\rangle\) for all other nodes. **Asynchronous**: Standard message-passing updates all nodes synchronously at every iteration, which is not always optimal as argued by Faber and Wattenhofer (2022), especially when the task requires to treat the nodes non-uniformly. By design, Co-GNNs enable asynchronous updates across nodes. **Efficient**: While being more sophisticated, the proposed message-passing algorithm is efficient in terms of runtime, as we detail in Appendix C. Co-GNNs are also parameter-efficient: they use the same policy network and, as a result, a comparable number of parameters to their baseline models. ### Expressive power of cooperative graph neural networks The environment and action networks of Co-GNN architectures are parameterized by standard MPNNs. This raises an obvious question regarding the expressive power of Co-GNN architectures: _are_ Co-GNNs _also bounded by 1-WL?_ Consider, for instance, the non-isomorphic graphs \(G_{1}\) and \(G_{2}\) depicted in Figure 5. Standard MPNNs cannot distinguish these graphs, while Co-GNNs can: **Proposition 5.1**.: _Let \(G_{1}=(V_{1},E_{1},\mathbf{X}_{1})\) and \(G_{2}=(V_{2},E_{2},\mathbf{X}_{2})\) be two non-isomorphic graphs. Then, for any threshold \(0<\delta<1\), there exists a parametrization of a Co-GNN architecture using sufficiently many layers \(L\), satisfying \(\mathbb{P}(\mathbf{z}_{G_{1}}^{(L)}\neq\mathbf{z}_{G_{2}}^{(L)})\geq 1-\delta\)._ The explanation for this result is the following: Co-GNN architectures learn, at every layer, and for each node \(u\), a probability distribution over the actions. These learned distributions are identical for two isomorphic nodes. However, the process relies on _sampling_ actions from these distributions, and clearly, the samples from identical distributions can differ. This makes Co-GNN models _invariant in expectation_, and the variance introduced by the sampling process helps to discriminate nodes that are 1-WL indistinguishable. Thus, for two nodes indistinguishable by 1-WL, there is a non-trivial probability of sampling a different action for the respective nodes, which in turn makes their direct neighborhood differ. This yields unique node identifiers (see, e.g., Loukas, 2020) with high probability and allows us to distinguish any pair of graphs assuming an injective graph pooling function (Xu et al., 2019).
Our result is analogous to GNNs with random node features (Abboud et al., 2021; Sato et al., 2021), which are more expressive than their classical counterparts. Another relation is to subgraph GNNs (Bevilacqua et al., 2022; Papp et al., 2021), which bypass the 1-WL expressiveness limitation by considering subgraphs extracted from \(G\). In our framework, the sampling of different actions has an analogous effect. Note that Co-GNNs are _not_ designed for more expressive power, and our result relies merely on variations in the sampling process, which should be noted as a limitation. We validate the stated expressiveness gain on a synthetic experiment in Appendix D. ### Dynamic message-passing for long-range tasks Long-range tasks necessitate to propagate information between distant nodes: we argue that a dynamic message-passing paradigm is highly effective for such tasks since it becomes possible to propagate only relevant task-specific information. Suppose, for instance, that we are interested in transmitting information from a source node to a distant target node: our message-passing paradigm can efficiently filter irrelevant information by learning to focus on the shortest path connecting these two nodes, hence maximizing the information flow to the target node. We can generalize this observation towards receiving information from multiple distant nodes and prove the following: **Theorem 5.2**.: _Let \(G=(V,E,\mathbf{X})\) be a connected graph with node features. For some \(k>0\), for any target node \(v\in V\), for any \(k\) source nodes \(u_{1},\ldots,u_{k}\in V\), and for any compact, differentiable function \(f:\mathbb{R}^{d^{(0)}}\times\ldots\times\mathbb{R}^{d^{(0)}}\rightarrow\mathbb{R}^{d}\), there exists an \(L\)-layer Co-GNN computing final node representations such that for any \(\epsilon,\delta>0\) it holds that \(\mathbb{P}(|\mathbf{h}_{v}^{(L)}-f(\mathbf{x}_{u_{1}},\ldots,\mathbf{x}_{u_{k}})|<\epsilon)\geq 1-\delta\)._ This means that if a property of a node \(v\) is a function of \(k\) distant nodes then Co-GNNs can approximate this function. This follows from two findings: (i) the features of \(k\) nodes can be transmitted to the source node without loss of information in Co-GNNs and (ii) the final layer of a Co-GNN architecture, e.g., an MLP, can approximate any differentiable function over \(k\) node features (Hornik, 1991; Cybenko, 1989). We validate these findings empirically on long-range interactions datasets (Dwivedi et al., 2022) in Appendix D. Figure 5: \(G_{1}\) and \(G_{2}\) are indistinguishable by a wide class of GNNs. Co-GNNs can distinguish these graphs. ### Over-squashing and over-smoothing **Over-squashing** refers to the failure of message passing to propagate information on the graph. Topping et al. (2022); Di Giovanni et al. (2023) formalized over-squashing as the insensitivity of an \(r\)-layer MPNN output at node \(v\) to the input features of a distant node \(u\), expressed through a bound on the Jacobian \(\|\partial\mathbf{h}_{v}^{(r)}/\partial\mathbf{x}_{u}\|\leq C^{r}(\hat{\mathbf{A}}^{r})_{vu}\), where \(C\) encapsulates architecture-related constants (e.g., width, smoothness of the activation function, etc.) and the normalized adjacency matrix \(\hat{\mathbf{A}}\) captures the effect of the graph. Graph rewiring techniques amount to modifying \(\hat{\mathbf{A}}\) so as to increase the upper bound and thereby reduce the effect of over-squashing.
Since the actions of every node in Co-GNNs result in an effective graph rewiring (different at every layer), the action network can choose actions that transmit the features of node \(u\in V\) to node \(v\in V\) as shown in Theorem 5.2, resulting in the maximization of the bound on the Jacobian. **Over-smoothing** refers to the tendency of node embeddings to become increasingly similar across the graph with the increase in the number of message passing layers (Li et al., 2018). Recently, Rusch et al. (2022) showed that over-smoothing can be mitigated through the gradient gating mechanism, which adaptively disables the update of a node from neighbors with similar features. Our architecture, through the choice of Broadcast or Isolate actions, allows to mimic this mechanism. ## 6 Experimental results We evaluate Co-GNNs on (i) a synthetic experiment comparing with classical MPNNs, (ii) real-world node classification datasets (Platonov et al., 2023), and (iii) real-world graph classification datasets (Morris et al., 2020). We provide a detailed analysis regarding the actions learned by Co-GNNs on different datasets in Appendix B, illustrating the task-specific nature of Co-GNNs. Finally, we report a synthetic expressiveness experiment and an experiment on long-range interactions datasets (Dwivedi et al., 2022) in Appendix D. **Implementation.** We trained and tested our model on NVidia GTX V100 GPUs with Python 3.9, Cuda 11.8, PyTorch 2.0.0, and PyTorch Geometric (Fey and Lenssen, 2019). ### Synthetic experiment on RootNeighbors **Task.** In this experiment, we compare Co-GNNs to a class of MPNNs on a new dataset: RootNeighbors. Specifically, we consider the following regression task: _given a rooted tree, predict the average of the features of root-neighbors of degree \(6\)_. This is an intricate task since it requires first identifying the neighbors of the root node with degree \(6\) and then returning the average feature of these nodes as the target value. RootNeighbors consists of trees of depth \(2\) with random node features of dimension \(d=5\). The data generation ensures that the root of each tree has at least one neighbor of degree \(6\) (detailed in Appendix E.2). One example tree is shown in Figure 6(a): the root node \(r\) has only two neighbors with degree \(6\) (\(u\) and \(v\)) and the target prediction value is \((\mathbf{x}_{u}+\mathbf{x}_{v})/2\). **Setup.** We trained GCN (Kipf and Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Velickovic et al., 2018), SumGNN, and MeanGNN as baselines and compared them to Co-GNN\((\Sigma,\Sigma)\), Co-GNN\((\mu,\mu)\) and Co-GNN\((\Sigma,\mu)\). We used the Adam optimizer and reported all hyperparameters in the appendix (Table 8). We report the Mean Absolute Error (MAE). **Results for MPNNs.** The results are presented in Table 1, which includes the random baseline: the average MAE obtained via a random prediction. All MPNNs perform poorly on this task. GCN, GAT, and MeanGNN fail to identify node degrees, making it impossible to detect nodes with a specific degree, which is crucial for the task. Figure 6: RootNeighbors example. GCN and GAT are only marginally better than the random baseline, whereas MeanGNN performs substantially better than the random baseline. The latter can be explained by the fact that MeanGNN employs a different transformation on the source node rather than treating it as a neighbor (unlike the self-loop in GCN/GAT) and this yields better average MAE.
SAGE and SumGNN use sum aggregation and can identify the node degrees, but they struggle in averaging the node features, which yields MAE results comparable to those of MeanGNN. **Results for Co-GNNs.** The ideal mode of operation for Co-GNNs would be as follows:

1. The action network chooses either the action Listen or Standard for the root node, and the action Broadcast or Standard for the root-neighbors which have degree \(6\),
2. The action network chooses either the action Listen or the action Isolate for all the remaining root-neighbors, and
3. The environment network updates the root node by averaging the features of its neighbors which are currently broadcasting.

This is depicted in Figure 6 and can be achieved with a single-layer Co-GNN assuming (1)-(3) can be accomplished. The best result is achieved by Co-GNN\((\Sigma,\mu)\), because SumGNN (as the action network) can accomplish (1) and (2), and MeanGNN (as the environment network) can accomplish (3). This model leverages the strengths of the SumGNN model and the MeanGNN model to cater to the different roles of the action and environment networks, making it the most natural Co-GNN model for the regression task. The next best model is Co-GNN\((\Sigma,\Sigma)\), which also uses SumGNN as the action network, accomplishing (1) and (2). However, it uses another SumGNN as the environment network which cannot easily mimic the averaging of the neighbor's features. Finally, Co-GNN\((\mu,\mu)\) performs weakly, since MeanGNN as an action network cannot achieve (1), hindering the performance of the whole task. Indeed, Co-GNN\((\mu,\mu)\) performs comparably to MeanGNN, suggesting that the action network is not useful in this case. To shed light on the performance of Co-GNN models, we computed the percentage of edges which are accurately retained or removed by the action network in a single-layer Co-GNN model. We observe an accuracy of 57.20% for Co-GNN\((\mu,\mu)\), 99.55% for Co-GNN\((\Sigma,\Sigma)\), and 99.71% for Co-GNN\((\Sigma,\mu)\), which empirically confirms the expected behavior of Co-GNNs. In fact, the example tree shown in Figure 6 is taken from RootNeighbors, and reassuringly, Co-GNN\((\Sigma,\mu)\) learns precisely the actions that induce the shown optimal subgraph. ### Node classification One of the strengths of Co-GNNs is their capability to utilize task-specific information propagation, which raises an obvious question: could Co-GNNs outperform the baselines on heterophilic graphs, where standard message passing is known to suffer? To answer this question, we assess the performance of Co-GNNs on heterophilic node classification datasets from (Platonov et al., 2023). **Setup.** We evaluate Co-GNN\((\Sigma,\Sigma)\) and Co-GNN\((\mu,\mu)\) on the 5 heterophilic graphs, following the 10 data splits and the methodology of Platonov et al. (2023), and report the accuracy and standard deviation for roman-empire and amazon-ratings, and mean ROC AUC and standard deviation for minesweeper, tolokers, and questions. The classical baselines GCN (Kipf and Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Velickovic et al., 2018), GAT-sep, GT (Shi et al., 2021) and GT-sep are from (Platonov et al., 2023). We use the Adam optimizer and report all hyperparameters in Appendix E.4. **Results.** All results are reported in Table 2: Co-GNNs achieve state-of-the-art results across the board, despite using relatively simple architectures as their action and environment networks.
Overall, Co-GNNs demonstrate an average accuracy improvement of 2.23% compared to all baseline methods, across all datasets, surpassing the performance of more complex models such as GT. These results are reassuring as they establish Co-GNNs as a strong method in the heterophilic setting. \begin{table} \begin{tabular}{l c} \hline \hline Model & MAE \\ \hline Random & 0.474 \\ GCN & 0.468 \\ SAGE & 0.336 \\ GAT & 0.442 \\ SumGNN & 0.370 \\ MeanGNN & 0.329 \\ \hline Co-GNN\((\Sigma,\Sigma)\) & 0.196 \\ Co-GNN\((\mu,\mu)\) & 0.339 \\ Co-GNN\((\Sigma,\mu)\) & 0.079 \\ \hline \hline \end{tabular} \end{table} Table 1: Results on RootNeighbors. Top three models are colored by First, Second, Third. ### Graph classification In this experiment, we evaluate Co-GNNs on the TUDataset (Morris et al., 2020) graph classification benchmark. **Setup.** We evaluate Co-GNN\((\Sigma,\Sigma)\) and Co-GNN\((\mu,\mu)\) on the 7 graph classification benchmarks, following the risk assessment protocol of Errica et al. (2020), and report the mean accuracy and standard deviation. The results for the baselines DGCNN (Wang et al., 2019), DiffPool (Ying et al., 2018), Edge-Conditioned Convolution (ECC) (Simonovsky and Komodakis, 2017), GIN (Xu et al., 2019), GraphSAGE (Hamilton et al., 2017) are from Errica et al. (2020). We also include ICGMM\({}_{f}\) (Castellana et al., 2022) and SPN\((k=5)\) (Abboud et al., 2022) as more recent baselines. OOR (Out of Resources) indicates extremely long training time or excessive GPU memory usage. We use the Adam optimizer and a StepLR learning rate scheduler, and report all hyperparameters in the appendix (Table 11). **Results.** Co-GNN models achieve the highest accuracy on three datasets out of six as reported in Table 3 and remain competitive on the other datasets. Co-GNNs yield these performance improvements despite using relatively simple action and environment networks, which is intriguing as Co-GNNs unlock a large design space that includes a large class of model variations. ## 8 Reproducibility statement To ensure the reproducibility of this paper, we include the Appendix with five main sections. Appendix A includes detailed proofs for the technical statements presented in the paper. Appendix E provides the data generation protocol for RootNeighbors and further details of the real-world datasets that are used in Section 6. The experimental results in the paper are reproducible with the hyperparameter settings for all results contained in Tables 8 to 11 in Appendix E.4. The code to reproduce all experiments, along with the code to generate the datasets and tasks we propose, is released at [https://anonymous.4open.science/r/CoGNN](https://anonymous.4open.science/r/CoGNN).
2301.11912
OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks
Occlusion is a prevalent and easily realizable semantic perturbation to deep neural networks (DNNs). It can fool a DNN into misclassifying an input image by occluding some segments, possibly resulting in severe errors. Therefore, DNNs planted in safety-critical systems should be verified to be robust against occlusions prior to deployment. However, most existing robustness verification approaches for DNNs are focused on non-semantic perturbations and are not suited to the occlusion case. In this paper, we propose the first efficient, SMT-based approach for formally verifying the occlusion robustness of DNNs. We formulate the occlusion robustness verification problem and prove it is NP-complete. Then, we devise a novel approach for encoding occlusions as a part of neural networks and introduce two acceleration techniques so that the extended neural networks can be efficiently verified using off-the-shelf, SMT-based neural network verification tools. We implement our approach in a prototype called OccRob and extensively evaluate its performance on benchmark datasets with various occlusion variants. The experimental results demonstrate our approach's effectiveness and efficiency in verifying DNNs' robustness against various occlusions, and its ability to generate counterexamples when these DNNs are not robust.
Xingwu Guo, Ziwei Zhou, Yueling Zhang, Guy Katz, Min Zhang
2023-01-27T18:54:00Z
http://arxiv.org/abs/2301.11912v1
# OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks ###### Abstract Occlusion is a prevalent and easily realizable semantic perturbation to deep neural networks (DNNs). It can fool a DNN into misclassifying an input image by occluding some segments, possibly resulting in severe errors. Therefore, DNNs planted in safety-critical systems should be verified to be robust against occlusions prior to deployment. However, most existing robustness verification approaches for DNNs are focused on non-semantic perturbations and are not suited to the occlusion case. In this paper, we propose the first efficient, SMT-based approach for formally verifying the occlusion robustness of DNNs. We formulate the occlusion robustness verification problem and prove it is NP-complete. Then, we devise a novel approach for encoding occlusions as a part of neural networks and introduce two acceleration techniques so that the extended neural networks can be efficiently verified using off-the-shelf, SMT-based neural network verification tools. We implement our approach in a prototype called OccRob and extensively evaluate its performance on benchmark datasets with various occlusion variants. The experimental results demonstrate our approach's effectiveness and efficiency in verifying DNNs' robustness against various occlusions, and its ability to generate counterexamples when these DNNs are not robust. ## 1 Introduction Deep neural networks (DNNs) are computer-trained _programs_ that can implement hard-to-formally-specify tasks. They have repeatedly demonstrated their potential in enabling artificial intelligence in various domains, such as face recognition [6] and autonomous driving [27]. They are increasingly being incorporated into safety-critical applications with interactive environments. To ensure the security and reliability of these applications, DNNs must be highly dependable against adversarial and environmental perturbations. This dependability property is known as _robustness_ and is attracting a considerable amount of research effort from both academia and industry, aimed at ensuring robustness via different technologies such as adversarial training [13, 28], testing [41, 33], and formal verification [34, 10, 5]. Occlusion is a prevalent kind of perturbation, which may cause DNNs to misclassify an image by occluding some segment thereof [39, 25, 8]. For instance, a "turn left" traffic sign may be misclassified as "go straight" after it is occluded by a tape, possibly resulting in traffic accidents. A similar situation may occur in face recognition, where many well-trained neural networks fail to recognize faces correctly when they are partially occluded, such as when glasses are worn [38]. A neural network is called _robust against occlusions_ if small occlusions do not alter its classification results. Generally, we wish a DNN to be robust against occlusions that appear negligible to humans. It is challenging to verify whether a DNN is robust or not on an input image if the image is occluded. On the one hand, the verification problem is non-convex due to the non-linear activation functions in DNNs. It is NP-complete even when dealing with common, fully connected feed-forward neural networks (FNNs) [19]. On the other hand, unlike existing perturbations, occlusions are challenging to encode using \(L_{p}\) norms.
Most existing robustness verification approaches assume that perturbations need to be defined by \(L_{p}\) norms and then apply approximations and abstract interpretation techniques [34, 10, 5] as part of the verification process. The semantic effect of occlusions partially alters the values of some neighboring pixels from large to small or in the inverse direction, e.g., 255 to 0, when a black occlusion occludes a white pixel. Therefore, existing techniques for perturbations in \(L_{p}\) norms are not suited to occlusion perturbations. SMT-based approaches have been shown to be an efficient approach to DNN verification [19]. They are both sound and complete, in that they always return definite results and produce counterexamples in non-robust cases. We show that, although it is straightforward to encode the occlusion robustness verification problem into SMT formulas, solving the constraints generated by this naive encoding is experimentally beyond the reach of state-of-the-art SMT solvers, due to the inclusion of a large number of piece-wise linear ReLU activation functions. Consequently, such a straightforward encoding approach cannot scale to large networks. In this paper, we systematically study the occlusion robustness verification problem of DNNs. We first formalize and prove that the problem is NP-complete for ReLU-based FNNs (see Appendix A). Then, we propose a novel approach for encoding various occlusions and neural networks together to generate new equivalent networks that can be efficiently verified using off-the-shelf SMT-based neural network verification tools such as Marabou [20]. In our encoding approach, although additional neurons and layers are introduced for encoding occlusions, the number is reasonably small and independent of the networks to be verified. The efficiency improvement of our approach comes from the fact that it significantly reduces the number of constraints introduced while encoding the occlusion and leverages the backend verification tool's optimization against the neural network structure. Furthermore, we introduce two acceleration techniques, namely input-space splitting to reduce the search space of a single verification, which can significantly improve verification efficiency, and label sorting to help verification terminate earlier. We implement a tool called OccRob with Marabou as the backend verification tool. To our knowledge, this is the first work on formally verifying the occlusion robustness of deep neural networks. To demonstrate the effectiveness and efficiency of OccRob, we evaluate it on six representative FNNs trained on two benchmark datasets. The empirical results show that our approach is effective and efficient in verifying various types of occlusions with respect to the occlusion position, size, and occluding pixel value. **Contributions.** We make the following three major contributions: (i) we propose a novel approach for encoding occlusion perturbations, by which we can leverage _off-the-shelf_ SMT-based robustness verification tools to verify the robustness of neural networks against various occlusion perturbations; (ii) we prove that the occlusion robustness verification problem is NP-complete and introduce two acceleration techniques, i.e., label sorting and input space splitting, to improve the efficiency of verification further; and (iii) we implement a tool called OccRob and conduct experiments extensively on a collection of benchmarks to demonstrate its effectiveness and efficiency. **Paper Organization.** Sec.
2 introduces preliminaries. Sec. 3 formulates the occlusion robustness verification problem and studies its complexity. Sec. 4 presents our encoding approach and acceleration techniques for the verification. Sec. 5 shows the experimental results. Sec. 6 discusses related work, and Sec. 7 concludes the paper. ## 2 Preliminaries ### Deep Neural Networks and the Robustness As shown in Fig. 1, a deep neural network consists of multiple layers. The neurons on the input layer take input values, which are computed and propagated through the hidden layers and then output by the output layer. The neurons on each layer are connected to those on the predecessor and successor layers. We only consider fully connected, feedforward networks (FNNs) [11]. Given a \(\lambda\)-layer neural network, let \(W^{(i)}\) be the weight matrix between the \((i-1)\)-th and \(i\)-th layers, and \(\mathsf{b}^{(i)}\) be the biases of the corresponding neurons, where \(i=1,2,\ldots,\lambda\). The network implements a function \(F:\mathbb{R}^{u}\rightarrow\mathbb{R}^{r}\) that is recursively defined by: \[z^{(0)}=x\qquad\text{(Layer Function)}\] \[z^{(i)}=\sigma(W^{(i)}\cdot z^{(i-1)}+\mathsf{b}^{(i)}),\ \text{for}\ i=1,\ldots,\lambda-1\qquad\text{(Network Function)}\] where \(\sigma(\cdot)\) is called an _activation function_ and \(z^{(i)}\) denotes the result of neurons at the \(i\)-th layer. For example, Fig. 1 shows a 3-layer neural network with three input neurons and two output neurons, namely, \(\lambda=3\), \(u=3\) and \(r=2\). For the sake of simplicity, we use \(\Phi_{F}(x)=\arg\max_{\ell\in L}F_{\ell}(x)\) to denote the label \(\ell\) such that the probability \(F_{\ell}(x)\) of classifying \(x\) to \(\ell\) is larger than those to other labels, where \(L\) represents the set of labels. The activation function \(\sigma\) usually can be a piece-wise Rectified Linear Unit (ReLU), \(\sigma(x)=\max(x,0)\), or S-shaped functions like Sigmoid \(\sigma(x)=\frac{1}{1+e^{-x}}\), Tanh \(\sigma(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}\), or Arctan \(\sigma(x)=\tan^{-1}(x)\). In this work, we focus on the networks that only contain ReLU activation functions, which are widely adopted in real-world applications. A neural network is called _robust_ if small perturbations to its inputs do not alter the classification result [40]. Specifically, given a network \(F\), an input \(x_{0}\) and a set \(\Omega\) of perturbed inputs of \(x_{0}\), \(F\) is called locally robust with respect to \(x_{0}\) and \(\Omega\) if \(F\) classifies all the perturbed inputs in \(\Omega\) to the same label as it does \(x_{0}\). Figure 1: A fully-connected feed-forward neural network (FNN). **Definition 1**: **(Local Robustness [16]).** _A neural network \(F:\mathbb{R}^{u}\rightarrow\mathbb{R}^{r}\) is called locally robust with respect to an input \(x_{0}\) and a set \(\Omega\) of perturbed inputs of \(x_{0}\) if \(\forall x\in\Omega,\Phi_{F}(x)=\Phi_{F}(x_{0})\) holds._ Usually, the set \(\Omega\) of perturbed inputs is defined by an \(\ell_{p}\)-norm ball around \(x_{0}\) with a radius of \(\epsilon\), i.e., \(\mathbb{B}_{p}(x_{0},\epsilon):=\{x\:|\:\|x-x_{0}\|_{p}\leq\epsilon\}\) [16, 2]. ### Occlusion Perturbation In the context of image classification networks, occlusion is a kind of perturbation that blocks the pixels in certain areas before the image is fed into the network. Existing studies showed that the classification accuracy of neural networks could be significantly decreased when the input objects are artificially occluded [23, 45].
Occlusions can have various shapes, sizes, colors, and positions. The shapes can be square, rectangle, triangle, or irregular shape. The size is measured by the number of occluded pixels. The occlusion color specifies the colors occluded pixels can take. The coloring of an occlusion can be either uniform, where all occluded pixels share the same color, or multiform, where these colors can vary in the range of \([-\epsilon,\epsilon]\), where \(\epsilon\) specifies the threshold between an occluded pixel's value and its original value. Prior studies [8, 3] showed that both the uniform and multiform occlusions could cause misclassification to neural networks. Fig. 2 shows two examples of multiform and uniform occlusions, respectively. The traffic sign for "70km/h speed limit" in Fig. 2(a) is misclassified to "30km/h" by adding a \(5\times 5\) multiform occlusion. Fig. 2(d) shows another sign, with different light conditions, where a \(3\times 3\) uniform occlusion (in Fig. 2(c)) causes the sign to be misclassified to "30km/h". The occlusion position is another aspect of defining occlusions. An occlusion can be placed precisely on the pixels of an image, or between a pixel and its neighbors. Fig. 3 shows an example, where the dots represent image pixels and the circles are the occluding pixels that will substitute the occluded ones. We say that an occlusion pixel \(\vartheta_{i^{\prime},j^{\prime}}\) at location \((i^{\prime},j^{\prime})\) surrounds an image pixel \(p_{i,j}\) at location \((i,j)\) if and only if \(|i-i^{\prime}|<1\) and \(|j-j^{\prime}|<1\). Note that \(i^{\prime},j^{\prime}\) are real numbers, representing the location where the occlusion pixel \(o\) is placed on the image. An image pixel can be occluded by the substitute occlusion pixels if the occlusion pixels surround the image pixel. Figure 3: An example occlusion on a \(5\times 5\) image at real number position. Figure 2: Two multiform and uniform occlusions to traffic signs causing misclassifications. There are at most four surrounding occlusion pixels for each image pixel, as shown in Fig. 3. Let \(\mathbb{I}_{p}\) be the set of the locations where the surrounding occlusion pixels of \(p\) are placed. After the occlusion, the value of pixel \(p_{i,j}\) is altered to the new one denoted by \(p^{\prime}_{i,j}\), which can be computed by interpolation [18, 21], such as nearest-neighbor interpolation or bi-linear interpolation, based on occlusion pixels in \(\mathbb{I}_{p}\). Besides that, we use a method based on \(L_{1}\)-distance to calculate how much a pixel is occluded. Since the \(L_{1}\)-distance of two adjacent pixels is 1, a surrounding occlusion pixel should not affect the image pixel if their \(L_{1}\)-distance is greater than 1. The formula \(\max(0,(1-|i^{\prime}-i|)+(1-|j^{\prime}-j|)-1)\) indicates how much an image pixel at \((i,j)\) is occluded by an occlusion pixel at \((i^{\prime},j^{\prime})\). For instance, an occlusion pixel at \((i^{\prime},j^{\prime})=(0.9,0.9)\) has no effect on the image pixel \((i,j)=(0,0)\) since their \(L_{1}\)-distance is larger than 1. Therefore, the occlusion factor \(s_{i,j}\) for pixel \(p\) at \((i,j)\) can be calculated based on all surrounding occlusion pixels in \(\mathbb{I}_{p}\) as: \[s_{i,j}=\max\Big{(}0,\sum_{(i^{\prime}_{0},j^{\prime})\in\mathbb{I}_{p}}(1-|j^{\prime}-j|)+\sum_{(i^{\prime},j^{\prime}_{0})\in\mathbb{I}_{p}}(1-|i^{\prime}-i|)-1\Big{)} \tag{1}\] where \((i^{\prime}_{0},j^{\prime}_{0})\) is the first element of \(\mathbb{I}_{p}\).
Notably, \(s\) is 1 for a completely occluded pixel and 0 for a pixel that is not occluded; otherwise, \(s\) takes a value in \((0,1)\). Integer-valued \((i^{\prime},j^{\prime})\) are a special case of Equation 1, in which \(s\) reduces to 0 or 1. ## 3 The Occlusion Robustness Verification Problem Let \(\mathbb{R}^{m\times n}\) be the set of images whose height is \(m\) and width is \(n\). We use \(\mathbb{N}_{1,m}\) (_resp. \(\mathbb{N}_{1,n}\)_) to denote the set of all the natural numbers ranging from 1 to \(m\) (_resp. \(n\)_). A coloring function \(\zeta:\mathbb{R}^{m\times n}\times\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) is a mapping of each pixel of an image to its corresponding color value. Given an image \(x\in\mathbb{R}^{m\times n}\), \(\zeta(x,i,j)\) defines the value to color the pixel of \(x\) at \((i,j)\). Definition 2 (Occlusion function): Given a coloring function \(\zeta\) and an occlusion \(\vartheta\) of size \(w\times h\) which is at position \((a,b)\), the occlusion function is defined as the function \(\gamma_{\zeta,w\times h}:\mathbb{R}^{m\times n}\times\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}^{m\times n}\) such that \(x^{\prime}=\gamma_{\zeta,w\times h}(x,a,b)\) if for all \(i\in\mathbb{N}_{1,n}\) and \(j\in\mathbb{N}_{1,m}\), we have: \[x^{\prime}_{i,j}=x_{i,j}-s_{i,j}\times(x_{i,j}-\zeta(x,i,j)), \tag{2}\] \[\text{where}\ \zeta(x,i,j)=\frac{\sum_{(i^{\prime},j^{\prime})\in\mathbb{I}_{x_{i,j}}}\vartheta_{i^{\prime},j^{\prime}}\sqrt{(i-i^{\prime})^{2}+(j-j^{\prime})^{2}}}{\sum_{(i^{\prime},j^{\prime})\in\mathbb{I}_{x_{i,j}}}\sqrt{(i-i^{\prime})^{2}+(j-j^{\prime})^{2}}}. \tag{3}\] \(s\) in Equation 2 is the occlusion factor for the pixel at \((i,j)\), as mentioned in Sec. 2.2. Note that when \(i^{\prime},j^{\prime}\) are integers, Equation 2 reduces to \(x^{\prime}_{i,j}=\vartheta_{i,j}\), which represents that \(x_{i,j}\) is completely occluded by the occlusion. In other words, the integer case is a special case of the real number case. Also, when the pixel at \((i,j)\) is not occluded, \(s_{i,j}=0\), and Equation 2 reduces to \(x^{\prime}_{i,j}=x_{i,j}\). Interpolation is handled by \(\zeta\), shown in Equation 3, which gives the standard form for the color of the new \(x^{\prime}_{i,j}\). A unique color value is specified for all the pixels in the occluded area for a uniform occlusion. Therefore, \(\zeta\) in Equation 3 can be reduced to \(\zeta(x,i,j)=\mu\) for some \(\mu\in[0,1]\). The coloring function in a multiform occlusion is defined as \(\zeta(x,i,j)=x_{i,j}+\mathcal{A}_{p}\) with \(\mathcal{A}_{p}\in[-\epsilon,\epsilon]\), where \(\epsilon\in\mathbb{R}\) defines the threshold that a pixel can be altered.
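To make Definition 2 concrete, here is a small NumPy sketch of the occlusion function \(\gamma_{\zeta,w\times h}\) for a patch \(\vartheta\) placed at a (possibly real-valued) position. The function names are ours, rows are indexed first for simplicity, and the color of Equation 3 is implemented exactly as written above, i.e., with distance weights; for integer positions the code reduces to plainly overwriting the occluded patch.

```python
import numpy as np

def surrounding(i, j, occ):
    """Occlusion pixels within distance < 1 along each axis of image pixel (i, j)."""
    return [(ip, jp) for (ip, jp) in occ if abs(ip - i) < 1 and abs(jp - j) < 1]

def occlusion_factor(i, j, I_p):
    """Occlusion factor s_{i,j} per Eq. (1): 1 = fully occluded, 0 = untouched."""
    if not I_p:
        return 0.0
    i0, j0 = I_p[0]                               # first element of I_p
    row = sum(1 - abs(jp - j) for (ip, jp) in I_p if ip == i0)
    col = sum(1 - abs(ip - i) for (ip, jp) in I_p if jp == j0)
    return max(0.0, row + col - 1)

def occlude(x, theta, a, b):
    """Eqs. (2)/(3): occlude image x with patch theta whose top-left is at (a, b)."""
    ph, pw = theta.shape
    occ = {(a + di, b + dj): theta[di, dj] for di in range(ph) for dj in range(pw)}
    xp = x.astype(float).copy()
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            I_p = surrounding(i, j, occ)
            s = occlusion_factor(i, j, I_p)
            if s == 0.0:
                continue                          # not occluded: x'_{i,j} = x_{i,j}
            d = [np.hypot(i - ip, j - jp) for (ip, jp) in I_p]
            if sum(d) == 0:                       # integer position: exact overlap
                zeta = occ[I_p[0]]
            else:                                 # distance-weighted color of Eq. (3)
                zeta = sum(occ[p] * dp for p, dp in zip(I_p, d)) / sum(d)
            xp[i, j] = x[i, j] - s * (x[i, j] - zeta)
    return xp
```

For a uniform occlusion one simply passes a constant patch, e.g. `occlude(x, np.full((3, 3), 0.0), 4.5, 2.5)`; a multiform occlusion corresponds to a patch whose entries deviate from the covered pixel values by at most \(\epsilon\).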
**Definition 3** (Local occlusion robustness).: _Given a DNN \(F:\mathbb{R}^{m\times n}\rightarrow\mathbb{R}^{r}\), an occlusion function \(\gamma_{\zeta,w\times h}:\mathbb{R}^{m\times n}\times\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}^{m\times n}\) with respect to coloring function \(\zeta\) and occlusion size \(w\times h\), and an input image \(x\), \(F\) is called local occlusion robust on \(x\) with \(\gamma_{\zeta,w\times h}\) if \(\Phi_{F}(x)=\Phi_{F}(\gamma_{\zeta,w\times h}(x,a,b))\) holds for all \(1\leq a\leq n\) and \(1\leq b\leq m\)._ Intuitively, Definition 3 means that \(F\) is robust on \(x\) against the occlusions of \(\gamma_{\zeta,w\times h}\) if, on any occluded image of \(x\) by the occlusion function \(\gamma_{\zeta,w\times h}\), \(F\) always returns the same classification result as on the original image \(x\). Depending on the coloring function \(\zeta\), the definition applies to various occlusions concerning shapes, colors, sizes, and positions. We can also extend the above definition to global occlusion robustness if \(F\) is robust on all images concerning \(\gamma_{\zeta,w\times h}\). We prove that even for the case of uniform occlusion, a special case of the multiform one, the local occlusion robustness verification problem is NP-complete on ReLU-based neural networks. We leave the details of the proof to Appendix A. ## 4 SMT-Based Occlusion Robustness Verification ### A Naive SMT Encoding Method The verification problem of FNNs' local occlusion robustness can be straightforwardly encoded into an SMT problem. In Definition 3, we assume that \(x\) is classified by \(\Phi_{F}\) to the label \(\ell_{q}\), i.e., \(\Phi_{F}(x)=\ell_{q}\), for a label \(\ell_{q}\in L\). To prove \(F\) is robust on \(x\) after \(x\) is occluded by occlusion \(\vartheta\) with size \(w\times h\), it suffices to prove that \(F\) classifies every occluded image \(x^{\prime}=\gamma_{\zeta,w\times h}(x,a,b)\) to \(\ell_{q}\) for all \(1\leq a\leq n\) and \(1\leq b\leq m\). This is equivalent to proving that the following constraints are not satisfiable: \[1\leq a\leq n,\quad 1\leq b\leq m, \tag{4}\] \[\bigwedge_{i\in\mathbb{N}_{1,n},j\in\mathbb{N}_{1,m}}\Big{(}\big{(}(a-1<i<a+w+1)\wedge(b-1<j<b+h+1)\wedge x^{\prime}_{i,j}=\gamma_{\zeta,w\times h}(x,a,b)_{i,j}\big{)}\vee\big{(}((i\geq a+w+1)\vee(i\leq a-1)\vee(j\geq b+h+1)\vee(j\leq b-1))\wedge x^{\prime}_{i,j}=x_{i,j}\big{)}\Big{)}, \tag{5}\] \[\bigvee_{l\in\mathbb{N}_{1,q-1}\cup\mathbb{N}_{q+1,r}}F(x^{\prime})_{l}\geq F(x^{\prime})_{q}. \tag{6}\] The conjuncts in Eq. 5 define that \(x^{\prime}\) is an occluded instance of \(x\), and the disjuncts in Eq. 6 indicate that, when satisfiable, there exists some label \(\ell_{l}\) which has a higher probability than \(\ell_{q}\) to be classified to. Namely, the occlusion robustness of \(F\) on \(x\) is falsified, with \(x^{\prime}\) being a witness of the non-robustness. Note that this naive encoding considers real-valued occlusion positions, since the function \(\gamma\) implicitly includes the interpolation. Although the above encoding is straightforward, solving the encoded constraints is experimentally beyond the reach of general-purpose existing SMT solvers due to the piece-wise linear ReLU activation functions in the definition of \(F\) in the constraints of Eq. 6, and the large search space \(m\times n\times(2\epsilon)^{w\times h}\) (see Experiment II in Sec. 5).
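For intuition about why a symbolic method is needed here, the following sketch (our own illustrative code; `f` stands for any classifier returning a score vector) enumerates only the integer occlusion positions with a finite sample of uniform colors. Even this severely restricted search costs \(m\times n\times|\text{colors}|\) network evaluations and can only falsify robustness, never verify it, since positions and colors range over continuous domains in Eqs. 4-6.

```python
import itertools
import numpy as np

def falsify_uniform_occlusion(f, x, w, h, colors):
    """Search for a counterexample among integer-position uniform occlusions.

    Returns an occluded image that f misclassifies, or None. Exhausting the
    loop does NOT prove robustness: the actual specification also covers
    real-valued positions and continuous colors."""
    m, n = x.shape
    label = int(np.argmax(f(x)))
    for a, b, c in itertools.product(range(n - w + 1), range(m - h + 1), colors):
        xp = x.copy()
        xp[b:b + h, a:a + w] = c                  # occlude a w-by-h patch with color c
        if int(np.argmax(f(xp))) != label:
            return xp                             # witness of non-robustness
    return None
```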
### Our Encoding Approach

**An Overview of the Approach.** To improve efficiency, we propose a novel approach that encodes occlusion perturbations into four layers of neurons and concatenates the original network to these so-called _occlusion layers_, constituting a new neural network which can be efficiently verified using state-of-the-art, SMT-based verifiers. Fig. 4 shows the overview of our approach.

Figure 4: The workflow of encoding and verifying FNN's robustness against occlusions.

Given an input image and an occlusion, we first construct a 3-hidden-layer occlusion neural network (ONN) and then concatenate it to the original FNN by connecting the ONN's output layer to the FNN's input layer. The combined network represents all possible occluded inputs and their classification results. The robustness of the constructed network can be verified using the existing SMT-based neural network verifiers. We introduce two acceleration techniques to speed up the verification further. First, we divide the occlusion space into several smaller, orthogonal spaces, and verify a finite set of sub-problems on the smaller spaces. Second, we employ the eager falsification technique [14] to sort the labels according to their probabilities of being misclassified to. The one with a larger probability is verified earlier by the backend tools. Whenever a counterexample is returned, an occluded image is found such that its classification result differs from the original one. If all sub-problems are verified and no counterexamples are found, the network is verified robust on the input image against the provided occlusion.

**Encoding Occlusions as Neural Networks.** Given a coloring function \(\zeta\), an occlusion size \(w\times h\) and an input image \(x\) of size \(m\times n\), we construct a neural network \(O:\mathbb{R}^{4+ct}\rightarrow\mathbb{R}^{m\times n}\) to encode all the possible occluded images of \(x\), where \(c=1\) if \(x\) is a grayscale image and \(c=3\) if \(x\) is an RGB image, and \(t=0\) for the uniform occlusion and \(t=w\times h\) for the multiform one. Fig. 5 shows the neural network architecture for encoding occlusions. We divide it into a fundamental part and an additional part. The former encodes the occlusion position and the uniform occlusion color. The additional part is needed only by the multiform occlusion to encode the coloring function. Without loss of generality, we assume that the input layer takes the vector \((a,w,b,h,\zeta)\), where \((a,b)\) is the top-left coordinate of the occlusion area in \(x\). The coloring function \(\zeta\) is fed into the other \(c\times t\) neurons in the input layer when the occlusion is multiform. _(1) Encoding occlusion positions._ We explain the weights and biases that are defined in the neural network to encode the occlusion position. On the connections between the input layer and the first hidden layer, the weights in matrices \(W_{1,1}\), \(W_{1,2}\) and \(W_{1,3}\) are 1, -1 and -1, respectively. Note that we hide all the edges whose weights are 0 in the figure for clarity. The biases in \(\overline{\mathsf{b}}_{1,1}\) are \((-1,-2,\ldots,-m)\) for the first \(m\) neurons on the first hidden layer. Those in \(\overline{\mathsf{b}}_{1,2}\) are \((2,3,\ldots,m+1)\). The weights in \(W_{1,4}\), \(W_{1,5}\), \(W_{1,6}\) and the biases in \(\overline{\mathsf{b}}_{1,3}\) and \(\overline{\mathsf{b}}_{1,4}\) are defined in the same way. We omit the details due to the page limitation.
For the second layer, the diagonals of weight matrices \(W_{2,1}\) to \(W_{2,4}\) are set to -1, and the rest of their entries are 0. The biases in \(\overline{\mathsf{b}}_{2,1}\) and \(\overline{\mathsf{b}}_{2,2}\) are 1. After the propagation to the second hidden layer, a pixel at position \((i,j)\) in the image \(x\) is occluded if and only if both the outputs of the \(i^{th}\) neuron in the first \(m\) neurons and the \(j^{th}\) neuron in the remaining \(n\) neurons on the second hidden layer are 1. The third hidden layer represents the occlusion status of each pixel in the original image \(x\). \(2n\) weight matrices connect the second layer and the \(n\times m\) neurons of the third layer. For example, we consider the weights in \(W_{3,i}\) and \(W_{3,n+i}\), which connect the \(i^{th}\) group of \(m\) neurons in the third layer to the second layer. The size of \(W_{3,i}\) is \(m\times m\), and the weights in its \(i^{th}\) row are 1 while the rest are 0. The size of \(W_{3,n+i}\) is \(m\times n\). The weights on its diagonal are set to 1, while the rest are set to 0. All the biases in \(\overline{\mathsf{b}}_{3,1}\) to \(\overline{\mathsf{b}}_{3,n}\) are -1. The output of the third layer indicates the occlusion status of all the pixels. If a pixel at \((i,j)\) is occluded, then the output of the \((i\times m+j)^{th}\) neuron in the third layer is 1; otherwise, it is 0. _(2) Encoding Coloring Functions._ We consider the uniform and multiform coloring functions separately for verification efficiency, although the former is a special case of the latter. We first consider the general multiform case. In the multiform case, we introduce \(2\times n\times m\) extra neurons in the third hidden layer, as shown in the bottom part of Fig. 5. These neurons could be combined with the third layer, but it is clearer to keep them separate. The weight matrix \(W_{3,\zeta}\) connects the third layer to these neurons, with the first half of its diagonal set to 1 and the second half set to -1. This helps retain the sign of the input \(\zeta\) during propagation.

Figure 5: An occlusion neural network for the occlusions on an image \(x\) with \(\zeta\) and \(w\times h\).

The weight matrix \(W_{\zeta}\) connects the input \(\zeta\) to these neurons; its diagonal entries are 1, and the biases \(\overline{\mathsf{b}}_{\zeta}\) are -1. These neurons work just like the third layer, except that they not only represent the occlusion status of pixels but also preserve the input \(\zeta\). If a pixel at \((i,j)\) is occluded and \(\zeta\) has a positive value, then the \((i\times m+j)^{th}\) output in the first half of them is \(\zeta\). The \((i\times m+j)^{th}\) output in the second half is \(\zeta\) when \(\zeta\) has a negative value. Otherwise, the output is 0. The uniform case can be encoded together with the input images, and we thus explain it in the following paragraph. _(3) Encoding Input Images._ In the fourth layer, we use \(W_{4}\) to denote the weight matrix connecting the third layer. \(W_{4}\) is used to encode the pixel values of the input image \(x\) and the coloring function \(\zeta\) of occlusions. In the uniform case, the weight \(\mathsf{w}(i,i)\) in the diagonal of \(W_{4}\) is \(\mathsf{w}(i,i)=\zeta_{i}-x_{i}\) and the biases are \(\overline{\mathsf{b}}_{4}=\mathbf{x}\), where \(\mathbf{x}\) is the flattened vector of the original input image. In the multiform case, the weight matrix \(W_{4,\zeta}\) connects the neurons in the bottom part that preserve the information of the input \(\zeta\) to the fourth layer.
The first half of \(W_{4,\zeta}\) is identical to \(W_{4}\), and the second half of \(W_{4,\zeta}\) has its diagonal set to -1. It provides the value of the coloring function \(\zeta\) with either sign for each occluded pixel. The output of the \(j^{th}\) neuron in the \(i^{th}\) group of the fourth layer is the raw pixel value plus \(\zeta\) if the pixel at \((i,j)\) is occluded; otherwise, it is the raw pixel value. **An Illustrative Example.** We show an example of constructing the occlusion network on a \(2\times 2\), single-channel image in Fig. 6.

Fig. 6: An example of encoding a one-pixel uniform occlusion as a neural network.

In this example, we assume that the input image is \(x=[0.4,0.6,0.55,0.72]\) and the occlusion applied to \(x\) has a size of \(1\times 1\), which means \(w=1\) and \(h=1\). For the uniform occlusion, the coloring function \(\zeta\) is a fixed value, taken to be 0 here; when the occlusion is at an integer position, the occluded pixels take this value while the other pixels stay unchanged. Suppose we change \(a\) to some real number, for instance, 1.5. After the same propagation, we will get an output of \((0,0.5,0,0.5)\) in the third layer, representing that the neurons in the second column are affected by the occlusion by a factor of 0.5. The fourth layer then outputs \([0.4,0.3,0.55,0.36]\), which is the corresponding occluded image \(x^{\prime}\). In the multiform case, suppose the threshold is \(\epsilon=0.1\), with all other settings kept. Then, after the same propagation to the third layer, the third layer outputs \((0,1,0,0)\), representing that the second pixel is occluded. The extra neurons then output \((0,0.1,0,0,0,0,0,0)\), where the second neuron in the first half is 0.1 and the remaining are 0. This indicates both that the second pixel in the first row is occluded and that it has an epsilon of 0.1. After propagation to the fourth layer, the occlusion network outputs \(x^{\prime}=[0.4,0.7,0.55,0.72]\) based on its \(W_{4}\) and \(\overline{\mathsf{b}}_{4}\). As expected, the second pixel is occluded and increases by 0.1, and the other pixels stay unchanged. For the case of a negative \(\epsilon\) of \(-0.1\), the extra neurons output \((0,0,0,0,0,0.1,0,0)\). Note that the second neuron in the second half is 0.1 and the remaining are 0, which helps retain the sign of \(-0.1\). The fourth layer then outputs \([0.4,0.5,0.55,0.72]\), which is the expected occluded image where the second pixel decreases by 0.1.

### The Correctness of the Encoding

Given an input image \(x\), a rectangle occlusion of size \(w\times h\), and a coloring function \(\zeta\), let \(O\) be the corresponding occlusion neural network constructed in the approach above. Let \(F\) be the FNN to verify. We concatenate \(O\) to \(F\) by connecting \(O\)'s output layer to \(F\)'s input layer. The combined network implements the composed function \(F\circ O\). The problem of verifying the occlusion robustness of \(F\) on the input image \(x\) is reduced to a regular robustness verification problem of \(F\circ O\). Theorem 4.1 (Correctness): _An FNN \(F\) is robust on the input image \(x\) with respect to a rectangle occlusion of size \(w\times h\) and a coloring function \(\zeta\) if and only if \(\Phi_{F\circ O}((a,w,b,h,\zeta))=\Phi_{F}(x)\) for all \(1\leq a\leq n\) and \(1\leq b\leq m\)._ Theorem 4.1 means that all the occluded images from \(x\) are classified by \(F\) to the same label as \(x\), which implies the correctness of our proposed encoding approach. To prove Theorem 4.1, it suffices to show that the encoded occlusion neural network represents all the possible occluded images.
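Before sketching the proof, the position-encoding mechanism can be sanity-checked numerically. The following numpy sketch mimics the ReLU interval indicators of the occlusion layers under simplifying assumptions: the layer layout is condensed relative to Fig. 5, and the uniform color is taken to be 0, so the exact intermediate outputs of Fig. 6 are not reproduced one-to-one.

```python
# A numpy sketch of the occlusion factors computed by the ONN's first
# layers: ReLU "interval indicators" that are 1 inside the occluded
# rows/columns, 0 outside, and fractional for real-valued positions.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def occlusion_factors(a, w, b, h, m, n):
    rows = np.arange(1, m + 1)
    cols = np.arange(1, n + 1)
    s_row = relu(1.0 - relu(b - rows) - relu(rows + 1 - b - h))
    s_col = relu(1.0 - relu(a - cols) - relu(cols + 1 - a - w))
    return np.outer(s_row, s_col)          # one factor s_{i,j} per pixel

def occlude_uniform(x, a, b, w, h, mu=0.0):
    s = occlusion_factors(a, w, b, h, *x.shape)
    return x - s * (x - mu)                # Eq. (2) with zeta = mu

x = np.array([[0.4, 0.6], [0.55, 0.72]])
# integer position: pixel (1,2) fully occluded, others unchanged
print(occlude_uniform(x, a=2.0, b=1.0, w=1, h=1))
# fractional position: the occlusion spreads over neighboring columns
print(occlude_uniform(x, a=1.5, b=1.0, w=1, h=1))
```

For integer \(a\) the factors are exactly 0 or 1, while for \(a=1.5\) the partially covered pixels receive a factor of 0.5, mirroring the interpolation behavior described above.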
In other words, when viewed as a function, the network outputs the same occluded image as the occlusion function for the same occlusion coordinate \((a,b)\), as formalized in the following lemma. Lemma 1: _Given an occlusion function \(\gamma_{\zeta,w\times h}:\mathbb{R}^{m\times n}\times\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}^{m\times n}\) and an input image \(x\), let \(O_{\gamma,x}:\mathbb{R}^{4+ct}\rightarrow\mathbb{R}^{m\times n}\) be the corresponding occlusion neural network. Then \(\gamma_{\zeta,w\times h}(x,a,b)=O_{\gamma,x}(a,w,b,h,\zeta)\) for all \(1\leq a\leq n\) and \(1\leq b\leq m\)._ Proof (Sketch): It suffices to prove \(\gamma_{\zeta,w\times h}(x,a,b)_{i,j}=O_{\gamma,x}(a,w,b,h,\zeta)_{i,j}\) for all \(i\in\mathbb{N}_{1,n}\) and \(j\in\mathbb{N}_{1,m}\). By Definition 2, we consider the following two cases: _Case 1: When a pixel \(p\) at position \((i,j)\) is fully occluded, we have \(\gamma_{\zeta,w\times h}(x,a,b)_{i,j}=\zeta(x,i,j)\). We need to prove that \(O_{\gamma,x}(a,w,b,h,\zeta)_{i,j}=\zeta(x,i,j)\)._ Suppose \(p\) is covered by an arbitrary uniform occlusion of size \(w_{0}\times h_{0}\) at position \((a_{0},b_{0})\). We can observe that for that pixel \(p\), \(i>a_{0}\wedge i<a_{0}+w_{0}-1\) and \(j>b_{0}\wedge j<b_{0}+h_{0}-1\) hold since \(p\) is covered by the occlusion. We show the output of \(O_{\gamma,x}(a,w,b,h,\zeta)_{i,j}\) by inspecting the \((i*n+j)^{th}\) output of the occlusion network after propagation, starting by inspecting the output of the \(i^{th}\) and \((i+m)^{th}\) neurons of the first layer. According to the network structure discussed in Sec. 4.2, we can tell that the \(i^{th}\) neuron in the first layer is \(0\) only when \(i>a_{0}\); the same property holds for the \((i+m)^{th}\) neuron when \(i<a_{0}+w_{0}-1\). Therefore, the output of the \(i^{th}\) and \((i+m)^{th}\) neurons of the first layer is \(0\), which leads the \(i^{th}\) neuron in the first part of the second layer to output the value \(1\). Through a similar process, we can get that the value of \(z_{j}^{(2)}\) in the second part of the second layer is also \(1\). The \((i\times n+j)^{th}\) neuron in the third layer is based on the \(i^{th}\) neuron and \(j^{th}\) neuron of the second layer that we just discussed. Therefore, the output of that neuron, \(z_{i\times n+j}^{(3)}\), is \(1\). For a uniform occlusion, suppose the coloring function \(\zeta\) has a fixed value \(\mu_{0}\). By propagating the output \(z_{i\times n+j}^{(3)}\) to the fourth layer, which is calculated as \(W_{4}\times z^{(3)}+\overline{\mathsf{b}}_{4}\), the \((i\times n+j)^{th}\) output of the fourth layer is \(1\times(\mu_{0}-p_{i,j})+p_{i,j}=\mu_{0}\). Likewise, for a multiform occlusion, \(\zeta\) indicates the threshold \(\epsilon_{0}\) by which a pixel can change. The \((i\times n+j)^{th}\) extra neuron outputs \(\epsilon_{0}\), and then the corresponding neuron in the fourth layer outputs \(p_{i,j}+\epsilon_{0}\). This output of \(O_{\gamma,x}(a,w,b,h,\zeta)_{i,j}\) is identical to \(\gamma_{\zeta,w\times h}(x,a,b)_{i,j}\), the expected pixel value at position \((i,j)\), which also indicates that the color is correctly encoded. _Case 2: When a pixel \(p\) at position \((i,j)\) is not occluded, we have \(\gamma_{\zeta,w\times h}(x,a,b)_{i,j}=x_{i,j}\). Then, we need to prove that \(O_{\gamma,x}(a,w,b,h,\zeta)_{i,j}=x_{i,j}\)._ In this case, we can observe that \(i<a_{0}\lor i\geq a_{0}+w_{0}\) and \(j<b_{0}\lor j\geq b_{0}+h_{0}\) hold for pixel \(p\).
Then we can tell that the corresponding neuron in the third layer outputs \(0\), and that the output of the \((i*n+j)^{th}\) neuron in the fourth layer is the original pixel value of \(p\), following a process similar to the one discussed in Case 1. For an occlusion at a real number position, a few more cases need to be discussed, but the proof follows a very similar sketch to that of the normal occlusion at an integer position. We leverage the equality \(a\times b=\exp(\log(a)+\log(b))\) and add it to the propagation between the third layer and those extra neurons only when the occlusion is at real number positions in the multiform case. We use \(\mathrm{ReLU}(a+b-1)\) as an alternative to logarithms and exponents in the implementation since Marabou does not support such operations. A complete proof of Lemma 1 is deferred to Appendix B. Theorem 4.1 can be directly derived from Lemma 1 and Definition 3 by substituting \(O_{\gamma,x}(a,w,b,h,\zeta)\) for \(\gamma_{\zeta,w\times h}(x,a,b)\) in the definition.

### Verification Acceleration Techniques

Existing SMT-based neural network verification tools can directly verify the composed neural network. The number of ReLU activation functions in the network is the primary factor determining the verification time cost by the backend tools. In the occlusion part, the number of ReLU nodes is independent of the scale of the original networks to be verified. Therefore, our approach's scalability relies only on the underlying tools. To further improve the verification efficiency, we integrate two algorithmic acceleration techniques by dividing the verification problem into small independent sub-problems that can be solved separately, as sketched below. **Occlusion Space Splitting.** We observed that verifying the composed neural network with a large input space can significantly degrade the efficiency of backend verifiers. Even for small FNNs with only tens of ReLUs, the verifiers may run out of time due to the large occlusion space to search. For instance, the complexity of Reluplex [19] can be derived from the underlying Simplex method [32]. It has a complexity of \(\Omega(v\times m\times n)\), where \(m\) and \(n\) represent the number of constraints and variables, and \(v\) represents the number of pivots operated in the Simplex method. In the worst case, \(v\) can grow exponentially. Reducing the search space reduces the number of pivot operations, therefore significantly improving verification efficiency. Based on the above observation, we can divide \([1,m]\) (_resp._\([1,n]\)) into \(k_{m}\in\mathbb{Z}^{+}\) (_resp._\(k_{n}\in\mathbb{Z}^{+}\)) intervals \([m_{0},m_{1}],\ldots,[m_{k_{m}-1},m_{k_{m}}]\) (_resp._\([n_{0},n_{1}],\ldots,[n_{k_{n}-1},n_{k_{n}}]\)) and verify the problem on the Cartesian product of the two sets of intervals. \[\begin{split}&\forall x^{\prime}\in\mathbb{X}.\Phi(x^{\prime})=\Phi(x)\equiv\bigwedge_{(i,j)=(0,0)}^{(k_{m}-1,k_{n}-1)}\forall x^{\prime}\in\mathbb{X}_{(i,j)}.\Phi(x^{\prime})=\Phi(x),\text{ where}\\ &\mathbb{X}=\bigcup_{(i,j)=(0,0)}^{(k_{m}-1,k_{n}-1)}\mathbb{X}_{(i,j)}=\bigcup_{(i,j)=(0,0)}^{(k_{m}-1,k_{n}-1)}\{\gamma_{\zeta,w\times h}(x,a,b)\,|\,m_{i}\leq a\leq m_{i+1},n_{j}\leq b\leq n_{j+1}\}.\end{split} \tag{7}\] In this way, we split the occlusion space into \(k_{m}\times k_{n}\) sub-spaces. It is equivalent to proving \(\forall x^{\prime}\in\mathbb{X}_{(i,j)}.\Phi(x^{\prime})=\Phi(x)\) for all \(\mathbb{X}_{(i,j)}\) with \(0\leq i<k_{m}\) and \(0\leq j<k_{n}\), without losing soundness or completeness.
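The two divide-and-conquer steps can be sketched as follows: the splitting of Eq. (7), and the label sorting used by the eager falsification described next. Function names are illustrative, not OccRob's actual API.

```python
# Sketches of the two acceleration techniques (names are illustrative).
import numpy as np
from itertools import product

def split_occlusion_space(m, n, k_m, k_n):
    """Divide [1, m] x [1, n] into k_m * k_n orthogonal sub-spaces
    (Eq. (7)); each tuple is one independent verification query."""
    rows = np.linspace(1, m, k_m + 1)
    cols = np.linspace(1, n, k_n + 1)
    return [((rows[i], rows[i + 1]), (cols[j], cols[j + 1]))
            for i, j in product(range(k_m), range(k_n))]

def sort_labels(probabilities, true_label):
    """Eager falsification: try the most probable misclassification
    targets first, so non-robust cases can terminate early."""
    order = np.argsort(-np.asarray(probabilities))
    return [int(l) for l in order if l != true_label]

print(len(split_occlusion_space(28, 28, 4, 4)))           # 16 sub-queries
print(sort_labels([0.05, 0.7, 0.15, 0.1], true_label=1))  # [2, 3, 0]
```

Because the sub-queries are independent, they can be dispatched to separate backend-verifier processes, which is exactly what enables the parallelization mentioned below.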
We call each verification instance a _query_, which can be solved more efficiently by backend verifiers than the one on the whole occlusion space. Furthermore, another advantage of occlusion space splitting is that the divided queries can be solved in parallel by leveraging multi-threaded computing. **Eager Falsification by Label Sorting.** Another _Divide & Conquer_ approach for acceleration is to divide the verification problem into independent sub-problems by the classification labels in \(L\), as defined below: \[\forall x^{\prime}\in\mathbb{X}.\Phi(x^{\prime})=\Phi(x)\equiv\forall x^{\prime}\in\mathbb{X}.\bigwedge_{\ell^{\prime}\in L}\Phi(x)=\ell^{\prime}\vee\Phi(x^{\prime})\neq\ell^{\prime}. \tag{8}\] The dual problem of disproving the robustness can be solved by finding some label \(\ell^{\prime}\) such that \(\Phi(x)\neq\ell^{\prime}\wedge\Phi(x^{\prime})=\ell^{\prime}\). We can first solve those sub-problems that have higher probabilities of being non-robust. Once a sub-problem is proved non-robust, the verification terminates, with no need to solve the remainder. Such an approach is called _eager falsification_ [14]. Based on this methodology, we sort the sub-problems in descending order according to the probabilities at which the original image is classified to the corresponding labels by the neural network. A higher probability implies that the image is more likely to be classified to the corresponding label. Heuristically, there is then a higher probability of finding an occlusion such that the occluded image is misclassified to that label. We feed the queries to the backend verifiers until all are verified or a non-robust case is reported. Our experimental results will show that these acceleration techniques can achieve up to 8 and 24 times speedup in the robust and non-robust cases, respectively.

## 5 Implementation and Evaluation

We implemented our approach in a Python tool called OccRob, using the PyTorch framework. As the backend tool, we chose Marabou [20], a state-of-the-art SMT-based DNN verifier. We evaluated our proposed approach extensively on a suite of benchmark datasets, including MNIST [24] and GTSRB [15]. The size of the networks trained on the datasets for verification is measured by the number of ReLUs, ranging from 70 to 1300. All the experiments are conducted on a workstation equipped with a 32-core AMD Ryzen Threadripper CPU @ 3.7GHz and 128 GB RAM, running Ubuntu 18.04. We set a timeout threshold of 60 seconds for a single verification task. All code and experimental data, including the models and verification scripts, can be accessed at [https://github.com/MakiseGuo/OccRob](https://github.com/MakiseGuo/OccRob). We evaluate our proposed method concerning efficiency and scalability in the occlusion robustness verification of ReLU-based FNNs. Our goals are threefold: 1. To demonstrate the effectiveness of the proposed approach for the robustness verification against various types of occlusion perturbations. 2. To evaluate the efficiency improvement of the proposed approach, compared with the naive SMT-based method. 3. To demonstrate the effectiveness of the acceleration techniques in efficiency improvement. **Experiment I: Effectiveness.** We first evaluate the effectiveness of OccRob in robustness verification against various types of occlusions of different sizes and color ranges. Table 1 shows the verification results and time costs against multiform occlusions on two medium FNNs trained on MNIST and GTSRB. We consider two occlusion sizes, \(2\times 2\) and \(5\times 5\), respectively.
The occluding color range is from 0.05 to 0.40. In each verification task, we selected the first 30 images from each of the two datasets and verified the network's robustness around them under the corresponding occlusion settings. As expected, larger occlusion sizes and occluding color ranges imply more non-robust cases. One can see that OccRob can almost always verify or falsify each input image, except for a few time-outs. The robust cases cost more time than the non-robust cases, but all can be finished in a few minutes. Note that the time overhead for building occlusion neural networks is almost negligible compared with the verification time. The effectiveness against uniform occlusions is shown in the following experiment.

\begin{table} \begin{tabular}{|c|c|r|r|r|r|r|r|r|r|r|r|} \hline & & \multicolumn{5}{c|}{Medium FNN (600 ReLUs) on MNIST} & \multicolumn{5}{c|}{Medium FNN (343 ReLUs) on GTSRB} \\ \hline Size & \(\epsilon\) & - / + & \(T_{+}\) & \(T_{-}\) & \(T_{\text{build}}\) & TO(\%) & - / + & \(T_{+}\) & \(T_{-}\) & \(T_{\text{build}}\) & TO(\%) \\ \hline \multirow{5}{*}{\(2\times 2\)} & 0.05 & **2** / 28 & 120.01 & 11.98 & 0.068 & 0.00 & **8** / 13 & 103.64 & 24.18 & 0.089 & 0.00 \\ & 0.10 & **3** / 27 & 121.37 & 19.18 & 0.067 & 0.00 & **8** / 13 & 108.62 & 22.57 & 0.088 & 0.00 \\ & 0.20 & **4** / 26 & 122.12 & 39.57 & 0.067 & 0.00 & **10** / 11 & 113.7 & 23.17 & 0.084 & 0.00 \\ & 0.30 & **4** / 26 & 122.74 & 39.85 & 0.068 & 0.00 & **11** / 10 & 117.97 & 26.41 & 0.089 & 0.00 \\ & 0.40 & **6** / 24 & 126.66 & 49.6 & 0.074 & 0.00 & **14** / 7 & 115.49 & 31.53 & 0.096 & 0.14 \\ \hline \multirow{5}{*}{\(5\times 5\)} & 0.05 & **5** / 25 & 123.45 & 49.04 & 0.065 & 0.00 & **9** / 12 & 123.99 & 26.02 & 0.101 & 0.00 \\ & 0.10 & **6** / 24 & 124.13 & 44.09 & 0.073 & 0.00 & **12** / 9 & 127.65 & 26.96 & 0.01 & 0.00 \\ \cline{1-1} & 0.20 & **10** / 20 & 179.89 & 52.51 & 0.073 & 3.26 & **16** / 5 & 126.98 & 27.22 & 0.102 & 0.00 \\ \cline{1-1} & 0.30 & **14** / 16 & 284.67 & 65.98 & 0.076 & 5.45 & **18** / 3 & 146.68 & 29.11 & 0.100 & 0.04 \\ \cline{1-1} & 0.40 & **22** / 8 & 339.78 & 97.28 & 0.074 & 7.33 & **19** / 2 & 169.17 & 26.52 & 0.103 & 0.09 \\ \hline \end{tabular} * - / +: the numbers of non-robust and robust cases; \(T_{+}\) (_resp._ \(T_{-}\)): average verification time in robust (_resp._ non-robust) cases; \(T_{\text{build}}\): the building time of occlusion neural networks; TO (\%): the percentage of timed-out cases among all the queries. \end{table} Table 1: Occlusion verification results on two medium FNNs trained on MNIST and GTSRB with occlusion sizes \(2\times 2\) and \(5\times 5\) and occlusion radius \(\epsilon\).

Fig. 7 shows several occlusive adversarial examples that are generated by OccRob under different occlusion settings. These occlusions do not alter the semantics of the original images and should be classified to the same results as the non-occluded ones; however, they are misclassified to other labels. **Experiment II: Efficiency improvement over the naive encoding method.** We compare the efficiency of OccRob with that of a naive SMT encoding approach on verifying uniform occlusions since the naive encoding approach cannot handle verification against multiform occlusions. We apply the same acceleration techniques, such as parallelization and a variant of input space splitting, to the naive approach, which otherwise times out for almost all verification tasks even on the smallest model.
Table 2 shows the average verification time on six FNNs of different sizes against uniform occlusions. We can observe that OccRob affords a significant improvement in efficiency, up to 30 times over the naive approach. It almost always finishes before the preset time threshold, while the naive method fails to verify the two large networks under the same time threshold. The timeout proportion of the two medium networks is over 70%. While the small network on MNIST has only an 8% timeout proportion with the naive method, OccRob barely times out on any network (see Appendix C.2). **Experiment III: Effectiveness of the integrated acceleration techniques.** We finally evaluate the effectiveness of the two acceleration techniques integrated into the tool. We evaluate each technique separately by excluding it from OccRob and comparing the verification time of OccRob with that of the corresponding excluded versions. Fig. 8 shows the experimental results of verifying the medium FNN trained on GTSRB against multiform occlusions by the tool variants. Fig. 8 (a) shows that label sorting can improve efficiency in both robust and non-robust cases. In particular, the improvement is more significant in the non-robust case, with up to 5 times speedup in the experiment. That is because solving each query is faster than solving all simultaneously, and, further, OccRob immediately stops dispatching queries once a counterexample is found in the non-robust case.

Figure 7: Occlusive adversarial examples automatically generated for non-robust images.

Fig. 8 (b) shows that occlusion space splitting can also significantly improve the efficiency, with up to 8 and 24 times speedups in the robust and non-robust cases, respectively. In addition, Fig. 8 (b) also shows a significant reduction in the number of time-outs.

## 6 Related Work

Robustness verification of neural networks has been extensively studied recently, aiming at devising efficient methods for verifying neural networks' robustness against various types of perturbations and adversarial attacks. We classify those methods into two categories according to the type of perturbations, which can be semantic or non-semantic. A semantic perturbation has an interpretable meaning, such as occlusions and geometric transformations like rotation, while a non-semantic perturbation perturbs inputs with noise carrying no particular meaning.
\begin{table} \begin{tabular}{|c|c c|c c|c c|c c|c c|c c|} \hline & \multicolumn{6}{c|}{MNIST} & \multicolumn{6}{c|}{GTSRB} \\ \hline FNNs & \multicolumn{2}{c|}{Small FNN} & \multicolumn{2}{c|}{Medium FNN} & \multicolumn{2}{c|}{Large FNN} & \multicolumn{2}{c|}{Small FNN} & \multicolumn{2}{c|}{Medium FNN} & \multicolumn{2}{c|}{Large FNN} \\ \hline Size & OR & NAI & OR & NAI & OR & NAI & OR & NAI & OR & NAI & OR & NAI \\ \hline \(1\times 1\) & 46.44 & 63.12 & 110.18 & 759.93 & 206.50 & TO & 29.76 & 472.23 & 69.28 & 989.08 & 173.62 & TO \\ \(2\times 2\) & 49.62 & 165.53 & 98.60 & 832.98 & 199.17 & TO & 21.04 & 340.89 & 42.16 & 680.81 & 103.42 & TO \\ \(3\times 3\) & 51.23 & 298.59 & 111.14 & 863.74 & 205.67 & TO & 11.93 & 169.35 & 32.00 & 499.31 & 81.17 & TO \\ \(4\times 4\) & 44.78 & 256.22 & 115.99 & 886.73 & 225.02 & TO & 8.90 & 141.85 & 31.24 & 419.62 & 106.41 & TO \\ \(5\times 5\) & 48.96 & 270.23 & 113.01 & 803.40 & 264.79 & TO & 6.11 & 190.81 & 27.97 & 418.56 & 118.99 & TO \\ \(6\times 6\) & 47.81 & 318.28 & 127.98 & 642.01 & 288.18 & TO & 7.49 & 213.35 & 21.70 & 282.04 & 60.02 & TO \\ \(7\times 7\) & 34.99 & 357.78 & 124.47 & 589.41 & 222.65 & TO & 6.02 & 153.81 & 31.96 & 404.18 & 62.60 & TO \\ \(8\times 8\) & 36.05 & 324.34 & 129.27 & 469.24 & 215.53 & TO & 5.99 & 123.07 & 28.44 & 250.97 & 54.37 & TO \\ \(9\times 9\) & 34.58 & 224.01 & 141.54 & 375.97 & 219.61 & TO & 6.42 & 102.39 & 31.30 & 160.84 & 59.87 & TO \\ \(10\times 10\) & 28.98 & 178.44 & 78.89 & 398.01 & 182.36 & TO & 6.61 & 127.20 & 28.59 & 153.96 & 40.69 & TO \\ \hline \end{tabular} \end{table} Table 2: Performance comparison between OccRob (OR) and the naive (NAI) methods on MNIST and GTSRB under different occlusion sizes.

Figure 8: Efficiency evaluation results of the two acceleration techniques.

Non-semantic perturbations are usually represented as \(L_{p}\) norms, which define the ranges in which an input can be altered. Some robustness verification approaches for non-semantic perturbations are both sound and complete, leveraging SMT [19, 1] and MILP (mixed integer linear programming) [37] techniques, while others sacrifice completeness for better scalability via over-approximation [29, 2, 7], abstract interpretation [34, 10, 5], interval analysis by symbolic propagation [44, 43, 26], etc. In contrast to the large number of works on non-semantic robustness verification, there are only a few studies on the semantic case. Because semantic perturbations are beyond the range of \(L_{p}\) norms [9], those abstraction-based approaches cannot be directly applied to verifying semantic perturbations. Mohapatra et al. [30] proposed to verify neural networks against semantic perturbations by encoding them into neural networks. Their encoding approach is general to a family of semantic perturbations such as brightness and contrast changes and rotations. Their approach for verifying occlusions is restricted to uniform occlusions at integer locations. Sallami et al. [31] proposed an interval-based method to verify the robustness against the occlusion perturbation problem under the same restriction. Singh et al. [36] proposed a new abstract domain to encode both non-semantic and semantic perturbations such as rotations. Chiang et al. [4] called occlusions _adversarial patches_ and proposed a certifiable defense by extending interval bound propagation (IBP) [12]. Compared with these existing verification approaches for semantic perturbations, our SMT-based approach is both sound and complete, and it meanwhile supports a larger class of occlusion perturbations.
## 7 Conclusion and Future Work

We introduced an SMT-based approach for verifying the robustness of deep neural networks against various types of occlusions. An efficient encoding method was proposed to represent occlusions using neural networks, by which we reduced the occlusion robustness verification problem to a regular robustness verification problem of neural networks and leveraged _off-the-shelf_ SMT-based verifiers for the verification. We implemented the resulting prototype, OccRob, and extensively evaluated its effectiveness and efficiency on a series of neural networks trained on public benchmarks, including MNIST and GTSRB. Moreover, as the scalability of DNN verification engines continues to improve, our approach, which uses them as blackbox backends, will also become more scalable. As our occlusion encoding approach is independent of the target neural networks, we believe it can be easily extended to other complex network structures, such as convolutional and recurrent ones, since the extension depends only on the backend verifiers. It would also be interesting to investigate how the generated adversarial examples could be used for neural network repairing [42, 17] to train more robust networks.

## Acknowledgments

This work has been supported by the National Key Research Program (2020AAA0107800), the NSFC-ISF Joint Program (62161146001, 3420/21), NSFC projects (61872146, 61872144), the Shanghai Science and Technology Commission (20DZ1100300), the Shanghai Trusted Industry Internet Software Collaborative Innovation Center, and the "Digital Silk Road" Shanghai International Joint Lab of Trustworthy Intelligent Software (Grant No. 22510750100).
2303.02859
Bayesian inference with finitely wide neural networks
Analytic inference, e.g. the predictive distribution being in closed form, may be an appealing benefit for machine learning practitioners when they treat wide neural networks as Gaussian processes in the Bayesian setting. The realistic widths, however, are finite and cause a weak deviation from the Gaussianity under which partial marginalization of random variables in a model is straightforward. On the basis of the multivariate Edgeworth expansion, we propose a non-Gaussian distribution in differential form to model a finite set of outputs from a random neural network, and derive the corresponding marginal and conditional properties. Thus, we are able to derive the non-Gaussian posterior distribution in a Bayesian regression task. In addition, in bottlenecked deep neural networks, a weight-space representation of the deep Gaussian process, the non-Gaussianity is investigated through the marginal kernel.
Chi-Ken Lu
2023-03-06T03:25:30Z
http://arxiv.org/abs/2303.02859v2
# Bayesian inference with finitely wide neural networks

###### Abstract

Analytic inference, e.g. the predictive distribution being in closed form, may be an appealing benefit for machine learning practitioners when they treat wide neural networks as Gaussian processes in the Bayesian setting. The realistic widths, however, are finite and cause a weak deviation from the Gaussianity under which partial marginalization of random variables in a model is straightforward. On the basis of the multivariate Edgeworth expansion, we propose a non-Gaussian distribution in differential form to model a finite set of outputs from a random neural network, and derive the corresponding marginal and conditional properties. Thus, we are able to derive the non-Gaussian posterior distribution in a Bayesian regression task. In addition, in bottlenecked deep neural networks, a weight-space representation of the deep Gaussian process, the non-Gaussianity is investigated through the marginal kernel and the accompanying small parameters.

## I Introduction

Neal in his seminal work [1] pointed out that a shallow but infinitely wide random neural network is a Gaussian process (GP) [2] in a statistical sense. Subsequent work [3; 4] interpreting neural networks with specific nonlinear activation units as kernel machines was also inspired by this idea. More recent reports [5; 6] further claimed the equivalence between GPs and deep neural networks when each hidden layer in the latter is of infinite width. Consequently, machine learning practitioners can perform Bayesian inference by treating a deep and wide neural network as a GP, and exploit the analytic marginal and conditional properties of the multivariate Gaussian distribution. Otherwise, one needs to employ gradient-based learning and bootstrap sampling to obtain a predictive distribution [7]. In reality, all neural networks have finite width. Therefore, the deviation from Gaussianity requires a further quantitative account, as practitioners may wonder about the corrections to the predictive mean and variance in, for example, a regression task. Yaida [8] and colleagues [9] proposed a perturbative approach for computing the multivariate cumulants by direct application of Wick's contraction theorem. Moreover, the fourth cumulants are shown to be nonzero, scaled by the sum of reciprocal widths, \(1/N_{1}+1/N_{2}+\cdots+1/N_{L-1}\), signaling the non-Gaussian aspect of the random processes representing finite-width deep neural networks with \(L\) hidden layers [8]. A quartic energy functional for a fixed set of network outputs is formulated field-theoretically, with which the corrections to the posterior mean and variance due to the _weak_ non-Gaussianity are obtained [8]. While Yaida's approach is appealing from a field theory perspective (see also [10; 11]), the loss of the elegant marginal property due to the presence of the fourth-power term in the exponent is critical for analytic inference with finite-width networks. An alternative is to modify the multivariate Gaussian distribution so that the new distribution can match the higher cumulants associated with the networks. In this paper, we shall use the multivariate Edgeworth series expansion [12] to construct the non-Gaussian distribution for the network's outputs. In particular, we find that the differential representation of the Edgeworth expansion greatly facilitates the derivation of the marginal and conditional properties of the non-Gaussian distribution. Three main results are reported in this paper.
First, the marginal property is intact, and the corrections to the conditional mean and variance are derived. Second, with observed data, the non-Gaussian posterior distribution associated with an unobserved output is derived. Third, we derive the marginal covariance [13] of a bottlenecked deep neural network [14], which represents a deep Gaussian process [15] in weight space. It is worthwhile to note that some of the hidden layers in the bottlenecked network are narrow, and _strong_ non-Gaussianity may be induced. The paper is organized as follows. In Sec. II we begin by reviewing the computational structure of a shallow cosine network, its equivalence to a GP with a Gaussian kernel, and the emergence of non-Gaussianity due to finite width. The shallow network with random parameters in the Bayesian setting induces a non-Gaussian prior over functions. In Sec. III the non-Gaussian prior is expressed through a differential representation of the Edgeworth expansion around the multivariate Gaussian distribution, and its marginal and conditional properties are established. Application of the non-Gaussian prior in a Bayesian regression task is discussed in Sec. IV. Finally, Sec. V is devoted to investigating the combined effect of nonlinear activation, depth and finite width on the non-Gaussian prior for a deep bottlenecked cosine network, which is followed by a discussion in Sec. VI.

## II Wide feed forward neural network

Let us start with the discussion of a random shallow feed forward network with cosine activation. In the infinite width limit, the network is statistically equivalent to a GP with a Gaussian kernel [16]. The goal is to see the emergence of a nonzero fourth cumulant when the width is finite. Consider a single-output network with \(N\) activation units; the real function value \(z_{\alpha}\in\mathbb{R}\), with a Greek subscript, is an indexed random variable associated with its input \(\mathbf{x}_{\alpha}\in\mathbb{R}^{d}\). Explicitly, the network output and input obey the following relation, \[z_{\alpha}=\sqrt{\frac{2}{N}}\sum_{i=1}^{N}w_{i}\cos(\frac{\Omega_{i}\cdot\mathbf{x}_{\alpha}}{\sqrt{d}}+\phi_{i})\,, \tag{1}\] where the weight variables \(w\)'s are sampled from the Gaussian distribution \(\mathcal{N}(0,1)\), the scaling variables \(\Omega\)'s are sampled from \(\mathcal{N}(0,I_{d})\), and the phase variables \(\phi\)'s are sampled from the uniform distribution \(\mathcal{U}([0,2\pi])\). The normalization factors \(\sqrt{2/N}\) and \(1/\sqrt{d}\) above follow the parameterization used in [17]. Because \(w\) is zero-mean, the first relevant statistical moment is the covariance, \(k(\mathbf{x}_{\alpha},\mathbf{x}_{\beta})\), namely the expectation of the product of two function values at two inputs, \[\begin{split} k(\mathbf{x}_{\alpha},\mathbf{x}_{\beta})&:=\mathbb{E}[z_{\alpha}z_{\beta}]\\ &=\frac{1}{N}\sum_{i}\mathbb{E}\bigg{\{}\cos[\frac{\Omega_{i}\cdot(\mathbf{x}_{\alpha}-\mathbf{x}_{\beta})}{\sqrt{d}}]+\cos[\frac{\Omega_{i}\cdot(\mathbf{x}_{\alpha}+\mathbf{x}_{\beta})}{\sqrt{d}}+2\phi_{i}]\bigg{\}}\\ &=e^{-\frac{1}{2d}|\mathbf{x}_{\alpha}-\mathbf{x}_{\beta}|^{2}}\,.\end{split} \tag{2}\] Above, the independence of the random variables \(w\) is used to arrive at the second equality. The third equality is due to the fact that the Fourier transform of a Gaussian is Gaussian, and that the average of a cosine with a uniformly random phase vanishes.
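The covariance in Eq. (2) can be checked empirically by sampling many networks of the form of Eq. (1); a minimal Monte Carlo sketch follows, with the input dimension, width, and sample count being illustrative choices.

```python
# Monte Carlo check that the empirical covariance of the random cosine
# network (Eq. (1)) matches the Gaussian kernel of Eq. (2).
import numpy as np

rng = np.random.default_rng(0)
d, N, S = 4, 256, 20000          # input dim, width, number of sampled nets

x_a = rng.standard_normal(d)
x_b = rng.standard_normal(d)

def z(x, w, Omega, phi):
    # z = sqrt(2/N) * sum_i w_i cos(Omega_i . x / sqrt(d) + phi_i)
    return np.sqrt(2.0 / N) * np.sum(w * np.cos(Omega @ x / np.sqrt(d) + phi))

za = np.empty(S)
zb = np.empty(S)
for s in range(S):
    w = rng.standard_normal(N)
    Omega = rng.standard_normal((N, d))
    phi = rng.uniform(0.0, 2.0 * np.pi, N)
    za[s], zb[s] = z(x_a, w, Omega, phi), z(x_b, w, Omega, phi)

print("empirical E[z_a z_b]:", np.mean(za * zb))
print("kernel k(x_a, x_b):  ", np.exp(-np.sum((x_a - x_b) ** 2) / (2 * d)))
```

The two printed values agree up to Monte Carlo error, independently of the width, since Eq. (2) holds exactly for any \(N\).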
The fourth moment can be computed in a similar manner, but one should notice that the product of two cosine terms containing the random phase can generate a nonzero contribution. Here, we focus on the fourth cumulant tensor as the signature of non-Gaussianity, \[\begin{split} V_{\alpha\beta\gamma\delta}&:=\mathbb{E}[z_{\alpha}z_{\beta}z_{\gamma}z_{\delta}]-K_{\alpha\beta}K_{\gamma\delta}-K_{\alpha\gamma}K_{\beta\delta}-K_{\alpha\delta}K_{\beta\gamma}\\ &=\frac{1}{N^{2}}\sum_{ij}\mathbb{E}\big{\{}\cos[\frac{\Omega_{i}\cdot(\mathbf{x}_{\alpha}+\mathbf{x}_{\beta})}{\sqrt{d}}+2\phi_{i}]\cos[\frac{\Omega_{j}\cdot(\mathbf{x}_{\gamma}+\mathbf{x}_{\delta})}{\sqrt{d}}+2\phi_{j}]\big{\}}+\text{sym. perm.}\\ &=\frac{1}{2N}\bigg{(}e^{-\frac{1}{2d}|\mathbf{x}_{\alpha}+\mathbf{x}_{\beta}-\mathbf{x}_{\gamma}-\mathbf{x}_{\delta}|^{2}}+e^{-\frac{1}{2d}|\mathbf{x}_{\alpha}+\mathbf{x}_{\gamma}-\mathbf{x}_{\beta}-\mathbf{x}_{\delta}|^{2}}+e^{-\frac{1}{2d}|\mathbf{x}_{\alpha}+\mathbf{x}_{\delta}-\mathbf{x}_{\beta}-\mathbf{x}_{\gamma}|^{2}}\bigg{)}\,.\end{split} \tag{3}\] Hence, the fourth cumulant tensor receives a contribution proportional to the reciprocal width, \(1/N\), and it is symmetric with respect to permutation of indices.

## III Multivariate Edgeworth expansion and non-Gaussian prior

The fourth cumulant tensor in Eq. (3) signifies that the distribution over the network outputs is non-Gaussian. In other words, the random parameters \(w\)'s, \(\Omega\)'s, and \(\phi\)'s from a prior distribution induce a non-Gaussian prior function distribution due to the finite width. The first interesting consequence of the nonzero fourth cumulant is that the prior distribution over a single output, say \(z_{\alpha}\), receives a correction term deviating from the Gaussian [18], \[q(z_{\alpha})=\mathcal{N}(z_{\alpha}|0,\sigma_{0}^{2})\bigg{[}1+\frac{V_{\alpha\alpha\alpha\alpha}}{24}(3-6\frac{z_{\alpha}^{2}}{\sigma_{0}^{2}}+\frac{z_{\alpha}^{4}}{\sigma_{0}^{4}})\bigg{]}\,. \tag{4}\] The variance is \(\sigma_{0}^{2}:=k(\mathbf{x}_{\alpha},\mathbf{x}_{\alpha})\), and the new distribution \(q\) has matched moments, i.e. \(\mathbb{E}_{q}[1]=1\), \(\mathbb{E}_{q}[z_{\alpha}^{2}]=\sigma_{0}^{2}\), and \(\mathbb{E}_{q}[z_{\alpha}^{4}]=3\sigma_{0}^{4}+V_{\alpha\alpha\alpha\alpha}\). Although the work [18] derived the above non-Gaussian distribution from a renormalization group perspective, such an expansion with Hermite polynomials around the Gaussian distribution has been known as the Edgeworth expansion in the statistics literature [19; 20]. When the width \(N\rightarrow\infty\), the fourth cumulant \(V\) vanishes, and for notational convenience we denote the limiting distribution by \(q_{\infty}\), which is Gaussian.

### Joint distribution

As we are mainly interested in the prior distribution over a finite set of function values \(\{z_{1},z_{2},\cdots\}\), the objective here is to construct the non-Gaussian joint distribution as a prior in Bayesian learning. The work [12] suggested the construction of the multivariate Edgeworth expansion by replacing the univariate Gaussian \(\mathcal{N}(z|0,\sigma_{0}^{2})\) with the multivariate one \(\mathcal{N}(\mathbf{z}|0,K)\), and the Hermite polynomials with contracted tensor terms containing the inverse covariance matrix \(K^{-1}\) and the vector of function values \(\mathbf{z}\). Please see Appendix A for the expression.
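Before moving to the multivariate construction, the univariate expansion in Eq. (4) can be sanity-checked by numerical quadrature; the sketch below assumes \(\sigma_{0}=1\) and an illustrative width.

```python
# Numerical check of the univariate Edgeworth density, Eq. (4): it should
# integrate to 1, keep the variance, and carry a fourth cumulant equal
# to V. sigma0 = 1 and N = 64 are illustrative choices.
import numpy as np

N = 64
V = 3.0 / (2 * N)                      # V_aaaa of the cosine net, Eq. (3)

z = np.linspace(-10, 10, 200001)
dz = z[1] - z[0]
h4 = z**4 - 6 * z**2 + 3               # probabilist's Hermite H_4
gauss = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
q = gauss * (1 + V / 24 * h4)

print((q * dz).sum())                  # ~ 1  (normalization)
print((z**2 * q * dz).sum())           # ~ 1  (= sigma0^2)
print((z**4 * q * dz).sum() - 3.0)     # ~ V  (excess fourth moment)
```

The cancellations behind these checks are exactly the orthogonality relations of the Hermite polynomials against the Gaussian weight.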
An apparent difficulty with such a representation is the demonstration of consistency after marginalization, i.e., \(\int dz_{1}q(z_{1},z_{2},z_{3},\cdots)=q(z_{2},z_{3},\cdots)\), which is of critical importance for deriving the conditional distribution and subsequent Bayesian learning. Inspired by the fact that the Hermite polynomials are obtained by taking derivatives of a Gaussian, we can rewrite the multivariate Edgeworth expansion in the following differential form, \[q(\mathbf{z})=\big{(}1+\frac{V_{ijml}}{24}\partial_{z_{i}}\partial_{z_{j}}\partial_{z_{m}}\partial_{z_{l}}\big{)}\mathcal{N}(\mathbf{z}|0,K) \tag{5}\] where summation over repeated indices is employed. An advantage of such a representation is that the adjoint property of the derivative, \(\partial_{z}^{\dagger}=-\partial_{z}\), can be exploited in the integration of functions which vanish at infinity. Hence, it is easy to see that the distribution \(q\) is normalized, and the second and fourth moments match, for instance, \[\begin{split}\mathbb{E}_{q}[z_{1}z_{2}]&=K_{12}\,,\\ \mathbb{E}_{q}[z_{1}z_{2}z_{3}z_{4}]&=K_{12}K_{34}+K_{13}K_{24}+K_{14}K_{23}+V_{1234}\;.\end{split} \tag{6}\] The above non-Gaussian prior can be viewed as a perturbative extension of the multivariate Gaussian. In the setting of Bayesian regression, the marginal and conditional properties of the Gaussian are appealing when conjugate likelihood functions are employed. In the following, we shall examine the effects of the perturbation in differential form on these properties.

### Marginal consistency and conditional statistics

We first show that the marginal consistency is intact with the Edgeworth expansion in differential form. Namely, without loss of generality, it suffices to show that marginalization over \(z_{1}\) of the joint distribution \(q(z_{1},z_{2},z_{3},\cdots)\), \[\begin{split}\int dz_{1}q(z_{1},\tilde{\mathbf{z}})&=\mathcal{N}(\tilde{\mathbf{z}}|0,\tilde{K})+v_{\tilde{i}\tilde{j}\tilde{m}\tilde{l}}\partial_{z_{\tilde{i}}}\partial_{z_{\tilde{j}}}\partial_{z_{\tilde{m}}}\partial_{z_{\tilde{l}}}\int dz_{1}\mathcal{N}(z_{1},\tilde{\mathbf{z}}|0,K)\\ &=\big{(}1+v_{\tilde{i}\tilde{j}\tilde{m}\tilde{l}}\partial_{z_{\tilde{i}}}\partial_{z_{\tilde{j}}}\partial_{z_{\tilde{m}}}\partial_{z_{\tilde{l}}}\big{)}\mathcal{N}(\tilde{\mathbf{z}}|0,\tilde{K})\\ &=q(\tilde{\mathbf{z}})\;,\end{split} \tag{7}\] indeed reproduces the joint distribution for \(z_{2},z_{3},\cdots\) in the original form of Eq. (5), as the smaller covariance matrix \(\tilde{K}\) is the submatrix of \(K\) excluding the first row and column. For simplicity, we use \(v\) to denote \(V/24\). In deriving the above, we have employed the fact that \(\int dz_{1}(\partial_{z_{1}})^{s}\mathcal{N}(z_{1},\tilde{\mathbf{z}})=0\) for any integer power \(s\geq 1\), meaning that the indices \(i,j,m,l\) shall exclude any contribution associated with \(z_{1}\). Thus, the tilded indices \(\tilde{i}\in\{2,3,4,\cdots\}\), and the corresponding derivatives can be factored out of the integral. Here we wish to stress that the above marginal property of the non-Gaussian prior may seem straightforward with the differential representation of the Edgeworth expansion, but it does not seem promising to reach the same conclusion if the marginalization is carried out with the explicit representation in Appendix A. Nevertheless, one can easily write the conditional distribution \(q(z_{1}|\tilde{\mathbf{z}})=q(z_{1},\tilde{\mathbf{z}})/q(\tilde{\mathbf{z}})\).
More interestingly, the conditional mean is useful for prediction with noiseless observations of data, and it also sheds light on the effect of the non-Gaussianity due to the finite width. The details of deriving the following conditional mean \[\begin{split}\mathbb{E}[z_{1}|\tilde{\mathbf{z}}]&=\int dz_{1}z_{1}\frac{q(z_{1},\tilde{\mathbf{z}})}{q(\tilde{\mathbf{z}})}\\ &=\mu(\tilde{\mathbf{z}})+\frac{1}{q(\tilde{\mathbf{z}})}A_{\tilde{i}\tilde{j}\tilde{m}}\partial_{z_{\tilde{i}}}\partial_{z_{\tilde{j}}}\partial_{z_{\tilde{m}}}\mathcal{N}(\tilde{\mathbf{z}}|0,\tilde{K})\;,\end{split} \tag{8}\] with the finite-width correction term proportional to the third-order tensor, \[A_{\tilde{i}\tilde{j}\tilde{m}}=\frac{V_{\tilde{i}\tilde{j}\tilde{m}\tilde{l}}}{6}[\tilde{K}^{-1}\mathbf{k}]_{\tilde{l}}-\frac{V_{1\tilde{i}\tilde{j}\tilde{m}}}{6}\;, \tag{9}\] can be found in Appendix B. In the GP limit, the conditional mean becomes the well-known result \(\mu(\tilde{\mathbf{z}})=\mathbf{k}^{t}\tilde{K}^{-1}\tilde{\mathbf{z}}\), as the fourth cumulant \(V\) vanishes in the large \(N\) limit. We also remind the readers that the tilded symbols are associated with the conditioned outputs \(z_{2,3,\cdots}\). In a similar manner, we can also show the conditional second moment, \[\begin{split}\mathbb{E}[z_{1}^{2}|\tilde{\mathbf{z}}]&=\int dz_{1}z_{1}^{2}\frac{q(z_{1},\tilde{\mathbf{z}})}{q(\tilde{\mathbf{z}})}\\ &=\mu^{2}(\tilde{\mathbf{z}})+\sigma^{2}+\frac{1}{q(\tilde{\mathbf{z}})}\bigg{[}2\mu(\tilde{\mathbf{z}})A_{\tilde{i}\tilde{j}\tilde{m}}\partial_{z_{\tilde{i}}}\partial_{z_{\tilde{j}}}\partial_{z_{\tilde{m}}}+B_{\tilde{i}\tilde{j}}\partial_{z_{\tilde{i}}}\partial_{z_{\tilde{j}}}\bigg{]}\mathcal{N}(\tilde{\mathbf{z}}|0,\tilde{K})\,,\end{split} \tag{10}\] with the second-order tensor \[B_{\tilde{i}\tilde{j}}=\frac{V_{\tilde{i}\tilde{j}\tilde{m}\tilde{l}}}{2}[\tilde{K}^{-1}\mathbf{k}]_{\tilde{m}}[\tilde{K}^{-1}\mathbf{k}]_{\tilde{l}}-V_{1\tilde{i}\tilde{j}\tilde{m}}[\tilde{K}^{-1}\mathbf{k}]_{\tilde{m}}+\frac{V_{11\tilde{i}\tilde{j}}}{2}\;. \tag{11}\] Here, \(\sigma^{2}=K_{11}-\mathbf{k}^{t}\tilde{K}^{-1}\mathbf{k}\) coincides with the conditional variance in the GP limit. The details of the derivation can also be found in Appendix B. It is worthwhile to note that the conditional variance, \(\mathbb{E}_{q}[z_{1}^{2}|\tilde{\mathbf{z}}]-(\mathbb{E}_{q}[z_{1}|\tilde{\mathbf{z}}])^{2}\), can include contributions related to the function values \(\tilde{\mathbf{z}}\) through the nonzero \(A\) and \(B\) terms in the above expressions. The lack of such dependence in the GP limit is in fact a shortcoming for modeling, which has motivated the study of non-Gaussian priors in the machine learning community (see, for example, the Student-t process in [21]).

### An example: bivariate distribution and prediction

Now let us pause to consider a simple example where a shallow network defined in Eq. (1) with width \(N\) is used to model two input-output pairs \((\mathbf{x}_{1},z_{1})\) and \((\mathbf{x}_{2},z_{2})\). In the end, we shall present the predictive mean and variance for \(z_{1}\) conditioned on \(z_{2}\). The following notations are used for simplification: the covariance \(c:=\exp(-|\mathbf{x}_{1}-\mathbf{x}_{2}|^{2}/2d)\), the 2-by-2 covariance matrix \(\Sigma=\left(\begin{smallmatrix}1&c\\ c&1\end{smallmatrix}\right)\) for the bivariate prior, \(z_{21}:=(z_{2}-cz_{1})/\sqrt{1-c^{2}}\), and analogously \(z_{12}:=(z_{1}-cz_{2})/\sqrt{1-c^{2}}\).
Besides, the following relation between derivatives of a Gaussian and Hermite polynomials will be useful, \[\partial_{z_{1}}^{n}\partial_{z_{2}}^{m}\mathcal{N}(z_{1},z_{2}|0,\Sigma)\propto\partial_{z_{1}}^{n}\left[\mathcal{N}(z_{1}|0,1)\partial_{z_{21}}^{m}\mathcal{N}(z_{21}|0,1)\right]\propto(-1)^{m}\frac{\mathcal{N}(z_{2}|0,1)}{\sqrt{(1-c^{2})^{m}}}\partial_{z_{1}}^{n}[H_{m}(z_{21})\mathcal{N}(z_{12}|0,1)]\,, \tag{12}\] where we have used \(\mathcal{N}(z_{1},z_{2}|0,\Sigma)\propto\mathcal{N}(z_{1}|0,1)\mathcal{N}(z_{21}|0,1)\) up to some irrelevant constant. The probabilist's Hermite polynomials are defined as \(H_{n}(z)=(-1)^{n}[\mathcal{N}(z|0,1)]^{-1}d_{z}^{n}\mathcal{N}(z|0,1)\) [19]. Consequently, following the previous discussion, the non-Gaussian bivariate distribution is shown to be \[\begin{split} q(z_{1},z_{2})&=\mathcal{N}(z_{1},z_{2}|0,\Sigma)\bigg{\{}1+\frac{\gamma^{4}}{16N}[H_{4}(z_{21})+H_{4}(z_{12})]+\frac{c\gamma^{4}}{4N}[3cH_{2}(z_{21})+H_{3}(z_{21})H_{1}(z_{12})+z_{12}\leftrightarrow z_{21}]\\ &+\frac{(c^{4}+2)\gamma^{4}}{8N}[2c^{2}+4cH_{1}(z_{21})H_{1}(z_{12})+H_{2}(z_{21})H_{2}(z_{12})]\bigg{\}}\;,\end{split} \tag{13}\] where the symbol \(\gamma=1/\sqrt{1-c^{2}}\) is used to simplify the expression. Besides, the fourth cumulants for the cosine network, \(V_{1111}=3/2N\), \(V_{1112}=3c/2N\), and \(V_{1122}=(c^{4}+2)/2N\), have been plugged in above. As for the conditional distribution, e.g. \(q(z_{1}|z_{2})=q(z_{1},z_{2})/q(z_{2})\), in the bivariate case one can simply divide the above by \(q(z_{2})=\mathcal{N}(z_{2}|0,1)[1+H_{4}(z_{2})/16N]\), which yields a conditional Gaussian \(\mathcal{N}(z_{1}|z_{2})\) multiplied by a factor representing the finite-width effect. We can directly apply Eq. (8) to obtain the conditional mean in this simple example, where the tilded indices only apply to \(z_{2}\). Surprisingly, the conditional mean in this noiseless case is \[\mathbb{E}[z_{1}|z_{2}]=cz_{2}\,, \tag{14}\] coinciding with that in the GP limit because of the _accidental_ cancellation in the third-order tensor \(A_{222}\propto(cV_{2222}-V_{1222})\). We shall show that the cancellation does not occur in the noisy (next Section) and more general cases. Application of Eq. (10) together with the above conditional mean results in the following conditional variance, \[\text{Var}[z_{1}|z_{2}]=(1-c^{2})\bigg{[}1+\frac{4(2-c^{2})}{16N+H_{4}(z_{2})}H_{2}(z_{2})\bigg{]}\;, \tag{15}\] which consists of the corresponding variance in the GP limit along with a term depending on \(z_{2}^{2}\).

## IV Bayesian regression with non-Gaussian prior

Having established the marginal and conditional properties of the non-Gaussian distribution in Eq. (5), we now investigate the posterior distribution over the unseen function value \(z_{*}\) associated with input \(\mathbf{x}_{*}\) when the noisy observations \(\mathcal{D}=\{(\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2}),\cdots\}\) are known. According to Bayes's rule, the objective distribution is the posterior distribution defined as \[q(z_{*}|\mathcal{D})=\frac{q(z_{*},\mathcal{D})}{q(\mathcal{D})}=\frac{\int d\mathbf{z}\ q(z_{*},\mathbf{z})\mathcal{N}(\mathbf{y}|\mathbf{z},\Lambda)}{\int d\mathbf{z}\ q(\mathbf{z})\mathcal{N}(\mathbf{y}|\mathbf{z},\Lambda)}\;, \tag{16}\] with \(\Lambda\) denoting a diagonal matrix representing the noise in the Gaussian likelihood.
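As a noiseless reference point for the posterior developed in this section, the bivariate predictor of Eqs. (14)-(15) is simple enough to transcribe directly; a sketch follows, with illustrative values of \(c\), \(z_{2}\), and \(N\).

```python
# Direct transcription of the noiseless conditional statistics of the
# bivariate example, Eqs. (14)-(15). Parameter values are illustrative.
import numpy as np

def h2(z): return z**2 - 1.0
def h4(z): return z**4 - 6 * z**2 + 3.0

def conditional_stats(z2, c, N):
    mean = c * z2                                  # Eq. (14): GP-limit mean
    var = (1 - c**2) * (1 + 4 * (2 - c**2) * h2(z2) / (16 * N + h4(z2)))
    return mean, var                               # Eq. (15)

for N in (10, 100, 10**6):                         # variance -> GP limit
    print(N, conditional_stats(z2=1.5, c=0.8, N=N))
```

The printed variances converge to \((1-c^{2})\) as \(N\) grows, while the finite-width correction depends on the observed value through \(H_{2}(z_{2})\), a dependence absent in the GP limit.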
When the \(q\)'s in the numerator and denominator are both Gaussian, which corresponds to the infinitely wide network, the limiting distribution reads \[\begin{split} q_{\infty}(z_{*}|\mathcal{D})&=\int d\mathbf{z}\ \mathcal{N}(z_{*}|\mathbf{k}_{*}^{t}K^{-1}\mathbf{z},k_{**}-\mathbf{k}_{*}^{t}K^{-1}\mathbf{k}_{*})\ \mathcal{N}(\mathbf{z}|K(K+\Lambda)^{-1}\mathbf{y},\Lambda K(K+\Lambda)^{-1})\\ &=\mathcal{N}\big{(}z_{*}|\mathbf{k}_{*}^{t}(K+\Lambda)^{-1}\mathbf{y},k_{**}-\mathbf{k}_{*}^{t}(K+\Lambda)^{-1}\mathbf{k}_{*}\big{)}\;.\end{split}\] In the first equality above, the first and second Gaussian distributions on the right hand side represent the conditional density \(q_{\infty}(z_{*}|\mathbf{z})\) and the posterior \(q_{\infty}(\mathbf{z}|\mathcal{D})\), respectively. As for the case of finite width, the involved Gaussian likelihood can be dealt with by continued application of the properties of the adjoint differential operator, as well as the derivative of independent Gaussians. The details of the derivation of the evidence \(q(\mathcal{D})\) can be seen in Appendix C, and the expression in terms of derivatives with respect to the observed \(y\)'s reads \[\begin{split} q(\mathcal{D})&=\int d\mathbf{z}\ \mathcal{N}(\mathbf{y}|\mathbf{z},\Lambda)q(\mathbf{z})\\ &=(1+v_{ijml}\partial_{y_{i}}\partial_{y_{j}}\partial_{y_{m}}\partial_{y_{l}})\mathcal{N}(\mathbf{y}|0,K+\Lambda)\;.\end{split} \tag{17}\] Despite its simple form, the evidence term cannot be manipulated further as in the infinite-width case. The details of the derivation of the posterior distribution for the output \(z_{*}\) at the unseen input, \[\begin{split} q(z_{*}|\mathcal{D})&=\frac{\int d\mathbf{z}\ \mathcal{N}(\mathbf{y}|\mathbf{z},\Lambda)q(z_{*},\mathbf{z})}{q(\mathcal{D})}\\ &=(1+v_{\hat{i}\hat{j}\hat{m}\hat{l}}\partial_{\hat{i}}\partial_{\hat{j}}\partial_{\hat{m}}\partial_{\hat{l}})\mathcal{N}\big{[}\big{(}\begin{smallmatrix}z_{*}\\ \mathbf{y}\end{smallmatrix}\big{)}\big{|}0,\big{(}\begin{smallmatrix}1&\mathbf{k}_{*}^{t}\\ \mathbf{k}_{*}&K+\Lambda\end{smallmatrix}\big{)}\big{]}/q(\mathcal{D})\;,\end{split} \tag{18}\] can also be found in Appendix C. Here, the notations can be understood as follows: the hatted indices, \(\hat{i}\) for instance, additionally include the symbol \(*\) associated with the test function value \(z_{*}\), and the derivatives refer to \(\partial_{*}:=\partial_{z_{*}}\) (with respect to the function value) and \(\partial_{i}:=\partial_{y_{i}}\) (with respect to the observed values). We conclude this section by considering the same simple regression example, but with a noisy observation \(y_{2}\) at input \(\mathbf{x}_{2}\), from which we wish to predict the unobserved function value \(z_{1}\). To stress the role of the noise parameter \(\sigma_{n}\), we only show the predictive mean here, \[\mathbb{E}[z_{1}|y_{2}]=c\alpha_{n}^{2}y_{2}-\frac{c\sigma_{n}^{2}\alpha_{n}^{5}H_{3}(\alpha_{n}y_{2})}{4N+\alpha_{n}^{4}H_{4}(\alpha_{n}y_{2})}\;, \tag{19}\] with \(\alpha_{n}=1/\sqrt{1+\sigma_{n}^{2}}\). The correction term vanishes as the noise parameter does, and it is odd with respect to a sign change of \(y_{2}\).

## V Deep bottlenecked network

Up to this point, we have focused on the emergence of non-Gaussianity in the prior function distribution of a wide and shallow network, as well as on how the non-Gaussian prior affects the inference in a regression task. Our approach can be extended to deep and wide neural networks as the corresponding prior distributions approach GPs [5; 6].
However, such a wide-width assumption leading to _weak_ non-Gaussianity is not valid for the bottlenecked networks in which the hidden layers are alternately wide and narrow [14; 22]. Let us consider a two-layer bottlenecked feed-forward network modeling the hierarchical mapping \(\mathbf{x}\in\mathbb{R}^{d}\mapsto\mathbf{z}^{(1)}\in\mathbb{R}^{H}\mapsto z^{(2)}\in\mathbb{R}\). The hierarchy consists of the following computations, \[\begin{split} z^{(1)}_{i,\alpha}&=\sqrt{\frac{2}{N_{1}}}\sum_{j=1}^{N_{1}}w^{(1)}_{ij}\cos(\frac{\Omega^{(1)}_{j}\cdot\mathbf{x}_{\alpha}}{\sqrt{d}}+\phi^{(1)}_{j})\,,\\ z^{(2)}_{\alpha}&=\sqrt{\frac{2}{N_{2}}}\sum_{j=1}^{N_{2}}w^{(2)}_{j}\cos(\frac{\Omega^{(2)}_{j}\cdot\mathbf{z}^{(1)}_{\alpha}}{\sqrt{H}}+\phi^{(2)}_{j})\;,\end{split} \tag{20}\] where the parameters \(w\)'s, \(\Omega\)'s, and \(\phi\)'s share the same prior distributions as their counterparts in the shallow network. The widths of the hidden layers, \(N_{1}\) and \(N_{2}\), are large but the hidden output dimension \(H\) is not. In the limit \(N_{1,2}\to\infty\) while \(H\) remains finite, the two-layer cosine bottlenecked network corresponds to the prior of a deep Gaussian process [15; 23; 24] with a Gaussian kernel, which is a flexible and expressive function prior due to its compositional nature. As its name suggests, the conditional prior \(z^{(2)}|\mathbf{z}^{(1)}\) is a GP and so is each component \(z^{(1)}_{i}|\mathbf{x}\) in the hidden output. However, the marginal distribution for \(z^{(2)}|\mathbf{x}\) is non-Gaussian, as [13] showed that the fourth cumulant is positive, i.e., the distribution is heavy-tailed [25]. Here, we are interested in tracking how the small parameters, \(1/N_{1}\) and \(1/N_{2}\), and the bottleneck parameter \(1/H\) enter the second moment. For a deep and finitely wide linear network, the prior is non-Gaussian [8; 26] but the second moment does not receive a correction due to the finite width [8; 9]. Thus, the effects of nonlinear activation on higher statistical moments are interesting (also see a recent work in [27]). For the deep cosine network in Eq. (20), we can first consider the wide limit, \(N_{1,2}\to\infty\), with the bottleneck width \(H\) remaining finite. Following the result in Eq. (2), the covariance for the deep model can be computed as follows, \[\begin{split} k^{(2)}_{\infty}(\mathbf{x}_{\alpha},\mathbf{x}_{\beta})&=\mathbb{E}_{q_{\infty}}\bigg{\{}\exp\big{(}-\frac{|\mathbf{z}^{(1)}_{\alpha}-\mathbf{z}^{(1)}_{\beta}|^{2}}{2H}\big{)}\bigg{\}}\\ &=\bigg{[}\int dz_{\alpha}dz_{\beta}\ e^{-\frac{(z_{\alpha}-z_{\beta})^{2}}{2H}}q_{\infty}(z_{\alpha},z_{\beta})\bigg{]}^{H}\\ &=\big{[}1+\frac{1-\exp(-\frac{|\mathbf{x}_{\alpha}-\mathbf{x}_{\beta}|^{2}}{2d})}{H/2}\big{]}^{-H/2}\,.\end{split} \tag{21}\] Note that the decomposition \(\mathcal{N}(z_{1},z_{2}|0,K)=\mathcal{N}(z_{1}+z_{2}|0,\sigma^{2}_{+})\mathcal{N}(z_{1}-z_{2}|0,\sigma^{2}_{-})\)[28] has been used in the derivation, with the variances \(\sigma^{2}_{\pm}=2(K_{11}\pm K_{12})\). We want to stress that the above kernel is exact for any \(H\). For the very narrow case, i.e., \(H=1\), the above result coincides with that in [13], while in the very wide bottleneck case, \(H\gg 1\), it can be shown that \[k^{(2)}_{\infty}(\mathbf{x}_{\alpha},\mathbf{x}_{\beta})\approx\exp[k(\mathbf{x}_{\alpha},\mathbf{x}_{\beta})-1]\big{\{}1+\frac{1}{H}[k(\mathbf{x}_{\alpha},\mathbf{x}_{\beta})-1]^{2}\big{\}}\,, \tag{22}\] where the covariance \(k\) in the shallow network is given in (2).
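For concreteness, the exact kernel in Eq. (21) takes only a few lines to evaluate; the sketch below (purely illustrative, assuming the Gaussian shallow kernel of Eq. (2)) also makes the approach to the \(H\to\infty\) limit of Eq. (22) visible:

```python
import numpy as np

def deep_kernel_inf(xa, xb, H):
    """Eq. (21): kernel of the two-layer cosine network with infinitely
    wide hidden layers (N_1, N_2 -> infinity) and finite bottleneck width H."""
    d = xa.shape[-1]
    k = np.exp(-np.sum((xa - xb) ** 2) / (2.0 * d))  # shallow Gaussian kernel, Eq. (2)
    return (1.0 + 2.0 * (1.0 - k) / H) ** (-H / 2.0)

xa, xb = np.random.randn(8), np.random.randn(8)
k = np.exp(-np.sum((xa - xb) ** 2) / (2.0 * 8))
for H in (1, 10, 10_000):   # tends to exp(k - 1) as H grows, cf. Eq. (22)
    print(H, deep_kernel_inf(xa, xb, H), np.exp(k - 1.0))
```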
In addition, the exponential of the shallow kernel above corresponds to \(H\to\infty\), which is the same as the deep kernel in [29]. Indeed, as suggested in [9], the appearance of the reciprocal width, \(1/H\), in the correction term is associated with the weak non-Gaussianity even though the other widths \(N_{1,2}\) are infinite. Next, we shall compute the covariance for the case where all widths are finite. In fact, the outer width \(N_{2}\) does not enter the kernel and we only need large but finite \(N_{1}\). The computation is similar and one only needs to replace the limiting joint distribution \(q_{\infty}(z^{(1)}_{\alpha},z^{(1)}_{\beta})\) with the non-Gaussian \(q\). The details of the derivation, which uses the derivative-of-Gaussian property, can be found in Appendix D. The kernel of the two-layer bottlenecked cosine network with \(N_{1,2}<\infty\) reads, \[k^{(2)}(\mathbf{x}_{\alpha},\mathbf{x}_{\beta})=\bigg{[}1+\frac{1-k(\mathbf{x}_{\alpha},\mathbf{x}_{\beta})}{H/2}\bigg{]}^{-H/2}(1+\epsilon)^{H}\;, \tag{23}\] with the correction due to finite \(N_{1}\), \[\epsilon=\frac{V_{\alpha\alpha\alpha\alpha}+V_{\beta\beta\beta\beta}-4V_{\alpha\beta\beta\beta}-4V_{\beta\alpha\alpha\alpha}+6V_{\alpha\alpha\beta\beta}}{24}[\frac{3}{H^{2}}-\frac{6\sigma^{2}_{-}}{H^{3}(1+\sigma^{2}_{-}/H)}+\frac{3\sigma^{4}_{-}}{H^{4}(1+\sigma^{2}_{-}/H)^{2}}]\;, \tag{24}\] where \(\sigma^{2}_{-}=2(1-k)\) is used to ease the notation. Again, the above result is for general \(H\). For the wide bottleneck limit, one can show that these small parameters enter the kernel with corrections of \(O(\frac{1}{H})\) followed by \(O(\frac{1}{N_{1}H})\). Hence, the deep model as a function prior serves as a GP with a random kernel whose mean value converges to Eq. (22) when all widths \(N_{1,2}\) and \(H\) approach infinity. However, the role of \(H\) differs from that of \(N_{1}\) in our perturbative analysis, and \(1/N_{1}\) does not appear alone in the small parameter. The influence of finite width on the kernel of deep nonlinear and convolutional models is also studied in [30; 31; 32]. The fourth moment can be computed in a similar manner. The closed form for \(N_{1,2}\rightarrow\infty\) can be found in [13]. For the finite-width case, the fourth cumulant \(V_{\alpha\beta\gamma\delta}^{(2)}\) becomes the \(H\)-th power of the permutational symmetrization of \(\frac{1}{N_{2}}\mathbb{E}_{q}\{\exp[-(z_{\alpha}^{(1)}+z_{\beta}^{(1)}-z_{\gamma}^{(1)}-z_{\delta}^{(1)})^{2}/2H]\}\), the small parameters of which shall consist of \(1/N_{2}\), \(1/(N_{2}H)\), and \(1/(N_{2}HN_{1})\). The analysis of scaling with respect to the reciprocal width is more complex than for the deep linear network.

## VI Discussions

In essence, the finite width in random neural networks induces a nonzero variance of the kernel since the fourth cumulant is not zero. From the function-space perspective, shallow networks with finitely large width can be regarded as a GP, but the kernel itself is a random variable too, so the learned data representation is a distribution over kernels. Therefore, a neural network with very narrow layers may not have enough expressive power for learning, while a network with very wide layers is not flexible enough because it is equivalent to GP learning with one fixed kernel. It was recently suggested in Ref. [33], through studying the partition function [34; 35] of finite deep networks, that the Student-t process [21] may suitably represent a finite-width network, which offers an alternative description to the one in this paper.
The reports in [24; 30] observed a degradation of performance for Bayesian deep neural networks when expanding the widths, while the study in [36] suggested otherwise. Taking the wide limit with the perturbative approach is appealing because of the analytic and elegant expressions for inference, which may shed light on future investigations of alternative algorithms for classification [37], and on studies of the average test error in regression [38; 39] and classification [40] tasks using finite-width networks. In this paper, the conditional statistics and the perturbed posterior distribution over the unseen output using the non-Gaussian prior are obtained with the help of the differential representation of the multivariate Edgeworth expansion. In parallel with the equivalence between the GP prior and the random wide neural network, the investigation of the neural tangent kernel [41; 17; 42] is important for understanding the learning dynamics during optimization and the relation to Bayesian neural network learning [43; 44].

## Acknowledgement

The research is supported by the Dean Office of School of Arts and Science at Rutgers University Newark. Correspondence with Jacob Zavatone-Veth and Pietro Rotondo is acknowledged.
2305.08744
Integrating Uncertainty into Neural Network-based Speech Enhancement
Supervised masking approaches in the time-frequency domain aim to employ deep neural networks to estimate a multiplicative mask to extract clean speech. This leads to a single estimate for each input without any guarantees or measures of reliability. In this paper, we study the benefits of modeling uncertainty in clean speech estimation. Prediction uncertainty is typically categorized into aleatoric uncertainty and epistemic uncertainty. The former refers to inherent randomness in data, while the latter describes uncertainty in the model parameters. In this work, we propose a framework to jointly model aleatoric and epistemic uncertainties in neural network-based speech enhancement. The proposed approach captures aleatoric uncertainty by estimating the statistical moments of the speech posterior distribution and explicitly incorporates the uncertainty estimate to further improve clean speech estimation. For epistemic uncertainty, we investigate two Bayesian deep learning approaches: Monte Carlo dropout and Deep ensembles to quantify the uncertainty of the neural network parameters. Our analyses show that the proposed framework promotes capturing practical and reliable uncertainty, while combining different sources of uncertainties yields more reliable predictive uncertainty estimates. Furthermore, we demonstrate the benefits of modeling uncertainty on speech enhancement performance by evaluating the framework on different datasets, exhibiting notable improvement over comparable models that fail to account for uncertainty.
Huajian Fang, Dennis Becker, Stefan Wermter, Timo Gerkmann
2023-05-15T15:55:12Z
http://arxiv.org/abs/2305.08744v1
# Integrating Uncertainty into Neural Network-based Speech Enhancement

###### Abstract

Supervised masking approaches in the time-frequency domain aim to employ deep neural networks to estimate a multiplicative mask to extract clean speech. This leads to a single estimate for each input without any guarantees or measures of reliability. In this paper, we study the benefits of modeling uncertainty in clean speech estimation. Prediction uncertainty is typically categorized into _aleatoric uncertainty_ and _epistemic uncertainty_. The former refers to inherent randomness in data, while the latter describes uncertainty in the model parameters. In this work, we propose a framework to jointly model aleatoric and epistemic uncertainties in neural network-based speech enhancement. The proposed approach captures aleatoric uncertainty by estimating the statistical moments of the speech posterior distribution and explicitly incorporates the uncertainty estimate to further improve clean speech estimation. For epistemic uncertainty, we investigate two Bayesian deep learning approaches: Monte Carlo dropout and Deep ensembles to quantify the uncertainty of the neural network parameters. Our analyses show that the proposed framework promotes capturing practical and reliable uncertainty, while combining different sources of uncertainties yields more reliable predictive uncertainty estimates. Furthermore, we demonstrate the benefits of modeling uncertainty on speech enhancement performance by evaluating the framework on different datasets, exhibiting notable improvement over comparable models that fail to account for uncertainty.

_Index Terms_: Speech enhancement, Bayesian estimator, uncertainty estimation, deep neural networks.

## I Introduction

Speech recorded in noisy environments is often corrupted by background noise, which renders it difficult to understand by either humans or machines via automatic speech recognition systems. These problems call for robust speech enhancement algorithms, which extract the desired clean speech from noisy mixtures to improve the speech quality and intelligibility of recordings. In this paper, we consider single-channel speech enhancement. Speech enhancement algorithms typically utilize the short-time Fourier transform (STFT) to transfer the recorded signal into the time-frequency domain, where multiplicative filters can be applied to obtain an estimate of clean speech [1, 2]. Various Bayesian estimators, e.g., maximum a posteriori (MAP) and minimum mean squared error (MMSE) estimators, have been developed based on different statistical assumptions about speech and noise, aiming to restore either the spectral coefficients of the STFT or the spectral magnitudes [3, 4, 5, 6]. Given the assumption that speech is degraded by uncorrelated additive noise and that both follow complex Gaussian distributions with zero mean, the well-known Wiener filter can be derived. Traditionally, the speech and noise variances estimated by statistical model-based methods [1, 7] can be used to construct the MMSE-optimal Wiener filter. Recently, neural networks have been widely used in speech enhancement methods due to their flexibility and effectiveness in nonlinear modeling. Depending on their application, varying degrees of success are reported [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. Specifically, deep neural networks have been utilized to replace some of the building blocks of conventional speech enhancement methods.
For instance, a neural network-based speech presence probability estimator has been proposed in [8] and combined with a single-channel multi-frame approach [9]. In [10, 11], neural networks are employed to estimate the speech and noise power spectral densities that are required in various Bayesian estimators. Additionally, recent work has leveraged the probabilistic modeling of generative networks for speech enhancement. For example, the variational autoencoder (VAE) has been used to estimate the clean speech distribution, which is then combined with a separate noise model to construct a noise reduction Wiener filter [12, 14]. The robustness of this filter can be further improved by injecting noise information [16], temporal dependencies [20, 21, 22], and information from other modalities, such as vision [17, 23]. Besides, speech enhancement approaches based on perceptual metric-guided adversarial training [24, 25] and diffusion-based generative models [26, 27] have also been presented. In contrast, supervised masking approaches [18] aim to learn a mapping from the noisy input to a masking filter. This allows neural networks to directly estimate a time-frequency filter by training on a large amount of noisy-clean speech pairs using an appropriate cost function [19]. In this work, we focus on supervised masking approaches. While the time-frequency noise-removing filter aims to remove noise with minimum speech distortions, the algorithm's robustness and reliability are not guaranteed, especially when speech is corrupted by previously unobserved noise. To alleviate this shortcoming, research has been conducted to investigate how to generalize to unseen situations by, e.g., developing more sophisticated network architectures, improved features, or including more training data that covers a wide variety of acoustic scenarios [28, 29, 30]. The first is often accompanied by a tremendous increase in model parameters, while the latter is rather time-demanding. Still, improving the generalization ability of neural networks in unseen scenarios is an unsolved problem considering the black-box nature of neural networks. It is thus necessary and beneficial to obtain the associated uncertainty as an indicator of reliability besides the point estimate, especially when the model is processing out-of-distribution samples that are insufficiently represented by the training data. In machine learning, predictive uncertainty is typically decomposed into two categories [31, 32, 33]: _aleatoric_ uncertainty and _epistemic_ uncertainty. The term aleatoric uncertainty is used to describe the uncertainty of an estimate due to the intrinsic randomness of noisy observations. For speech enhancement, it originates from the stochastic nature of both speech and noise and is reflected in the variance of the clean speech posterior predictive distribution. Epistemic uncertainty is of a different nature: If the parameters of a neural network are trained, e.g., using different training data, different initialization, or a different number of epochs, different parameters result. Therefore, also the parameters of a neural network used to estimate clean speech are uncertain. This uncertainty of the parameters is called epistemic uncertainty (also known as _model uncertainty_). For a general introduction to uncertainty modeling, readers are referred to a review article by Hüllermeier et al. [31].
Various uncertainty measures have been employed in the deep regression setting, such as confidence intervals, differential entropy, and variance. Depeweg et al. [34] propose to measure uncertainty based on the entropy of the predictive distribution, which represents the information level of random variables. Pearce et al. [35] use confidence intervals (which state how certain the estimate is within a certain range) in a distribution-free setting. In this paper, we address uncertainty modeling in a probabilistic way following [33, 36, 37] and measure the uncertainty in terms of the _variance_. **Aleatoric uncertainty**. Due to the stochastic nature of speech and noise, a mapping from noisy speech to clean speech is uncertain, as reflected by the posterior predictive distribution of clean speech. We can model this posterior using a specific conditional distribution, such as a Gaussian or a Laplacian [33, 36, 38], and employ a neural network to directly estimate the statistical moments of this distribution. While the predicted mean is the MMSE estimate of the target [2], the associated variance can be used to quantify the data-inherent uncertainty, i.e., aleatoric uncertainty [33]. Few studies in neural network-based speech enhancement have incorporated the uncertainty of aleatoric nature. Chai et al. propose to use a generalized Gaussian distribution to model the prediction error on a logarithmic scale [39]. In [40], a neural network is used to estimate the parameters of a Gaussian mixture model, which then serves as the basis of an extra statistical model-based speech enhancement approach. This results in only a slight improvement over the baseline optimized with the MMSE criterion. Siniscalchi [41] leverages neural networks to learn a histogram distribution to approximate the conditional target speech distribution, which is assumed to be a truncated Gaussian distribution with a fixed variance in each frequency band. However, the fixed variance does not help to capture data-dependent uncertainty. **Epistemic uncertainty**. Estimating the statistical moments of the speech posterior predictive distribution allows capturing aleatoric uncertainty, but fails to account for epistemic uncertainty, which corresponds to the uncertainty in the neural network parameters [31, 32, 33]. Epistemic uncertainty can be captured using Bayesian inference approaches, which, instead of modeling the parameters of a neural network as _deterministic_ values, place a distribution over the network parameters and estimate the posterior distribution of the _stochastic_ network parameters [33]. By sampling from the posterior network parameter distribution, multiple sets of neural network parameter realizations can be obtained, thus producing multiple output predictions for each input sample. Uncertainty in predictions due to epistemic uncertainty can be empirically quantified by the variance in these output predictions [31, 33]. While the true posterior network distribution is intractable [42], it can be approximated using 1) Markov Chain Monte Carlo (MCMC) methods [43, 44], which are sampling-based approaches that construct a Markov Chain with the posterior network parameter distribution as its stationary distribution, 2) variational inference [42, 45, 46], which approximates the true posterior network parameter distribution with a tractable variational distribution, and 3) ensemble approaches [36, 47, 48], which were proposed from the frequentist perspective but are considered an approximate Bayesian approach [37, 49].
For instance, Gal et al. [42] perform variational inference and interpret the dropout regularization technique [50] as imposing Bernoulli distributions on the neural network's weights. This method, referred to as Monte Carlo dropout (_MC dropout_), provides a set of target estimates from multiple forward passes by activating dropout at inference. This set of predictions can empirically approximate the outcome distribution for each input sample and allows inference of the variance (i.e., epistemic uncertainty). In contrast, _Deep ensembles_ proposed in [36] can quantify epistemic uncertainty by training multiple neural networks with random weight initialization [37, 38]. Recent studies attempt to consider the uncertainty of epistemic nature in, e.g., speech emotion recognition [51, 52] and speech recognition [53, 54, 55]. In [51], epistemic uncertainty is captured in a speech emotion recognition model for selective prediction, where samples with low confidence (high uncertainty) are rejected. Braun et al. [53] apply a Gaussian distribution to the weights of an end-to-end speech recognition model to capture the uncertainty of the neural network parameters, which is then used for parameter pruning. In a recent publication [54], epistemic uncertainty is employed to improve the robustness of domain adaptation for speech recognition. However, quantifying epistemic uncertainty in neural network-based speech enhancement remains unexplored. **Contributions**. Capturing overall predictive uncertainty, which reflects both aleatoric and epistemic uncertainties, is challenging, especially for deep neural networks, but crucial for an understanding of the model's prediction behaviour. In this work, we propose a method that allows capturing aleatoric uncertainty and combining it with epistemic uncertainty approximations to quantify overall predictive uncertainty. In the context of neural network-based speech enhancement, to the best of our knowledge, this is the first work to study different sources of uncertainty in a joint framework and to provide systematic analyses. We follow the complex Gaussian speech-plus-noise assumption and propose to train a neural network to estimate the Wiener filter and its variance, which quantifies aleatoric uncertainty, based on the MAP inference of _complex spectral coefficients_. To regularize the variance estimation, we build an approximate MAP (AMAP) estimator of _spectral magnitudes_ using the estimated Wiener filter (mean of the complex clean speech posterior predictive distribution) and the uncertainty (variance of the complex clean speech posterior distribution) explicitly. The resulting AMAP estimator is in turn used in conjunction with the MAP inference of complex spectral coefficients to form a novel hybrid loss function. Rather than discarding uncertainty information at inference, the proposed scheme allows us to explicitly incorporate aleatoric uncertainty approximations into clean speech estimation in a principled way to further correct erroneous speech estimates. Previous studies on modeling epistemic uncertainty have focused on tasks other than speech enhancement, e.g., [38, 51, 52, 53, 54, 55, 56]. Yet, questions such as how reliable and accurate the estimates of epistemic uncertainty are in speech enhancement, and how modeling epistemic uncertainty affects enhancement performance, have not been addressed.
To this end, we investigate two Bayesian deep learning techniques: MC dropout [42] and Deep ensembles [36] to capture epistemic uncertainty in clean speech estimation due to their efficiency in approximating Bayesian inference. Although previous works have explored ensemble-based speech enhancement methods [57, 58], they did not investigate the effectiveness of ensemble-based methods for uncertainty estimation. Moreover, we propose to estimate overall predictive uncertainty reflecting both aleatoric and epistemic uncertainties by combining the proposed hybrid loss function with the ensemble-based method. Finally, we present a comprehensive analysis of uncertainty from different sources and show their impacts on speech enhancement performance over different datasets, which we hope lays the foundation for further use of uncertainties. This paper extends our previous conference publication [59], which studied aleatoric uncertainty. Here, we propose to additionally capture epistemic uncertainty and combine them to quantify overall predictive uncertainty in clean speech estimation. Furthermore, we provide a more detailed analysis with respect to uncertainty estimates from different sources in a joint framework. Section II describes the signal model. In Section III, we propose to estimate the uncertainty of aleatoric nature following the complex Gaussian-distributed speech posterior and present how this uncertainty can be incorporated into clean speech estimation. In Section IV, we show how to capture epistemic uncertainty and quantify overall predictive uncertainty that combines different sources of uncertainty. We introduce the experimental setting in Section V, analyze uncertainty estimates in Section VI, and present enhancement performance in Section VII. Section VIII summarizes the findings. ## II Signal Model In the single-channel speech enhancement problem, the noisy mixture consists of clean speech and additive noise. We apply the STFT to obtain the representation in the time-frequency domain as: \[X_{ft}\!=\!S_{ft}\!+\!N_{ft}, \tag{1}\] where \(X_{ft}\), \(S_{ft}\), and \(N_{ft}\) represent the complex spectral coefficients of mixture, speech, and noise, at the time frame \(t\!\in\!\{1\!,2\!,\cdots\!,\!T\}\) and the frequency bin \(f\!\in\!\{1\!,2\!,\!\cdots\!,\!F\}\). \(T\) and \(F\) denote the number of time frames and frequency bins respectively. The objective is to recover clean speech in the time-frequency domain by applying a multiplicative filter. To derive such a filter, various assumptions are made according to different signal characteristics. By assuming that the speech and noise coefficients are uncorrelated and follow a circularly symmetric complex Gaussian distribution, \[S_{ft}\!\sim\!\mathcal{N}_{\text{C}}(0,\!\sigma_{s,ft}^{2}),\ \ \ N_{ft}\! \sim\!\mathcal{N}_{\text{C}}(0,\!\sigma_{n,ft}^{2}), \tag{2}\] where \(\sigma_{s,ft}^{2}\) and \(\sigma_{n,ft}^{2}\) represent the variances of speech and noise respectively, the likelihood \(p(X_{ft}|S_{ft})\) follows a complex Gaussian distribution with mean \(S_{ft}\) and variance \(\sigma_{n,ft}^{2}\), given by \[p(X_{ft}|S_{ft})\!=\!\frac{1}{\pi\sigma_{n,ft}^{2}}\!\exp\!\left(\!-\frac{|X _{ft}\!-\!S_{ft}|^{2}}{\sigma_{n,ft}^{2}}\!\right). 
\tag{3}\] With the likelihood in (3) and the prior in (2), we can apply Bayes' theorem to obtain the posterior distribution of clean speech as a complex Gaussian of the form [2]: \[p(S_{ft}|X_{ft})\!=\!\frac{1}{\pi\lambda_{ft}}\!\exp\!\left(\!-\frac{|S_{ft}\!-\!W_{ft}^{\text{WF}}X_{ft}|^{2}}{\lambda_{ft}}\right), \tag{4}\] \[W_{ft}^{\text{WF}}\!=\!\frac{\sigma_{s,ft}^{2}}{\sigma_{s,ft}^{2}\!+\!\sigma_{n,ft}^{2}},\ \ \lambda_{ft}\!=\!\frac{\sigma_{s,ft}^{2}\sigma_{n,ft}^{2}}{\sigma_{s,ft}^{2}\!+\!\sigma_{n,ft}^{2}}. \tag{5}\] \(W_{ft}^{\text{WF}}\) is recognized as the _Wiener filter_ and \(\lambda_{ft}\) is the variance of the posterior distribution. Under this assumption, the MMSE estimator, which corresponds to the expectation of the posterior distribution, leads to the Wiener filter applied as: \[\widetilde{S}_{ft}\!=\!W_{ft}^{\text{WF}}\!\cdot\!X_{ft}. \tag{6}\] Due to the symmetry of the complex Gaussian distribution, the MAP estimator of complex speech coefficients is identical to the MMSE estimator.

## III Aleatoric Uncertainty Estimation

Although speech enhancement is typically formulated as a problem with a single output, the dependency between input and output can be modeled stochastically by means of a speech posterior predictive distribution \(p(S_{ft}|X_{ft})\), i.e., a variance \(\lambda_{ft}\) is associated with the clean speech estimate and can be interpreted as a measure of uncertainty of the Wiener estimate [2]. This uncertainty accounts for random effects in data and is referred to as _aleatoric uncertainty_[33, 36]. When properly captured, aleatoric uncertainty can reflect the expected estimation error in the absence of ground truth.

### _Deep Aleatoric Uncertainty Estimation_

In contrast to traditional signal processing techniques [1, 2, 60], where the Wiener filter is constructed by separately estimating the variances of speech and noise from the noisy mixture \(X_{ft}\), neural network-based supervised masking methods allow direct estimation of multiplicative filters. Besides the Wiener filter \(W_{ft}^{\text{WF}}\), one can further estimate the data-dependent aleatoric uncertainty \(\lambda_{ft}\) if the neural network is optimized using the speech posterior predictive distribution (4), i.e., by minimizing the negative logarithm of the posterior distribution of clean speech \(p(S_{ft}|X_{ft})\) (the logarithm does not affect the optimization problem due to monotonicity) and averaging over time-frequency bins: \[\widetilde{W}_{ft}^{\text{WF}},\widetilde{\lambda}_{ft}\!=\] \[\underset{W_{ft}^{\text{WF}},\lambda_{ft}}{\operatorname{argmin}}\underbrace{\frac{1}{FT}\!\sum_{f,t}\!\log(\lambda_{ft})\!+\!\frac{|S_{ft}\!-\!W_{ft}^{\text{WF}}X_{ft}|^{2}}{\lambda_{ft}}}_{\mathcal{L}_{p(S|X)}}, \tag{7}\] where \(\widetilde{W}_{ft}^{\text{WF}}\) and \(\widetilde{\lambda}_{ft}\) denote estimates of the Wiener filter and the associated aleatoric uncertainty [33, 36]. In contrast, if we assume a constant uncertainty for all time-frequency bins, i.e., \(\lambda_{ft}\!=\!\lambda^{*}\), and refrain from explicitly optimizing for \(\lambda^{*}\), \(\mathcal{L}_{p(S|X)}\) degenerates into the well-known mean squared error (MSE) loss \[\mathcal{L}_{\text{MSE}}\!=\!\frac{1}{FT}\!\sum_{f,t}\!|S_{ft}\!-\!W_{ft}^{\text{WF}}X_{ft}|^{2}, \tag{8}\] which is widely used in neural network-based regression tasks including speech enhancement [19].
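For illustration, Eqs. (7) and (8) can be rendered compactly in code. The sketch below is our own PyTorch-style illustration; the two network heads `W` and `log_var` are assumed names for the mask and log-variance outputs, not identifiers from the authors' implementation:

```python
import torch

def negative_log_posterior_loss(W, log_var, X, S):
    """Eq. (7): joint Wiener-filter and aleatoric-variance objective.
    X and S are complex STFT tensors of shape (batch, F, T); W and
    log_var are the two (real-valued) network output heads."""
    lam = log_var.exp()                  # lambda_ft > 0 via the log-exp trick
    err = (S - W * X).abs() ** 2         # |S_ft - W_ft X_ft|^2
    return (log_var + err / lam).mean()  # average over time-frequency bins

def mse_loss(W, X, S):
    """Eq. (8): the constant-variance special case of Eq. (7)."""
    return ((S - W * X).abs() ** 2).mean()
```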
However, neural networks trained to perform point estimation do not necessarily output reliable estimates for clean speech when processing out-of-distribution samples that are underrepresented by the training data [28]. In this work, we discard the assumption of constant uncertainty; instead, we propose to treat uncertainty estimation as an additional task by training a neural network with the negative log speech posterior \(\mathcal{L}_{p(S|X)}\). Consequently, this method not only allows us to obtain a noise-removing mask, but also empowers the model to capture the uncertainty of aleatoric nature associated with its predictions. Modeling aleatoric uncertainty by minimizing the negative logarithm of the posterior predictive distribution results in an improvement over baselines that fail to account for uncertainty in computer vision tasks [33]. However, directly using \(\mathcal{L}_{p(S|X)}\) as the loss function is prone to overfitting [59] and may result in reduced estimation performance of the Wiener filter. A recent publication [61] also reveals that directly minimizing the negative logarithm of the conditional probability hinders the training of the mean estimation, which leads to premature convergence. To tackle this problem, we propose an additional regularization of the loss function by incorporating the estimated uncertainty into clean speech estimation, as described next.

### _Joint Enhancement and Uncertainty Estimation_

Estimating the uncertainty \(\lambda_{ft}\) associated with the Wiener filter is challenging since ground truth for the uncertainty is not readily available. Instead, uncertainty estimation is an unsupervised task with an unspecified search space, which can potentially lead to unstable training [62, 63]. In this work, we propose to incorporate a subsequent speech enhancement task that explicitly uses both the Wiener filter and its uncertainty \(\lambda_{ft}\) during the training procedure. The speech enhancement task provides additional coupling between the outputs (Wiener filter and uncertainty). In this manner, the neural network is guided to estimate the uncertainty values that are relevant to the speech enhancement task, as well as to improve the estimation of the Wiener filter. Considering complex coefficients with a symmetric posterior (4), the MAP and MMSE estimators both lead directly to the Wiener filter \(W_{ft}^{\text{WF}}\) and do not require an uncertainty estimate. However, this situation changes if we consider spectral magnitude estimation. The magnitude posterior \(p(|S_{ft}||X_{ft})\), derived by integrating the phase out of (4), follows a Rician distribution [4] \[\begin{split}& p(|S_{ft}||X_{ft}){=}\\ &\frac{2|S_{ft}|}{\lambda_{ft}}\text{exp}\Bigg{(}\!\!-\!\frac{|S_{ft}|^{2}\!+\!(W_{ft}^{\text{WF}})^{2}|X_{ft}|^{2}}{\lambda_{ft}}\Bigg{)}I_{0}\Bigg{(}\frac{2|X_{ft}||S_{ft}|W_{ft}^{\text{WF}}}{\lambda_{ft}}\Bigg{)},\end{split} \tag{9}\] where \(I_{0}(\cdot)\) is the modified zeroth-order Bessel function of the first kind. In order to compute the MAP estimate for the spectral magnitude, the mode of the Rician distribution has to be estimated, which is difficult to do analytically.
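Evaluating the density (9) itself is straightforward; a numerical sketch (ours, for illustration only) using SciPy's exponentially scaled Bessel function to avoid overflow:

```python
import numpy as np
from scipy.special import i0e  # i0e(x) = I_0(x) * exp(-x), overflow-safe

def rician_magnitude_posterior(mag_s, mag_x, w_wf, lam):
    """Rician posterior p(|S_ft| | X_ft) of Eq. (9).  With a = w_wf*mag_x,
    the Bessel rescaling is folded into the exponent:
    exp(-(s^2 + a^2)/lam) * I_0(2*s*a/lam) = exp(-(s-a)^2/lam) * i0e(2*s*a/lam)."""
    a = w_wf * mag_x
    return (2.0 * mag_s / lam
            * np.exp(-(mag_s - a) ** 2 / lam)
            * i0e(2.0 * mag_s * a / lam))
```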
The mode can, however, be approximated by substituting a Bessel function approximation following [64] into (9) and maximizing with respect to the spectral magnitude, yielding a simple closed-form expression [2, 4]: \[\begin{split}|\widehat{S}_{ft}|&\approx W_{ft}^{\text{AMAP}}|X_{ft}|\\ &=\Bigg{(}\frac{1}{2}W_{ft}^{\text{WF}}\!+\!\sqrt{\left(\frac{1}{2}W_{ft}^{\text{WF}}\right)^{2}\!+\!\frac{\lambda_{ft}}{4|X_{ft}|^{2}}}\Bigg{)}|X_{ft}|,\end{split} \tag{10}\] where \(|\widehat{S}_{ft}|\) is an estimate of the clean spectral magnitude \(|S_{ft}|\) using the AMAP estimator of spectral magnitudes \(W_{ft}^{\text{AMAP}}\). It can be noticed that the estimator \(W_{ft}^{\text{AMAP}}\) utilizes both the Wiener filter \(W_{ft}^{\text{WF}}\) and the associated uncertainty \(\lambda_{ft}\). Fig. 2 illustrates the input-output estimation characteristics of the AMAP estimator and the Wiener filter [2]. We can see that \(W_{ft}^{\text{AMAP}}\) is _nonlinear_ with respect to the noisy input and tends to cause less target attenuation than the Wiener filter, especially for low inputs. This indicates that incorporating the associated uncertainty \(\lambda_{ft}\) may increase the robustness of the estimator by potentially preserving more speech at the slight cost of noise removal. After combining the estimated magnitude \(|\widehat{S}_{ft}|\) with the noisy phase, we can apply the inverse STFT to obtain an estimate of the time-domain speech signal, denoted as \(\widehat{s}\). Afterwards, the estimated time-domain signal is used to compute the negative scale-invariant signal-to-distortion ratio (SI-SDR) metric [65]: \[\mathcal{L}_{\text{SI-SDR}}\!=\!-10\!\log_{10}\!\left(\frac{||\alpha s||^{2}}{||\alpha s\!-\!\widehat{s}||^{2}}\right)\!,\;\;\;\alpha\!=\!\frac{\widehat{s}^{T}s}{||s||^{2}}, \tag{11}\] which is leveraged as an additional term in the loss function that forces the speech estimate (computed with \(W_{ft}^{\text{AMAP}}\)) to be similar to the time-domain clean speech target \(s\). While a spectrum loss like (8) is a straightforward solution to regularize the uncertainty estimation, the time-domain loss is expected to be more effective since it is directly related to the raw waveform, implicitly taking phase information into account and thus promoting speech reconstruction for better perceptual performance [66]. Eventually, we propose to combine the SI-SDR loss \(\mathcal{L}_{\text{SI-SDR}}\) with the negative log-posterior \(\mathcal{L}_{p(S|X)}\) given in (7), and train the neural network using a hybrid loss function \[\mathcal{L}\!=\!\beta\mathcal{L}_{p(S|X)}\!+\!(1\!-\!\beta)\mathcal{L}_{\text{SI-SDR}}, \tag{12}\] with the weighting factor \(\beta\!\in\![0,\!1]\). By explicitly using the estimated uncertainty for the speech enhancement task, the hybrid loss guides both mean and variance estimation to improve speech enhancement performance. Fig. 1 depicts a block diagram of this approach.

Fig. 1: Block diagram of the proposed neural network-based aleatoric uncertainty estimation.

## IV Bayesian uncertainty estimation

While neural networks performing point estimation have demonstrated effectiveness in speech enhancement, it is not guaranteed that neural networks can generalize well to unfamiliar acoustic situations. Therefore, to quantify the overall predictive confidence regarding the estimated clean speech, it is necessary to also assess the uncertainty of the neural network parameters (i.e., _epistemic uncertainty_).
Note that a single neural network optimized using the proposed hybrid loss (12) allows capturing aleatoric uncertainty but is unaware of epistemic uncertainty. To solve this, we can utilize Bayesian deep learning approaches, assuming that the weights of a neural network follow some probability distribution rather than taking deterministic values. Furthermore, when combined with the loss (12), an ensemble of networks can provide both aleatoric and epistemic uncertainty estimates.

### _Epistemic Uncertainty Estimation_

Bayesian deep learning provides a set of principled methods to capture epistemic uncertainty [42, 43, 44, 36, 46, 48]. Early work on MCMC methods [43, 44] constructs a Markov chain with the posterior network parameter distribution as its stationary distribution and generates multiple network parameter realizations by sampling from this distribution. However, MCMC methods are computationally inefficient and do not scale well to neural networks with a large number of parameters [37, 38]. Recent work based on variational inference allows approximating the true posterior network parameter distribution with a tractable distribution [45, 46], while at the same time ensemble-based methods have been proposed as simple and scalable frequentist alternatives to model uncertainty [47, 36, 48]. Among the existing Bayesian deep learning methods, MC dropout and Deep ensembles have shown their scalability in large neural network-based problems, such as semantic segmentation [33] and depth estimation [37]. Here, we investigate their effectiveness for uncertainty estimation in speech enhancement. We define a neural network as a function parameterized by \(\theta\) and a training dataset that contains noisy-clean speech pairs \(\mathcal{D}\!=\!\{(S_{11},X_{11}),...,(S_{FT},X_{FT})\}\). Hereafter we omit the indices \(ft\), since all time-frequency bins are treated independently in (4). Since the posterior network parameter distribution \(p(\theta|\mathcal{D})\) is computationally intractable in a high-dimensional space, variational inference approximates the true posterior network parameter distribution by a pre-specified variational distribution \(q(\theta)\), and the speech posterior predictive distribution at inference time is obtained by marginalizing out \(q(\theta)\) as: \[\begin{split} p(S|X,\!\mathcal{D})&\!=\!\int\!p(S|X,\!\theta)p(\theta|\mathcal{D})d\theta\\ &\!\approx\!\frac{1}{M}\!\sum_{m=1}^{M}\!\!p(S|X,\!\theta_{m}),\;\;\;\theta_{m}\!\sim\!q(\theta),\end{split} \tag{13}\] where \(\theta_{m}\) represents the \(m\)-th sample from \(q(\theta)\)[67]. MC dropout approximates the posterior network parameter distribution using the Bernoulli distribution and samples neural network weights by activating dropout at inference time. Gal et al. provide further details on the derivations in [42]. This allows obtaining \(M\) target speech estimates from multiple stochastic forward passes for each input. In contrast, Deep ensembles repeatedly train the same model \(M\) times with random initialization and random data shuffling [36], generating \(M\) neural networks with deterministic network parameter estimates \(\{\theta_{m}\}_{m=1}^{M}\). Since the \(\theta_{m}\) can be viewed as independent samples from a certain approximate distribution \(q(\theta)\), Deep ensembles can be considered equivalent to approximate Bayesian inference [37]. Therefore, the predictive distribution is obtained similarly to (13).
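A minimal sketch of this sampling procedure (illustrative only; a careful implementation enables just the dropout modules so that normalization layers remain in inference mode):

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, X, M=16):
    """Approximate Eq. (13) with MC dropout: M stochastic forward passes
    with dropout active at inference, returning the empirical mean and
    variance of the prediction set."""
    model.eval()                                    # normalization layers fixed
    for module in model.modules():
        if isinstance(module, (torch.nn.Dropout, torch.nn.Dropout2d)):
            module.train()                          # keep dropout sampling active
    outputs = torch.stack([model(X) for _ in range(M)])
    return outputs.mean(dim=0), outputs.var(dim=0)
```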
Furthermore, neural networks usually contain a large number of parameters, which makes the posterior distribution over the parameters multi-modal. Different initialization starting points in Deep ensembles allow the neural network to converge to different local optima, thus potentially capturing multiple modes of \(p(\theta|\mathcal{D})\)[37, 48]. Epistemic uncertainty can be approximated by building an ensemble of neural networks using either MC dropout or Deep ensembles, where each network is trained to estimate the Wiener filter only with the loss function \(\mathcal{L}_{\text{MSE}}\) (8). With the results of \(M\) forward passes, we can approximate the mean and variance of the distribution \(p(S|X)\) by the empirical mean and variance of the prediction set [38, 42]: \[\widetilde{S}\!=\!\frac{1}{M}\!\sum_{m=1}^{M}\!\widetilde{S}_{\theta_{m}}\!,\;\;\;\widetilde{\Sigma}\!=\!\frac{1}{M}\!\sum_{m=1}^{M}\!|\widetilde{S}_{\theta_{m}}\!-\!\widetilde{S}|^{2}, \tag{14}\] where \(\widetilde{S}_{\theta_{m}}\) denotes clean speech estimated using the neural network with parameters \(\theta_{m}\). \(\widetilde{S}\) represents the average clean speech estimate and \(\widetilde{\Sigma}\) quantifies the epistemic uncertainty.

### _Overall Predictive Uncertainty_

In the case of optimizing the network using (12), besides the Wiener estimate \(\widetilde{S}_{\theta_{m}}\), each neural network with weights \(\theta_{m}\) can produce the associated variance \(\widetilde{\lambda}_{\theta_{m}}\). The overall predictive uncertainty, which reflects both aleatoric and epistemic uncertainties, can be computed using the law of total variance [33, 38]: \[\widetilde{S}\!=\!\frac{1}{M}\!\sum_{m=1}^{M}\!\widetilde{S}_{\theta_{m}}\!,\;\;\;\;\widehat{\Sigma}\!=\!\frac{1}{M}\!\sum_{m=1}^{M}\!\Big{(}|\widetilde{S}_{\theta_{m}}\!-\!\widetilde{S}|^{2}\!+\!\widetilde{\lambda}_{\theta_{m}}\Big{)}, \tag{15}\] where \(\widetilde{S}\) denotes the average Wiener estimate, and \(\widehat{\Sigma}\) quantifies the overall predictive uncertainty. For each neural network with weights \(\theta_{m}\), we can further generate the AMAP clean speech estimate \(\widehat{S}_{\theta_{m}}\) by explicitly incorporating the associated uncertainty \(\widetilde{\lambda}_{\theta_{m}}\) as in (10). Therefore, given an ensemble of networks, besides the average Wiener estimate \(\widetilde{S}\), the average AMAP estimate can be obtained by: \[\widehat{S}=\frac{1}{M}\sum_{m=1}^{M}\widehat{S}_{\theta_{m}}. \tag{16}\]

## V Experimental setting

### _Datasets_

For training and validation, we use a subset of the Deep Noise Suppression (DNS) Challenge's training set [68], which contains synthetic audio samples of 100 hours with signal-to-noise ratios (SNRs) uniformly distributed between -5 dB and 20 dB. The dataset is randomly split into 80 and 20 hours for training and validation respectively. The model is evaluated on two different unseen datasets. The first is the reverb-free synthetic test set released by the DNS Challenge. This evaluation dataset is disjoint from the training and validation datasets and is created by adding noise signals sampled from 12 categories [68] to speech signals from [69] at SNRs distributed between 0 dB and 25 dB [68]. The second unseen evaluation dataset is created using clean speech from the evaluation subset of WSJ0 (si_et_05) [70] and four types of noise from CHiME3 (cafe, street, pedestrian, and bus) [71]. The SNRs are randomly selected from {-10 dB, -5 dB, 0 dB, 5 dB, 10 dB}.
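As an aside, the synthetic mixing behind such datasets amounts to scaling the noise to a target SNR before adding it to the clean signal; a sketch (our own, not the DNS Challenge tooling):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Add noise to clean speech at a target SNR in dB."""
    noise = noise[: len(speech)]
    p_speech = np.mean(speech**2)
    p_noise = np.mean(noise**2) + 1e-12              # guard against silent noise
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise
```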
### _Architecture and Hyperparameters_

To ensure a fair comparison, all experiments are performed based on the same U-Net neural network architecture [72, 73]. The U-Net structure with skip connections between the encoder and the decoder is comprised of several blocks, each of which consists of: 2D convolution layer + instance normalization [74] + Leaky ReLU with slope 0.2. The encoder contains 6 blocks that increase the feature channels from 1 to 512 progressively (\(1-16-32-64-128-256-512\)), while the decoder reduces them back to 16 (\(512-256-128-64-32-16-16\)), followed by a \(1\times 1\) convolution layer that outputs a mask of the same shape as the input. For all blocks, the kernel size is set to \((5,\!5)\) with stride \((1,\!2)\) and padding \((2,\!2)\), processing a 2-D input with a dimension of \((T,\!F)\). For the model estimating aleatoric uncertainty, the output layer is split into two heads that predict both the Wiener filter and the associated uncertainty\({}^{1}\). We applied the sigmoid activation function to the estimated Wiener filter, while using the _log-exp_ technique to constrain the uncertainty output to be greater than 0, i.e., the network outputs the logarithm of the variance, which is then recovered by the exponential term in the loss function. The batch size is 64; the learning rate is 0.001; the weight decay parameter is set to 0.0005. All neural networks are trained with the Adam optimizer [75]. The training process is stopped if the validation loss fails to decrease for 10 consecutive epochs, and the learning rate is halved when the validation loss does not decrease for 3 epochs.

Footnote 1: Code for the model is available at: https://github.com/sp-uhb/uncertainty-SE

The noisy-clean speech pairs have a sampling rate of 16 kHz, and the STFT is computed using a 32 ms Hann window with 50% overlap.

### _Methods_

The algorithms considered in this work include:
1. _Baseline WF_: The U-Net architecture was trained on noisy-clean speech pairs using loss function (8). This serves as a baseline, assuming a constant variance for all time-frequency bins and estimating the Wiener filter for each input only.
2. _Baseline SI-SDR_: Following the same constant variance assumption as _Baseline WF_, the U-Net network was trained to output a multiplicative filter and optimized using the time-domain loss function (11). This serves as another baseline that fails to account for uncertainty.
3. _Aleatoric-WF & Aleatoric-AMAP_: The hybrid loss function (12) allows us to generate two possible clean estimates for each input, i.e., by using the estimated Wiener filter (6) or by applying the AMAP estimator (10) that incorporates both the Wiener filter and its associated uncertainty. They are denoted as _Aleatoric-WF_ and _Aleatoric-AMAP_ respectively. We observe experimentally that the performance of Aleatoric-AMAP only fluctuates slightly with different \(\beta\) values, while the performance of Aleatoric-WF decreases when the value of \(\beta\) is large. The weighting factor \(\beta\) was empirically chosen to be 0.001 to achieve a good trade-off between the performance of Aleatoric-WF and Aleatoric-AMAP.
4. _MC dropout_: Inserting dropout after each convolution layer regularizes too strongly and impacts the model performance [56], which was confirmed in our preliminary experiments.
We thus studied several variants of the U-Net by inserting the dropout layer at different positions of the architecture, and selected the variant with three dropout layers (drop probability of 0.5 [50, 37, 56]) inserted after the three deepest blocks of the encoder. The same cost function as _Baseline WF_ is used. This method captures epistemic uncertainty by activating the dropout layers at inference.
5. _Deep ensembles_: The same setup as _Baseline WF_ was trained \(M\) times with random initialization. This allows the model to capture epistemic uncertainty.
6. _DE-Aleatoric-WF & DE-Aleatoric-AMAP_: The same setup as _Aleatoric-WF/AMAP_ was trained \(M\) times with random initialization. This allows capturing aleatoric and epistemic uncertainties simultaneously. We average over the estimates according to (15) and (16) to obtain two clean speech estimates: _DE-Aleatoric-WF_ and _DE-Aleatoric-AMAP_ respectively.

## VI Analysis of uncertainty estimation

In this section, we introduce the evaluation metrics for uncertainty and then analyze the captured aleatoric and epistemic uncertainties. Finally, we show that combining the two types of uncertainty yields more reliable predictive uncertainty.

### _Uncertainty Evaluation Metrics_

To evaluate the captured uncertainty, the sparsification plot and the sparsification error are used as evaluation metrics [37, 38, 76]. The sparsification plot illustrates the correlation between the uncertainty measure and the true error. The error of a time-frequency bin is defined as the squared absolute difference between the estimated spectral coefficient and the ground truth. For this plot, the errors in the time-frequency domain are first sorted according to their corresponding uncertainty measures. The residual error should gradually decrease when the time-frequency bins with large uncertainties are removed. This leads to a plot of the root mean squared error (RMSE) versus the fraction of removed time-frequency bins. Normalization is applied to ensure that the plot is initialized at 1. The best ordering of uncertainty measures is determined by ranking the true errors [38, 76]. This provides a lower bound of each sparsification plot, denoted as the _oracle_ curve, i.e., when the uncertainty estimates and errors are perfectly correlated, the sparsification plot and the oracle curve coincide. The sparsification error is computed as the difference between the sparsification plot and the corresponding oracle curve, and the area under the sparsification error (AUSE) curve provides a single value that enables comparison of different uncertainty modeling techniques. A lower AUSE value (i.e., the closer the sparsification plot is to its oracle curve) indicates a more accurate estimate of uncertainty.

### _Analysis of Aleatoric Uncertainty Estimation_

In this part, we analyze the captured data-dependent aleatoric uncertainty associated with the Wiener estimate. For this, an audio example from the DNS challenge test set is selected to illustrate the effectiveness of the proposed optimization metric in modeling uncertainty. Aleatoric-WF in Fig. 3 (c) shows the spectrogram of the clean speech obtained by applying the estimated Wiener filter. By computing the squared absolute difference between the clean reference and the estimated spectral coefficients, we can obtain the estimation error as depicted in Fig. 3 (d).
It can be observed that large errors occur when the speech is heavily disturbed by noise, as in the region marked by the green box, while for inputs with less distortion, such as the first three seconds, the model produces smaller errors. Meanwhile, the proposed loss function enables the estimation of the uncertainty associated with the Wiener filter, as shown in Fig. 3 (e), denoted as aleatoric uncertainty. It shows that aleatoric uncertainty prevails in speech-presence regions. By relating Fig. 3 (d) to Fig. 3 (e), the model outputs relatively large uncertainty (e.g., the green box-marked part) when large errors are produced. This suggests that the neural network is able to produce reasonable uncertainty estimates when dealing with complex unseen inputs. Furthermore, we can incorporate the estimated uncertainty into clean speech inference, as in (10), which leads to the clean speech estimate shown in Fig. 3 (f), denoted as Aleatoric-AMAP. It is observed that more speech is preserved than with Aleatoric-WF in the highly uncertain green box-marked region at some cost of noise reduction, i.e., Aleatoric-AMAP leads to less speech distortion with a slight tendency of retaining more noise. The reason for this is that with reliable uncertainty estimates, Aleatoric-AMAP can increase the estimator's value in (10) under high uncertainty (as the AMAP estimator's value is positively correlated with the uncertainty estimate when other terms are fixed), thus causing less target attenuation. Besides the qualitative analysis, we can associate the captured uncertainty with the corresponding prediction errors on the time-frequency bin scale and use sparsification plots to analyze the reliability of the uncertainty estimates. The sparsification plot shown in Fig. 4 is computed based on all audio samples in the DNS reverb-free test dataset. We observe a rapid decrease at the beginning in Fig. 4, implying that large errors come with large uncertainty estimates. By removing 20 percent of the time-frequency bins with high uncertainty (i.e., 0.2 on the horizontal axis), the RMSE value drops by around two-thirds. Thus, the monotonically decreasing sparsification plot in Fig. 4 again suggests that the predicted aleatoric uncertainty measurement is closely related to the estimation error.

Fig. 3: Aleatoric uncertainty (shown in (e)) captured by the proposed loss function (12) for an excerpt from the DNS test dataset. The uncertainty is visualized as a heatmap. The black color indicates low uncertainty, whereas a brighter color indicates higher uncertainty.

Fig. 4: Sparsification plot of aleatoric uncertainty \(\widetilde{\lambda}\) evaluated on the DNS test dataset. The dashed line denotes the lower bound of the sparsification plot of aleatoric uncertainty. A smaller distance of the sparsification plot to the oracle curve indicates a more accurate uncertainty estimation.

### _Analysis of Epistemic Uncertainty Estimation_

Next, we ignore aleatoric uncertainty and separately analyze the epistemic uncertainty in the model parameters. For this, the neural networks are trained to perform only point estimation, i.e., trained with the loss function (8). An ensemble of models is collected by applying Deep ensembles or MC dropout to approximate the predictive mean and variance. In Fig. 5, we present the same audio example as in Fig. 3 to illustrate the uncertainty measures based on MC dropout and Deep ensembles. MC dropout and Deep ensembles provide the clean speech estimates as shown in the first row of Fig. 5.
The estimation error for each method is obtained similarly by calculating the squared absolute difference between the estimated and clean spectral coefficients, shown in the second row. As can be observed, both methods produce large errors as well as associated large uncertainties when the signal is heavily corrupted by noise, i.e., the green box-marked region. Where the noise corruption is less severe, i.e., the region marked with a red box, the model generates low prediction errors and also a relatively low level of uncertainty. From the visual analysis, the uncertainty generated by Deep ensembles is more correlated with the error, while MC dropout appears to underestimate the uncertainty of incorrect predictions. To objectively assess the reliability of the uncertainty measures, we also utilize the sparsification plots and the sparsification errors, as illustrated in Fig. 6 and Fig. 7 respectively. In Fig. 6, we show the sparsification plots of Deep ensembles and MC dropout for different numbers of forward passes \(M\in\{2,4,8,16,32\}\). It can be observed that both MC dropout and Deep ensembles yield decreasing sparsification plots, suggesting that they produce accurate uncertainties that correlate well with the estimation errors. It also shows that a large \(M\) leads to a sparsification plot closer to its corresponding oracle curve, i.e., improves the performance of the uncertainty estimation, and this improvement becomes saturated when \(M\) is sufficiently large, e.g., from \(M\!=\!16\) to \(M\!=\!32\).

Fig. 5: The same excerpt as in Fig. 3 illustrates the captured epistemic uncertainty obtained by applying Bayesian deep learning methods (\(M=16\)). _Estimate (MC dropout)_ and _Estimate (DE)_ represent clean speech estimated using MC dropout and Deep ensembles.

Fig. 6: Sparsification plots of epistemic uncertainty \(\widetilde{\Sigma}\) for the DNS test dataset. The dashed line denotes the lower bound of the corresponding sparsification plot, denoted as Oracle \(M\). A smaller distance of the sparsification plot to the oracle curve indicates a more accurate uncertainty estimation. Note that all oracle curves are visually _overlapping_.

Fig. 7: AUSE for the DNS test dataset. AUSE is plotted relative to a different number of forward passes \(M\). The markers denote the mean and the vertical bars indicate the standard deviation. Lower values indicate a smaller deviation from the oracle curve, and thus more reliable uncertainty estimation.

To comprehensively compare MC dropout and Deep ensembles in terms of uncertainty modeling, AUSE is plotted as a function of different numbers of forward passes \(M\). Multiple models for each \(M\) are used to provide the mean and standard deviation to account for variations resulting from random factors in training. 16 MC dropout models are trained and used to compute the mean of AUSE and its standard deviation for each possible \(M\). For Deep ensembles, 16 disjoint sets of \(M\) models are randomly selected from the 33 trained models to compute the mean and standard deviation of AUSE. The AUSE plot in Fig. 7 provides an alternative and more informative evaluation than a single sparsification plot. It indicates that Deep ensembles generally produce more accurate uncertainty than MC dropout, which may fail to produce reliable uncertainties for some erroneous predictions. This coincides with our visual observation in the green box-marked region in Fig. 5.
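The sparsification analysis used throughout this section can itself be sketched in a few lines; the following is an illustrative rendition of the procedure of Section VI-A, not the authors' evaluation code:

```python
import numpy as np

def sparsification_curve(sq_errors, uncertainty, steps=100):
    """Remove time-frequency bins in order of decreasing uncertainty and
    track the normalized RMSE of the remaining bins."""
    order = np.argsort(uncertainty.ravel())[::-1]    # most uncertain first
    err = sq_errors.ravel()[order]
    fracs = np.linspace(0.0, 0.99, steps)
    rmse = np.array([np.sqrt(err[int(f * err.size):].mean()) for f in fracs])
    return fracs, rmse / rmse[0]                     # normalized to start at 1

def ause(sq_errors, uncertainty, steps=100):
    """Area between the sparsification curve and its oracle, for which the
    bins are sorted by the true errors themselves."""
    fracs, curve = sparsification_curve(sq_errors, uncertainty, steps)
    _, oracle = sparsification_curve(sq_errors, sq_errors, steps)
    return np.trapz(curve - oracle, fracs)
```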
### _Prediction Uncertainty Combining Aleatoric and Epistemic Uncertainties_

In this part, we investigate the overall prediction uncertainty obtained by combining aleatoric uncertainty and epistemic uncertainty as in (15). To obtain the overall prediction uncertainty, we use an ensemble of models trained with the optimization metric (12) such that both aleatoric and epistemic uncertainty are captured. It has been shown in Section VI-C that Deep ensembles yield more accurate epistemic uncertainty than MC dropout and, therefore, are selected for the estimation of the overall predictive uncertainty. Although a larger number of models \(M\) could potentially improve the mean and variance estimation, we restrict \(M\) to 16 as further improvements become subtle while the computation time increases considerably. In Fig. 8, we use sparsification plots to analyze the quality of prediction uncertainty estimates combining aleatoric and epistemic uncertainties. The corresponding AUSE values are provided in Table I. The plot illustrates that the overall predictive uncertainty estimates correlate more strongly with the estimation error than either of the two uncertainties alone. This suggests that the two sources of uncertainty may complement each other and that combining both leads to more reliable uncertainty estimates. For example, Deep ensembles do not seem to capture sufficient uncertainty for less distorted input (e.g., the first three seconds) as shown in Fig. 5, while the aleatoric uncertainty shown in Fig. 3 may compensate for this shortcoming.

## VII Influence of Modeling Uncertainty on Speech Enhancement Performance

In this section, we show how modeling different sources of uncertainty affects the performance of speech enhancement. To evaluate the speech enhancement performance, we employ perceptual evaluation of speech quality (PESQ) [77] to measure speech quality, extended short-time objective intelligibility (ESTOI) [78] to measure speech intelligibility, and SI-SDR to account for both noise reduction and speech distortion. To show the impact of modeling aleatoric uncertainty on speech enhancement performance, we compare the performance of the model trained with the proposed loss function (12) with that of Baseline WF and Baseline SI-SDR. The proposed method enables speech estimation via either the Wiener filter, which implicitly takes uncertainty into account during the training process, or the approximated MAP filter, which explicitly includes uncertainty to estimate speech, denoted as Aleatoric-WF and Aleatoric-AMAP, respectively. Table II shows the average evaluation results on the DNS synthetic non-reverb test set. Aleatoric-WF shows improvements in PESQ, ESTOI, and SI-SDR compared to Baseline WF, indicating the benefit of weighting Wiener estimates with uncertainty during training. Further PESQ improvements over both Baseline WF and Baseline SI-SDR can be observed when explicitly incorporating uncertainty into clean speech estimation, that is, Aleatoric-AMAP. This demonstrates the advantage of modeling uncertainty associated with the Wiener estimate rather than directly estimating optimal points. When evaluated on another dataset with speech from WSJ and noise from CHiME3, the performance gap between Aleatoric-AMAP and the baselines in terms of PESQ is further increased, as shown in Fig. 9, indicating that the model that takes uncertainty into account has improved generalization capacities for speech enhancement.
This can be attributed to the nonlinear estimation characteristics of the uncertainty-based AMAP estimator with respect to noisy inputs and the resulting better speech preservation properties. We observe larger improvements over the baselines at high SNRs, which might be explained by the fact that, at high SNRs, speech quality (and thus PESQ) is mainly affected by speech distortions, while at low SNRs the main factor is residual noise. Overall, these evaluation results demonstrate the notable benefits of modeling aleatoric uncertainty in the algorithm.

\begin{table} \begin{tabular}{|c||c|c|c|c|} \hline & Unc. & PESQ & ESTOI & SI-SDR \\ \hline Noisy (DNS) & - & 1.58 \(\pm\) 0.07 & 0.81 \(\pm\) 0.02 & 9.07 \(\pm\) 0.89 \\ \hline Baseline WF & ✗ & 2.48 \(\pm\) 0.10 & 0.90 \(\pm\) 0.01 & 16.84 \(\pm\) 0.74 \\ Baseline SI-SDR & ✗ & 2.63 \(\pm\) 0.10 & 0.91 \(\pm\) 0.01 & 17.49 \(\pm\) 0.78 \\ \hline MC dropout & ✓ & 2.53 \(\pm\) 0.10 & 0.90 \(\pm\) 0.01 & 16.88 \(\pm\) 0.74 \\ Deep ensembles & ✓ & 2.66 \(\pm\) 0.10 & 0.91 \(\pm\) 0.01 & 17.16 \(\pm\) 0.73 \\ \hline Aleatoric-WF & ✓ & 2.62 \(\pm\) 0.11 & 0.91 \(\pm\) 0.01 & 17.54 \(\pm\) 0.78 \\ Aleatoric-AMAP & ✓ & 2.69 \(\pm\) 0.10 & 0.91 \(\pm\) 0.01 & 17.54 \(\pm\) 0.78 \\ \hline DE-Aleatoric-WF & ✓ & 2.77 \(\pm\) 0.11 & 0.92 \(\pm\) 0.01 & 17.88 \(\pm\) 0.78 \\ DE-Aleatoric-AMAP & ✓ & 2.83 \(\pm\) 0.10 & 0.92 \(\pm\) 0.01 & 17.90 \(\pm\) 0.78 \\ \hline \end{tabular} \end{table} TABLE II: Evaluation results on the DNS test dataset. All results are stated as mean \(\pm\) 95%-confidence interval. _Unc._ stands for _Uncertainty_.

Fig. 8: Sparsification plots of aleatoric \(\tilde{\lambda}\), epistemic \(\widetilde{\Sigma}\), and overall predictive uncertainty \(\widehat{\Sigma}\) (i.e., aleatoric & epistemic) on the DNS test dataset. Note that _Oracle aleatoric_ and _Oracle aleatoric & epistemic_ overlap.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline & Aleatoric & Epistemic & Aleatoric \& epistemic \\ \hline AUSE & 0.110 & 0.094 & 0.067 \\ \hline \end{tabular} \end{table} TABLE I: AUSE values of _Aleatoric_, _Epistemic_, and _Aleatoric \& epistemic_ in Fig. 8.

To show the impact of modeling epistemic uncertainty on speech enhancement performance, we compare the performance of Deep ensembles and MC dropout with Baseline WF. We again restrict \(M\) to \(16\) as in Section VI-D. MC dropout performs comparably to Baseline WF on the DNS test set, while a larger improvement can be observed when using Deep ensembles. This improvement is even more pronounced in PESQ. Similarly, the results on the second test set are shown in Fig. 9, where Deep ensembles and MC dropout improve over Baseline WF in terms of PESQ for all considered SNRs and provide higher ESTOI scores, especially at low SNRs. We observe that Deep ensembles not only provide more accurate uncertainty estimates than MC dropout but also lead to better speech enhancement performance. A possible explanation is that while MC dropout only captures local uncertainty around a single mode, Deep ensembles trained with different initialization points are capable of exploring multiple modes in the function space to account for the training data, see, e.g., [48, 49]. This may allow the neural network to generalize better to complex acoustic scenarios. To show the impact of modeling predictive uncertainty that combines both aleatoric and epistemic uncertainties on speech enhancement performance, we use the same set of models as described in Section VI-D.
We take the average of estimates as in (15) and (16) and obtain two speech estimates, called DE-Aleatoric-WF and DE-Aleatoric-AMAP, respectively. They both provide better ESTOI and SI-SDR scores than the baselines, the epistemic uncertainty-only model, and the aleatoric uncertainty-only model, especially at low SNRs. Moreover, DE-Aleatoric-AMAP yields higher scores in PESQ, likely due to the uncertainty-dependent regularization and exploration of multiple modes in the function space. This indicates that combining the model that accounts for aleatoric uncertainty with the ensemble-based method can take advantage of the benefits of both approaches and further improve the performance. Overall, the evaluation results across different datasets show that quantifying uncertainty in neural network-based speech enhancement leads to a considerable improvement in enhancement performance over the baseline models.

Fig. 9: Performance improvement on the dataset with speech from WSJ0 and noise from CHiME3. PESQi denotes PESQ improvement with respect to noisy mixtures. ESTOIi and SI-SDRi are defined similarly. Markers and vertical bars indicate the mean and 95% confidence interval.

## VIII Conclusion

In this paper, besides estimating clean speech, we quantified predictive uncertainty in neural network-based speech enhancement. For this, aleatoric uncertainty, which describes inherent uncertainty in data, and epistemic uncertainty, which accounts for uncertainty of the model, were captured and analyzed in a joint framework. We investigated the reliability of uncertainty estimates from different sources, and how it affects the enhancement performance. Our proposed hybrid loss function based on MAP inference of complex spectral coefficients and an AMAP estimator of spectral magnitudes has demonstrated its effectiveness in modeling aleatoric uncertainty. In addition, the proposed scheme provided a principled way to create a noise-removing mask that explicitly incorporates uncertainty to further improve speech enhancement performance. The evaluation results on different datasets have shown increased generalization capacities when modeling aleatoric uncertainty. To empirically approximate the predictive distribution and capture epistemic uncertainty, we employed two Bayesian deep learning methods, MC dropout and Deep ensembles. We showed that Deep ensembles not only provide more accurate estimates of epistemic uncertainty than MC dropout, but also lead to more prominent improvements in speech enhancement. A reason may be that Deep ensembles can potentially converge to different local minima in the loss landscape due to random initialization. Furthermore, we combined the proposed hybrid function with Deep ensembles to quantify overall prediction uncertainty, which reflects both data uncertainty and model uncertainty. An analysis using sparsification plots showed that combining different types of uncertainties further improves the reliability of predictive uncertainty estimation, indicating the complementary nature of the two sources of uncertainty. Finally, our experiments indicated that the performance of clean speech estimation can be considerably improved over the baselines while additionally obtaining predictive uncertainty estimates. In summary, this work investigated capturing predictive uncertainty in neural network-based speech enhancement and showed the noticeable benefits of modeling uncertainty for clean speech estimation.
Uncertainty can indicate the algorithm's confidence in the output in the absence of ground truth, which is essential for assessing the reliability of speech estimates. With this work, we hope to stimulate discussion on modeling uncertainty in the speech enhancement task, while facilitating future research on how to take advantage of uncertainty.
2307.03068
A Hybrid End-to-End Spatio-Temporal Attention Neural Network with Graph-Smooth Signals for EEG Emotion Recognition
Recently, physiological data such as electroencephalography (EEG) signals have attracted significant attention in affective computing. In this context, the main goal is to design an automated model that can assess emotional states. Lately, deep neural networks have shown promising performance in emotion recognition tasks. However, designing a deep architecture that can extract practical information from raw data is still a challenge. Here, we introduce a deep neural network that acquires interpretable physiological representations by a hybrid structure of spatio-temporal encoding and recurrent attention network blocks. Furthermore, a preprocessing step is applied to the raw data using graph signal processing tools to perform graph smoothing in the spatial domain. We demonstrate that our proposed architecture exceeds state-of-the-art results for emotion classification on the publicly available DEAP dataset. To explore the generality of the learned model, we also evaluate the performance of our architecture towards transfer learning (TL) by transferring the model parameters from a specific source to other target domains. Using DEAP as the source dataset, we demonstrate the effectiveness of our model in performing cross-modality TL and improving emotion classification accuracy on DREAMER and the Emotional English Word (EEWD) datasets, which involve EEG-based emotion classification tasks with different stimuli.
Shadi Sartipi, Mastaneh Torkamani-Azar, Mujdat Cetin
2023-07-06T15:35:14Z
http://arxiv.org/abs/2307.03068v1
A Hybrid End-to-End Spatio-Temporal Attention Neural Network with Graph-Smooth Signals for EEG Emotion Recognition ###### Abstract Recently, physiological data such as electroencephalography (EEG) signals have attracted significant attention in affective computing. In this context, the main goal is to design an automated model that can assess emotional states. Lately, deep neural networks have shown promising performance in emotion recognition tasks. However, designing a deep architecture that can extract practical information from raw data is still a challenge. Here, we introduce a deep neural network that acquires interpretable physiological representations by a hybrid structure of spatio-temporal encoding and recurrent attention network blocks. Furthermore, a preprocessing step is applied to the raw data using graph signal processing tools to perform graph smoothing in the spatial domain. We demonstrate that our proposed architecture exceeds state-of-the-art results for emotion classification on the publicly available DEAP dataset. To explore the generality of the learned model, we also evaluate the performance of our architecture towards transfer learning (TL) by transferring the model parameters from a specific source to other target domains. Using DEAP as the source dataset, we demonstrate the effectiveness of our model in performing cross-modality TL and improving emotion classification accuracy on DREAMER and the Emotional English Word (EEWD) datasets, which involve EEG-based emotion classification tasks with different stimuli. Emotion, Electroencephalography, Graph Filtering, Recurrent Attention Network, Spatio-Temporal Encoding, Transfer Learning.

## I Introduction

Affective computing is a popular field of study wherein researchers try to develop automatic recognition systems or devices that can interpret or respond to human emotional states. Brain-Computer Interfaces (BCI) link brain activity with external devices [1]. Recently, emotion recognition using physiological signals has attracted considerable attention [2]. Physiological signals acquired with wearable devices include the electroencephalogram (EEG), electrocardiogram (ECG), electromyogram (EMG), blood pressure, galvanic skin response (GSR), eye-tracking metrics such as pupil dilation and gaze entropy, body temperature, and movement kinematics, to name a few. The neural activity of cortical regions can be recorded by multichannel EEG in a way that preserves the spectral and rhythmic characteristics of brain signals [3]. Compared with other non-invasive recording methods, EEG offers better temporal resolution and can capture brain activity at the millisecond scale. This high temporal resolution and ease of use have made EEG one of the most practical modalities for tasks related to cognitive and affective reactions [3]. However, EEG suffers from a low signal-to-noise ratio (SNR) and poor spatial resolution. Accordingly, compared to MRI and fNIRS, it is challenging to use EEG signals for downstream tasks [4, 5]. EEG signals are primarily analyzed in particular frequency bands, including theta (\(\theta:4\)-\(8\) Hz), alpha (\(\alpha:8\)-\(12\) Hz), beta (\(\beta:12\)-\(29\) Hz), and gamma (\(\gamma:\ >30\) Hz). Most early work on EEG-based emotion classification has generally relied on two main steps: extracting informative features and defining a supervised machine learning approach.
Wang _et al._ evaluated the performance of three different feature types, namely power spectral density (PSD), wavelet entropy, and nonlinear dynamical features, with a kernel support vector machine (SVM) [6]. Zheng _et al._ [7] investigated the critical frequency bands and channels with differential entropy (DE), DE asymmetry, and PSD features. They explored the performance of the different features with \(K\)-nearest neighbors (\(K\)-NN), SVM, and deep belief networks (DBN). In [8], the authors offered an approach to calculate spectral and temporal entropies by decomposing EEG data via a Fourier-Bessel series expansion-based empirical wavelet transform. Then, \(K\)-NN and Shannon entropies were computed after a multi-scaling operation in the spectral and temporal domains. [9] proposed a new rhythm sequencing approach to find the best rhythmic features from the sequence of multi-channel EEG data. Finally, [10] applied transfer learning to address the issue of distribution shift between the training and test data. Nevertheless, exhaustively covering manually extracted features in both the time and frequency domains is complicated. Besides, the susceptibility of EEG signals to artifacts severely degrades the performance of classical machine learning approaches [11]. To address this issue, one can exploit end-to-end models that start with raw signals rather than extracted features.

End-to-end deep learning (DL) approaches have been shown to surpass classic approaches in various fields, including speech recognition [12], computer vision [13], and biomedical signal processing [14]. In contrast to shallow classifiers that require feature engineering, deep neural networks automatically extract practical features from the given signals and learn the low- and high-level representations of the input data [15]. Several deep learning architectures and methodologies have been proposed for EEG-based BCIs; see, e.g., [16, 17, 18, 19]. Schirrmeister _et al._ [20] designed a deep convolutional neural network (CNN) based architecture with temporal and spatial convolutional (conv) filters followed by conv-pooling blocks to reduce the dimension. Zhao _et al._ investigated the fusion of three different modalities, namely EEG, raw eye-movement images (EIG), and eye movement features (EYE) [21]. Feeding the fusion of different modalities to a dense co-attention symmetric network resulted in higher classification performance than any single modality. In [17], the authors applied an attention technique to assign different importance weights to EEG channels. Then, they extracted spatio-temporal features by applying a CNN and a recurrent neural network (RNN). A multi-column convolutional neural network (MCNN) was introduced by the authors of [22] for emotional state classification. They tested their work in a subject-independent manner by assigning five participants as the test data without performing cross-validation. A novel architecture coined frame-level distilling neural network (FLDNet), which learns distilled properties from the correlation of various frames, was introduced in [23]. They presented a triple-net structure to distill the learned features of each net consecutively. The deep forest was proposed in [24], where EEG data were converted to two-dimensional (2D) frame sequences by considering the spatial positions of the EEG channels and then fed to the model. The proposed model was insensitive to hyper-parameter settings. EEG-based emotion recognition via DL is still in its early years.
Therefore, better deep structures remain to be explored. Furthermore, although DL models have been successful in EEG analysis, few studies explore not only the performance of the learned representations but also their generalization ability to other datasets with similar tasks [25]. The main goal of this study is to design a new deep architecture to enhance the performance of current algorithms for EEG-based emotion recognition. It has been established in different areas that CNNs are effective in capturing spatial representations, while RNNs capture temporal dependencies well [26]. Recording EEG data from multiple electrodes over the scalp during a period of time forms both spatial and temporal structures. In order to analyze these structured time series successfully, the information extracted from spatial structures and temporal dynamics should both be accounted for [27]. We propose the hybrid end-to-end spatio-temporal attention neural network (STANN) with smooth signals over graphs to consider these two aspects of the data within a unified architecture. As the central contribution of this work, STANN consists of two parallel blocks: the spatio-temporal encoding block and the recurrent attention network block. Considering the complex structure of brain signals and their time-varying character, we propose the idea of applying the graph Fourier transform (GFT) and low-pass graph filtering [28] in a pre-processing step. The GFT has been considered a solution for overcoming the low SNR of EEG signals [29]. Accordingly, our proposed smooth-signal spatio-temporal attention neural network (SS-STANN) simultaneously learns both spatial information and discriminative time dependencies.

To evaluate the performance, we apply the introduced method to the publicly available EEG dataset named DEAP, which contains discrete ratings for valence, arousal, and dominance in response to audio-visual stimuli [30]. In our comprehensive experimental analysis, we demonstrate the superiority of STANN when using either raw EEG signals or smoothed graph signals as input. Deep learning models commonly require a large number of parameters to be trained compared to traditional machine learning methods. Thus, DL models need a significant amount of data [31]. One of the challenges related to EEG tasks and DL is insufficient labeled data for similar tasks. A number of transfer-based approaches have been proposed to address this issue that leverage a sufficiently large pre-existing dataset known as the source dataset. Yet, there can be inconsistencies across target and source domains, which necessitates fine-tuning [32] the network for the target data. We demonstrate the applicability of the information learned through our proposed hybrid architecture in similar emotion classification tasks by applying transfer learning and fine-tuning in order to investigate the generality of our proposed model. We consider \(5\) layouts for tuning the pre-existing network with limited EEG data. As the target data, we investigate the EEG signals of the publicly available DREAMER dataset [33] and an Emotional English Word dataset (EEWD) recorded at Sabanci University with different stimulus types, which can shed light on the capability of the proposed model in cross-modal emotion learning [34]. The major contributions of our work are summarized as follows:

* A novel deep architecture that considers spatial and temporal information of time-series data is proposed for EEG emotion classification.
The proposed hybrid network encodes the spatio-temporal and attentive temporal information in parallel.
* This paper considers the relation among neighboring EEG electrodes to use their spatial-spectral characteristics. Low-pass graph filtering is applied to enforce graph smoothness in the spatial domain.
* This paper shows the possibility of transferring the learned model parameters for cross-subject and cross-dataset (similar EEG tasks with different stimuli) scenarios. The results are promising in both scenarios. Improved results indicate the cross-modality capability of the proposed model, since the learned representations from EEG signals, elicited in response to video clips, can be used to improve classification accuracy even for datasets of emotional written words.

A preliminary version of this work was presented in [27]. While [27] contained the initial idea of the STANN framework, this paper extends that preliminary work in several major ways: (1) the approach we present here involves the use of graph signal processing (GSP) tools to enforce graph smoothness in the spatial domain, (2) we propose and demonstrate the use of transfer learning within our proposed framework, (3) by visualizing activations of certain layers in our network, we examine how our proposed approach encodes spatial information in the brain, (4) we perform a more extensive comparison of our proposed method to the state-of-the-art, and (5) we extend our experimental analysis to three datasets to demonstrate the effectiveness of the proposed method.

The rest of this paper is structured as follows. Section II introduces the proposed SS-STANN method. Section III describes the datasets and implementation details, Section IV presents the experimental results, and Section V concludes the paper.

## II Method

### _Overview_

Figure 1 presents an overview of the proposed pipeline. The EEG data are first graph-smoothed via graph filters, and then sliced by non-overlapping sliding windows to obtain data samples. Next, the data are fed to the proposed deep STANN architecture to learn a discriminative representation for the classification of emotional states.

### _Preprocessing using Graph Filtering_

EEG data are recorded during a total time period \(T\) from \(n\) different electrodes mounted over the scalp, which results in a two-dimensional (2D) signal, \(\mathbf{X}\in\mathbb{R}^{n\times T}\). Recent works commonly apply hand-crafted features or raw EEG data as the input of deep neural networks. Exploring the structural and functional connectivity of the brain [35] and tracking the relative spatial positions of EEG nodes can be useful for decoding responses elicited by sensory stimuli [36]. In order to exploit GSP tools, we need to define an underlying graph. Thus, we calculate the pairwise Euclidean distances of EEG electrodes and build the graph accordingly. In this way, we solely require the Cartesian coordinates of the electrodes, while classical common spatial pattern (CSP) filtering depends on individual subjects or tasks. Let us consider an undirected, weighted graph \(\mathcal{G}(\mathcal{V},\mathcal{E},\mathbf{A})\), where \(\mathcal{V}=\{1,2,...,n\}\) is the set of nodes or EEG channels, \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges or spatial connections, and \(\mathbf{A}\in\mathbb{R}^{n\times n}\) is the adjacency matrix. The edge weights \(\mathbf{A}_{ij}\) are inversely proportional to the pairwise Euclidean distances between nodes \(i\) and \(j\).
Subsequently, the edge weights are computed from the pairwise distances \(d_{ij}\) as follows: \[\mathbf{A}_{ij}=d_{ij}^{-1},\ \mathbf{A}_{ii}=0,\ \ \text{for}\ i,j=1,2,...,n. \tag{1}\] For each electrode, _K_-NN is considered to construct the adjacency matrix while keeping it symmetric to represent the brain topology [37]. To avoid a densely-connected graph, we set \(K\) to \(2\) and \(4\). In the literature, a \(2\)-NN topology was motivated by separating the graph into fronto-temporal and parieto-occipital networks [38]. The \(4\)-NN graph brings engagement with central areas as well. Figure 2 shows the final adjacency matrices for these \(K\) values and their corresponding scalp topologies for \(32\) nodes with the \(10\)-\(20\) electrode placement setup.

Fig. 1: Overview of the proposed SS-STANN architecture. The CNN blocks (blue) adopt the ReLU activation function and batch normalization. Pooling blocks (orange) are followed by a dropout layer. IN1 and IN2 correspond to the inputs of the STE and RAN blocks, respectively. (STE: spatio-temporal encoding, RAN: recurrent attention network)

Fig. 2: Illustration of the adjacency matrix (**left**) and its corresponding graph representation (**right**) for a sample \(10\)-\(20\) electrode placement setup.

Furthermore, one can extract informative features using the spectral representation of spatial signals. Using the GFT, one can analyze the spatial frequency of the signals defined over the graph. The combinatorial graph Laplacian \(\mathbf{L}=\mathbf{D}-\mathbf{A}\) is needed for the calculation of the GFT, in which \(\mathbf{D}_{ii}=\sum_{k}\mathbf{A}_{ik}\) is a diagonal matrix of nodal degrees [28]. Given a graph signal \(\mathbf{X}\), one can compute the GFT with respect to \(\mathbf{L}\) as follows: \[\tilde{\mathbf{X}}=\mathbf{V}^{T}\mathbf{X} \tag{2}\] where \(\mathbf{V}\) is the orthonormal \(n\times n\) matrix of eigenvectors of the matrix \(\mathbf{L}\). Since electrodes installed in adjacent locations detect electrical activities of common sources [29], we apply smoothing via low-pass graph filtering with respect to the defined graph. This ensures similar behavior among neighboring electrodes. Low frequencies in the graph correspond to small eigenvalues. Considering \(\mathbf{\tilde{h}}=[\tilde{h}_{1},\tilde{h}_{2},...,\tilde{h}_{n}]\) the ideal low-pass filter with bandwidth \(w\in\{1,2,...,n\}\), where \(\tilde{h}_{i}=0\) if \(i>w\), the GFT coefficients corresponding to the low frequencies with respect to \(\mathcal{G}\) are given by: \[\tilde{\mathbf{X}}_{low}=\text{diag}(\mathbf{\tilde{h}})\tilde{\mathbf{X}} \tag{3}\] where \(\tilde{h}_{i}\) is equal to one for \(i\in[1,\frac{n}{2}]\) (i.e., \(w=\frac{n}{2}\)) and zero otherwise. While we choose the filter bandwidth as \(\frac{n}{2}\), users can choose a different value depending on the level of smoothing they want to apply. A smaller bandwidth leads to more smoothing. Next, the inverse GFT (iGFT), \(\mathbf{X}_{smooth}=\mathbf{V}\tilde{\mathbf{X}}_{low}\), is applied to obtain the smoothed data in the spatial domain [28].
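A compact numpy sketch of this preprocessing step is given below. It assumes `coords` holds the 3D Cartesian coordinates of the \(n\) electrodes; since the eigendecomposition of the combinatorial Laplacian orders the graph frequencies from low to high, the ideal low-pass filter of Eq. (3) reduces to zeroing the GFT coefficients above the bandwidth.

```python
import numpy as np

def knn_adjacency(coords, k=4):
    """Symmetric K-NN adjacency with weights A_ij = 1/d_ij (Eq. (1))."""
    n = coords.shape[0]
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.zeros((n, n))
    for i in range(n):
        nearest = np.argsort(dist[i])[1:k + 1]   # skip the node itself
        A[i, nearest] = 1.0 / dist[i, nearest]
    return np.maximum(A, A.T)                    # keep the graph symmetric

def graph_smooth(X, A, bandwidth=None):
    """Low-pass graph filtering (Eqs. (2)-(3)) followed by the inverse GFT."""
    L = np.diag(A.sum(axis=1)) - A               # combinatorial Laplacian L = D - A
    eigvals, V = np.linalg.eigh(L)               # eigenvalues ascending: low freq first
    X_hat = V.T @ X                              # GFT (Eq. (2))
    w = bandwidth or X.shape[0] // 2             # default bandwidth n/2
    X_hat[w:, :] = 0.0                           # ideal low-pass mask h (Eq. (3))
    return V @ X_hat                             # iGFT -> smoothed signals
```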
### _The Proposed Hybrid SS-STANN Network_

After the preprocessing and smoothing steps, a sliding window of length \(k\) is applied to obtain the EEG slices \(\mathbf{Z}_{i}\in\mathbb{R}^{n\times k}\). Based on the overview presented in Figure 1, the proposed network is organized into two main blocks: the spatio-temporal encoding (STE) and the recurrent attention network (RAN) blocks. The STE is designed to extract the spatio-temporal information from the temporal slices \(\mathbf{Z}_{i}\). This block consists of multiple columns of \(2\)D CNNs. The STE follows the structure proposed in [39, 40]. This architecture contains several independently acting columns, each functioning as a deep network. All columns receive the same input data and their weights are initialized randomly. The output of this model is equal to the average of all the columns' outputs. In this work, each column contains a series of \(2\)D CNN, batch normalization, and average-pooling followed by dropout layers, with different kernel sizes and numbers of filters. Presuming the STE has \(M\) columns, the feature map of the \(m\)-th column is \(\mathbf{f}^{m}\). The \(M\) feature maps, \(\{\mathbf{f}^{m}\}_{m=1}^{M}\), are merged to get the final feature map \(\mathbf{f}\). The final feature map passes through the \(1\times 1\) conv layer. This structure enables using different kernel sizes per column to detect informative features at very different temporal and spatial scales across nearby EEG channels. Details of the implementation are described in Section III.

The RAN block is composed of two bidirectional LSTM layers and an attention mechanism. LSTM networks are recurrent neural networks that capture the dependencies within time steps from sequential data. In LSTMs, the temporal dynamic behavior is captured by feedback connections [41]. Since RNNs are trained by back-propagation through time, an RNN cell is replaced by an LSTM cell to avoid the vanishing gradient problem [41]. Let \(x_{t}\) and \(h_{t}\) denote the input data and the hidden state at time \(t\), respectively. LSTM performance is controlled by three gates: (1) the forget gate \(f_{t}\) selects the information to keep or forget, (2) the input gate \(i_{t}\) controls the flow of the input, and (3) the output gate \(o_{t}\) calculates the output of the given updated cell (Figure 3). The formulas governing the operations in the LSTM cell are as follows [41]: \[f_{t}=\sigma(W_{f}.[h_{t-1},x_{t}]+b_{f}) \tag{4}\] \[i_{t}=\sigma(W_{i}.[h_{t-1},x_{t}]+b_{i}) \tag{5}\] \[C_{t}=f_{t}\circ C_{t-1}+i_{t}\circ\text{tanh}(W_{C}.[h_{t-1},x_{t}]+b_{C}) \tag{6}\] \[o_{t}=\sigma(W_{o}.[h_{t-1},x_{t}]+b_{o}) \tag{7}\] \[h_{t}=o_{t}\circ\text{tanh}(C_{t}) \tag{8}\] where \(W_{(.)}\) and \(b_{(.)}\) are weights and biases, respectively, and \(C_{t}\) is the cell state at time \(t\). The operator \(\circ\) denotes element-wise vector multiplication. A bidirectional LSTM (BiLSTM) consists of two LSTM blocks that allow the layer to receive information from the sequential input data simultaneously in forward and backward directions. The output of each layer is the concatenation of the outputs of the two LSTM blocks, i.e., \(h_{i}=[\overrightarrow{h_{f}},\overleftarrow{h_{b}}]\), where \(\overrightarrow{h_{f}}\) and \(\overleftarrow{h_{b}}\) correspond to the forward and backward hidden states, respectively [41]. In several sequence-based applications such as semantic analysis, natural language processing, and medical imaging, certain time steps of the input data might contain the most discriminative information, and the attention mechanism addresses this issue by focusing on specific time steps [42]. In this mechanism, the most discriminative task-related features are calculated by multiplying the outputs of the hidden states by trainable weights.
The output of the attention layer, \(v\), is computed as below: \[v=\sum_{i}\alpha_{i}h_{i} \tag{9}\] \[\alpha_{i}=\frac{\exp(Wh_{i}+b)}{\sum_{j}\exp(Wh_{j}+b)} \tag{10}\] where \(h_{i}\) denotes the LSTM output at the \(i\)-th time step, and \(W\) and \(b\) are trainable parameters.

Fig. 3: LSTM cell block.

### _Training, Optimization, and Transfer Learning_

In order to examine the performance of the proposed network, the features of the STE and RAN blocks are fused and fed to the dense layer. Next, the encoded representation is fed to the final softmax classifier. The cross-entropy loss, \(\mathcal{L}\), is calculated as follows: \[\mathcal{L}=-\sum_{i}Y_{i}\log\hat{Y}_{i}, \tag{11}\] where \(Y_{i}\) is the ground-truth emotion label for each data sample and \(\hat{Y}_{i}\) is the predicted label. Finally, the weights and the biases are trained with batch gradient descent. The trained model is used to perform the supervised emotion classification task.

Additionally, in the case of medical imaging in general and EEG-based diagnosis in particular, there exist legitimate interests and needs for using such a trained model to solve a similar problem with insufficient training data [34, 43, 44]. To this end, the model has to be trained on the whole data to get the transferable model parameters. The target data are never seen in this training phase. To investigate the possibility of using this trained network in similar EEG-based emotion recognition tasks, we propose and implement a transfer learning (TL) approach. The goal of TL is to test our model's ability in real-life conditions where the available amount of labeled data is not sufficient. TL helps to improve the learning capability on the target data by leveraging the knowledge of the source domain. In this study, we investigate transferring the learned model parameters, assuming that individual models across different datasets with similar tasks should share some parameters. First, the model is fully trained using sufficient labeled data (the source dataset). Second, we explore different schemes to tune the pre-trained network via the target dataset. The source and target datasets involve EEG-based emotion recognition experiments with different stimuli. Figure 4 demonstrates the five different schemes that we consider in the calibration (fine-tuning) session. The TL schemes in our STE blocks are inspired by observations in CNN-based TL frameworks in computer vision, where usually the later network layers are retrained, as the earlier layers are responsible for generic features [45]. In our work, we consider different retrainable cells in both the STE and RAN blocks. In particular, going from scheme (a) to scheme (e), we change the status of exactly one layer either to retrainable or non-retrainable (frozen) at each step. In each scheme, blocks marked with a cross are left unchanged during fine-tuning of the network. The number of retrainable parameters in schemes (a) to (e) is equal to \(239100\), \(311420\), \(280295\), \(207975\), and \(53735\), respectively. Due to the variations in inter-dataset samples, we use a small part of the new data (target data), \(\mathbf{N}\), to calibrate and fine-tune our pre-trained model. Since the amount of calibration data is limited, we scale down the initial learning rate to avoid clobbering the initialization [46]. Scaling down the initial learning rate \(\eta\) with the scale \(\alpha\) can be written as: \[\Phi^{i+1}=(1-\alpha)\Phi^{i}+\alpha(\Phi^{i}-\eta\frac{\partial\mathcal{L}}{\partial\Phi}) \tag{12}\] where \(\Phi^{i}\) denotes the trainable parameters at the \(i\)-th iteration and \(\mathcal{L}\) is the cross-entropy loss function. Here, \(\alpha\) is set to \(0.1\).
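As a concrete illustration, the fine-tuning setup can be sketched as follows in PyTorch. Note that the update in (12) simplifies to \(\Phi^{i+1}=\Phi^{i}-\alpha\eta\,\partial\mathcal{L}/\partial\Phi\), i.e., plain gradient descent with the learning rate scaled by \(\alpha\). The block names passed to `frozen_names` are hypothetical; they stand for whichever STE/RAN layers a given scheme freezes.

```python
import torch

def fine_tune_setup(model, frozen_names, lr=1e-3, alpha=0.1):
    """Freeze the blocks marked in the chosen TL scheme and scale down the
    initial learning rate; Eq. (12) is equivalent to SGD with step alpha*lr."""
    for name, param in model.named_parameters():
        param.requires_grad = not any(name.startswith(f) for f in frozen_names)
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.SGD(trainable, lr=alpha * lr)

# e.g., a scheme where only the final dense/softmax layers stay retrainable:
# optimizer = fine_tune_setup(model, frozen_names=["ste", "ran"])
```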
## III Dataset and Implementation

### _Datasets_

#### Iii-A1 DEAP Dataset

The DEAP dataset was recorded from \(32\) individuals, each of whom rated \(40\) music videos of \(60\) s [30]. After each video, the participants performed a self-assessment to report their emotional states by rating the levels of valence, arousal, dominance, and liking from \(1\) to \(9\). The physiological recordings consist of \(32\) channels of EEG signals and \(8\) channels of peripheral physiological data. Here, we only consider the EEG recordings of each trial. The preprocessing scheme is as follows: 1) down-sampling the data to \(128\) Hz, 2) averaging to the common reference, 3) removing electrooculography (EOG) artifacts, and 4) applying a bandpass filter with the frequency range of \([4,45]\) Hz. Accordingly, each recording contains a \(3\) s pre-trial relaxation phase followed by \(60\) s of trial data.

#### Iii-A2 DREAMER Dataset

The DREAMER dataset was recorded with a \(14\)-channel Emotiv EPOC wireless EEG headset [33]. The data were recorded with a \(128\) Hz sampling frequency from \(23\) participants while watching \(18\) film clips. Each film clip lasted \(65\) to \(393\) s to elicit the emotional states. Before data collection, participants watched a neutral film clip to neutralize their emotional state. Data collected while watching this neutral film served as the baseline. Participants were asked to assess their emotional states by rating valence, arousal, and dominance levels for each video from \(1\) to \(5\). For consistency with the DEAP dataset, each recording contains a \(3\) s pre-trial relaxation phase followed by \(60\) s of trial data and is band-pass filtered from \(4.0\) to \(45.0\) Hz.

#### Iii-A3 Emotional English Word Dataset (EEWD)

Data collection was performed via \(64\) Ag/AgCl active electrodes located over the scalp based on the \(10\)-\(10\) International Electrode Placement System while participants were rating emotional words. Participants provided signed informed consents in accordance with the Sabanci University Research Ethics Council guidelines. EEG data were recorded using BioSemi ActiveTwo systems (Biosemi Inc., Amsterdam, the Netherlands) in a dimly lit EEG room within a Faraday cage. A dataset of highly-arousing English words was formed such that \(65\) negative words (arousal \(>6\) and valence \(<3\)) and \(63\) positive words (arousal \(>6\) and valence \(>7\)) were selected from the Affective Norms for English Words (ANEW) dataset [47]. Details of the compilation of this small dataset, the Original English List (OEL), are presented in [48]. Thirty native Turkish speakers, for whom English served as a secondary language, participated in the experiment. The experiment consisted of four blocks and each block contained \(32\) randomly selected words. Each word was presented for \(1\) s and then the participants were asked to rate the valence and arousal in the range of \(1\) to \(9\) using a set of pictorial self-assessment manikins (SAM) [48]. The data of two participants were discarded due to technical problems.
The recorded signals were down-sampled from \(2048\) Hz to \(128\) Hz, EOG artifacts were removed via independent component analysis (ICA), and a bandpass filter from \(4.0\) to \(45.0\) Hz was applied. All preprocessing steps were conducted using the EEGLAB toolbox [49].

### _Implementation of the Proposed Network_

The proposed network consists of two parts that operate in parallel [27]. The input of the STE, IN1, has the dimensions of n\(\times\)k\(\times 1\), where \(k\) is set to \(128\), corresponding to the \(1\) s slicing window length. To select the STE parameters, we perform a grid search over a subset of kernel sizes, \([9,7,5,3]\), and choose the parameters corresponding to the best performance on the selected training data samples. Table I provides the details of the STE structure. Each average-pooling layer is followed by a dropout layer. The dropout probability rates for the two dropout layers are set to \(0.5\) and \(0.4\), respectively. In order to prevent edge information loss, the same zero-padding technique is used in each convolution (conv) operation. Here, we adopt the rectified linear unit (ReLU), which has been used in many related CNN-based applications. After concatenating the outputs of all columns, a \(1\times 1\) conv filter is applied to compute the spatial feature maps. The input of the RAN block, IN2, has the size of \(k\times n\). The RAN consists of two BiLSTM layers with the same hidden layer size. The hidden layer size and the number of time steps are set to \(80\) and \(128\), respectively. We choose the hyperbolic tangent (tanh) as the activation function for all BiLSTM cells. Each pair of forward and backward LSTM cells is followed by a dropout layer, with probability rates of \(0.3\) and \(0.2\), respectively. The BiLSTM outputs are then used as the input of the attention mechanism. The spatial and temporal representations extracted from the STE and RAN blocks are flattened and concatenated. Next, we apply a fully connected layer where the number of hidden units is \(128\). In the end, we apply the SoftMax operation to obtain classification labels.

## IV Experiments

### _Results and Analysis on the DEAP Dataset_

In this section, we investigate the performance of the proposed DL architecture on the DEAP dataset. The DEAP dataset involves subjects watching long, continuous videos; hence, trials can be defined arbitrarily as smaller segments from this dataset. While the DEAP dataset defines \(60\) s EEG intervals as trials, various methods have used different intervals as samples for training and testing. In this work, the trial data are baseline-corrected and a non-overlapping sliding window with a length of \(1\) s is applied to slice the \(60\) s trials. The final size of each data sample equals \(32\times 128\), where \(32\) is the number of the EEG nodes and \(128\) is the number of time samples. Thus, the data for each participant consist of \(40\times 60=2400\) data samples. While our experiments and those of other methods we compare against have allowed non-overlapping \(1\) s data samples from any \(60\) s trial to be assigned to training or test sets randomly, a different approach could be to assign all \(1\) s samples from a particular trial to either the training or the test set. In order to explore different frequency bandwidths, each data sample is filtered into five subbands: theta, alpha, beta, gamma, and wide-band.
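For reference, the segmentation and subband filtering just described can be sketched as follows (a scipy-based sketch; the Butterworth filter order and the exact gamma band edges, capped by the \(45\) Hz preprocessing filter, are our assumptions).

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Band edges follow the subband ranges stated in the paper; the gamma upper
# edge is capped at 45 Hz by the preprocessing bandpass filter (assumption).
SUBBANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 29),
            "gamma": (30, 45), "wide": (4, 45)}

def make_samples(trial, fs=128, win_sec=1.0, band="wide"):
    """Band-pass a trial (channels x time) into one subband and slice it
    into non-overlapping 1 s windows of shape (channels, fs)."""
    low, high = SUBBANDS[band]
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, trial, axis=-1)
    win = int(win_sec * fs)
    num = filtered.shape[-1] // win
    samples = filtered[:, :num * win].reshape(filtered.shape[0], num, win)
    return samples.transpose(1, 0, 2)  # (num_samples, channels, time)
```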
In order to validate the performance of the proposed method, we consider three binary classification problems, i.e., high-versus-low valence, high-versus-low arousal, and high-versus-low dominance. Considering a threshold of \(5\), we quantize the \(9\)-level ratings of valence, arousal, and dominance into two levels to obtain a binary problem. The model is trained using subject-dependent \(10\)-fold cross-validation (CV). The validation process is repeated, and the average classification performance is reported. The Adam optimizer [50] is used to minimize the cross-entropy between the predicted labels and true labels. The number of epochs and the batch size are set to \(50\) and \(300\), respectively. Grid search is applied to select the hyper-parameters that maximize the average classification accuracy based on training data.

Fig. 4: The proposed transfer learning schemes. The blocks marked with a black cross are the ones that remain frozen during fine-tuning.

#### Iv-A1 Ablation Study

To analyze the impacts of using the spatio-temporal encoding, recurrent network, and attention mechanism, we have performed an ablation study by considering each block of the proposed STANN, namely STE and RAN, as baseline models and evaluating their performance separately using the aforementioned parameters. In order to assess the effect of graph smoothing, we compute the graph-smoothed signals, assess the network performance in frequency subbands, and present the performance results of the proposed method with two different input modalities, i.e., raw EEG data and smoothed EEG data. Since the GFT computation and smoothing do not depend on a specific task or subject, they do not interfere with the automated operation of our model. Details of the ablation experiments are provided in Table II. Tables III, IV, and V show the obtained average accuracies and standard deviations (SD) for binary valence, binary arousal, and binary dominance classification for the baseline models and the proposed method based on data from different frequency subbands with different inputs. To address cases involving unbalanced data, we also report F1-scores for the proposed model in all three classification problems. SS2-(.) and SS4-(.) correspond to graph-smoothed signals with \(2\)-NN and \(4\)-NN adjacency matrices, respectively.

Results of DEAP dataset classification in Tables III to V demonstrate that graph smoothing leads to better performance than the use of raw EEG input data. Moreover, in the majority of classification scenarios, the best performance is driven by wide-band data and SS4-STANN. The average classification accuracies for the binary valence, arousal, and dominance classification problems based on SS4-STANN are \(95.6\%\), \(97.0\%\), and \(96.8\%\), respectively. These results indicate that our proposed method outperforms the baseline models and that graph smoothing enhances the overall classification performance. Moreover, our findings show that the beta and wide-band frequency subbands outperform other spectral features in the binary classification problems of high-versus-low valence, arousal, and dominance, which is in line with the role of different frequency subbands in characterizing affective states [7, 51].
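For completeness, the evaluation protocol used throughout these experiments can be sketched as follows with scikit-learn's KFold. Whether ratings exactly equal to the threshold fall into the low or high class is our assumption, and `build_model` stands for any constructor of the proposed network exposing a scikit-learn-style fit/score interface.

```python
import numpy as np
from sklearn.model_selection import KFold

def binarize_ratings(ratings, threshold=5):
    """Quantize 9-level valence/arousal/dominance ratings into two classes."""
    return (np.asarray(ratings) > threshold).astype(int)

def subject_dependent_cv(samples, labels, build_model, n_splits=10):
    """Subject-dependent 10-fold CV over one participant's data samples."""
    accs = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True).split(samples):
        model = build_model()
        model.fit(samples[train_idx], labels[train_idx])
        accs.append(model.score(samples[test_idx], labels[test_idx]))
    return float(np.mean(accs)), float(np.std(accs))
```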
#### Iv-A2 Comparison with the State of the Art

We compare the performance of our proposed SS4-STANN method for the classification of valence and arousal from wide-band EEG of the DEAP dataset with a number of state-of-the-art solutions, namely DCCA [52], ECLGCNN [53], DGCNN [54], CVCNN [55], CRAM [56], ACRNN [17], S-EEGNet [19], and Casc-CNN-LSTM [26]. In DCCA, DE features are first computed in four different frequency bands, and then the network computes the representations of two modalities by passing them through multiple stacked layers of nonlinear transformations. ECLGCNN uses the fusion of graph convolutional neural networks with LSTMs, while the DE of the windowed EEG data is considered as the input. DGCNN computes DE features and applies a \(2400\)-dimensional feature vector as the input of the graph CNN. CVCNN utilizes raw EEG and normalized EEG data in combination with PSD features. CRAM extracts spatio-temporal information along with attentive temporal dynamics in a cascaded format. The model uses a CNN layer with fixed kernel and filter sizes, and the temporal information is extracted from the extracted spatial features, which makes the temporal information dependent on the spatial features. ACRNN uses an attention technique to get different weights for each EEG channel, followed by a CNN to extract spatial features. S-EEGNet applies the Hilbert-Huang transform to preprocess the EEG data before feeding the data to a separable CNN. Casc-CNN-LSTM applies hybrid convolutional recurrent neural networks by transforming 1D EEG vector sequences into 2D mesh-like matrices. Table VI presents a comparison of the proposed SS4-STANN on wide-band data with the above-mentioned methods from the recent literature and demonstrates the superiority of our proposed approach. The reported results are all subject-dependent with a \(10\)-fold CV, except [19], where the authors performed a \(4\)-fold CV.

#### Iv-A3 How does the STE block encode spatial information?

To investigate the learning process of the STE block, activations of the last conv layer are visualized to demonstrate wide-band EEG feature maps learned during the valence and arousal recognition tasks. The outputs of the conv layers are averaged along the time dimension to obtain the key spatial features for each kernel separately. This operation results in a \(32\)-dimensional vector for each kernel. The feature vector is then normalized to the range of \(0\) to \(1\). Figures 5 and 6 present the topographic scalp plots for the first kernels of each column for valence and arousal, respectively. The representations are averaged over the data samples and subjects. For the high-vs-low valence and arousal problems, most of the activity is over the frontal, temporal, and central lobes, which is consistent with the literature regarding the processing of human emotions [57]. The valence difference plots for all columns in Figure 5 show the role of the temporal and parietal lobes in positive and negative emotions [58, 59]. The difference plot for arousal classification in column \(2\) of Figure 6 shows higher activity in the left cortex and frontal lobe, similar to previous observations in the literature [60]. Moreover, the plots show that arousal processing has a more widely scattered pattern over the brain than valence [25].
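The per-kernel topographic maps are obtained by the simple reduction below (a numpy sketch; `activations` is assumed to be one kernel's feature map of shape (channels, time) from the last conv layer, already averaged over data samples and subjects).

```python
import numpy as np

def kernel_topography(activations):
    """Average a kernel's conv feature map over time and min-max normalize
    to [0, 1], yielding one value per EEG channel for scalp plotting."""
    spatial = activations.mean(axis=-1)      # (channels,) after averaging time
    spatial = spatial - spatial.min()
    rng = spatial.max()
    return spatial / rng if rng > 0 else spatial
```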
### _Evaluating Transfer Learning Performance_

To show that the learned representations based on SS-STANN have the generality to be applied to similar tasks with limited labeled data, we perform TL in two different settings, namely cross-subject TL and cross-dataset TL. In order to make a fair comparison among the different amounts of calibration data, we set 10% of the target data samples aside as a test set, and the calibration data are selected from the remaining 90% of the data.

#### Iv-B1 Cross-Subject TL

To perform cross-subject TL, one subject of the DEAP dataset is considered the target subject and the rest of the subjects are source subjects. Given its superior performance in the supervised classification task, we use SS4-STANN for the TL experiments. First, the network is trained for \(100\) epochs on the source subjects' data samples to get the pre-trained network. Second, we use \(\mathbf{N}\) data samples of the target subject's data to fine-tune the pre-trained model. During the fine-tuning process, the number of epochs is set to \(10\) when \(\mathbf{N}\) equals \(1\) trial per class and \(20\) when \(\mathbf{N}\) equals \(10\%\) and \(20\%\) of data samples.

Fig. 5: Topographic feature maps for the weight distribution of the first kernel of the last convolutional block of each column in the STE during valence classification. The feature values are averaged across the subjects and normalized to the range of \(0\) to \(1\). (Col: Column)

Fig. 6: Topographic feature maps for the weight distribution of the first kernel of the last convolutional block of each column in the STE during arousal classification. The feature values are averaged across the subjects and normalized to the range of \(0\) to \(1\). (Col: Column)

Table VII presents the average TL results for the different TL schemes for binary valence and arousal classification. As presented, increasing the number of calibration data samples improves the classification performance. To validate the effect of TL on the classification performance, we present the performance of the proposed model without TL with \(\mathbf{N}=10\%\) and \(20\%\) in Table VIII. Comparing the results of Tables VII and VIII, it is clear that TL helps to increase performance across different subjects.

#### Iv-B2 Cross-Dataset TL

To apply cross-dataset TL, we trained our proposed model on the whole DEAP dataset and tuned the network on new EEG emotion recognition datasets collected with different stimuli. To this end, we choose the publicly available DREAMER dataset and the EEWD. To have consistency between the datasets' characteristics, namely EEG electrodes, frequency bands of interest, and the sliding window for segmentation, we choose the following settings. Fourteen electrodes common to all datasets are selected, namely AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4. The same data slicing process is also applied to the DREAMER dataset. Since the trial length in DREAMER varies from trial to trial, we select the last \(60\) s of each trial and segment it into \(1\) s data samples. The final number of data samples corresponding to DREAMER is equal to \(18\times 60=1080\) per participant. The length of each trial sample in EEWD is \(1\) s, which leaves us with \(128\) time steps. For each value of \(\mathbf{N}\), the best performing TL scheme is indicated in bold. The TL process contains two main steps. First, the proposed network is trained on the whole DEAP dataset.
Given its superior performance in the supervised classification task, we use SS4-STANN for the transfer learning experiments. \(10\%\) of the wide-band EEG samples of each subject in the DEAP dataset is considered for validation and the rest of the samples are set aside for training. The network is trained for \(100\) epochs and the model parameters corresponding to the lowest validation loss are considered the final pre-trained network. Second, we use \(\mathbf{N}\) samples of the target data for calibration to fine-tune the pre-trained model. In order to have a binary classification problem for the DREAMER dataset, the valence and arousal ratings are divided into two levels using a threshold of \(3\). Table X presents the TL results for the DREAMER dataset for different schemes and various amounts of calibration data. We also report F\(1\)-score values to avoid the possibility of bias toward one class. To avoid overfitting in the fine-tuning process, the number of epochs is set to \(1\) when \(\mathbf{N}\) equals \(1\) trial per class, and to \(20\) when \(\mathbf{N}\) equals \(10\%\), \(20\%\), and \(90\%\) of data samples. In the latter cases, early stopping with a patience parameter of \(10\) is used for the tuning process. As expected, an increase in the amount of calibration data improves the classification performance. To show the effectiveness of transferring pre-trained model parameters, the performance for \(\mathbf{N}=10\%\), \(20\%\), and \(90\%\) of data samples without applying TL is considered as the baseline, which is shown in Table IX. Observing the results in Tables IX and X, it is clear that tuning the pre-trained network increases the classification performance for the binary valence and arousal problems. Without using TL, the valence classification accuracies for \(\mathbf{N}=10\%\), \(20\%\), and \(90\%\) of data samples are \(65.8\%\), \(67.6\%\), and \(70.8\%\), respectively. However, TL increases the performance for the same \(\mathbf{N}\) values to \(72.0\%\), \(74.4\%\), and \(83.0\%\). Similar performance improvement is observed for the arousal classification problem. Considering Tables IX and X, for the different \(\mathbf{N}\) values, the results without and with TL increase from \(73.5\%\), \(75.6\%\), and \(77.6\%\) to \(78.2\%\), \(81.0\%\), and \(87.2\%\), respectively.

To evaluate the transfer of models trained with EEG elicited by video clips to EEG signals obtained in response to written words, we consider the EEWD and perform a binary valence classification scenario, since all the words in that experiment were selected from the high-arousal group. We consider trials corresponding to rating values lower than or equal to \(3\) as negative valence and trials corresponding to rating values higher than or equal to \(7\) as positive valence. During the tuning process, the number of epochs and early stopping are set as described for the DREAMER dataset fine-tuning phase. The output prediction of the network is performed on each trial separately. Table XI displays the results on the EEWD dataset for different values of \(\mathbf{N}\). Considering \(\mathbf{N}=1\) trial per class and \(\mathbf{N}=10\%\) and \(20\%\) of data samples, the performance reaches up to \(68.2\%\), \(68.9\%\), and \(73.5\%\), corresponding to schemes (a), (c), and (b), respectively. Due to the limited number of data samples, we do not present results analogous to Table IX for this dataset, since the model overfits in early epochs.
#### Iv-B3 Verification of TL To establish that the proposed structure works properly and that TL helps to separate the components related to each class, we randomly pick the EEG samples of one of the subjects in the DREAMER dataset to visualize them with t-SNE [61] (Figure 7). Figure 7(a) shows the scatter plot of samples in the last dense layer before the classification softmax layer without TL and fine-tuning. Figure 7(b) presents the results after TL and fine-tuning. As can be seen, before the fine-tuning process, the representations corresponding to different classes are more mixed up, and TL helps make them more separable. ## V Conclusion In this study, we proposed a novel deep learning architecture for subject-dependent EEG-based emotion classification tasks. The proposed SS-STANN involves a hybrid structure with parallel STE and RAN blocks and an attention mechanism. SS-STANN captures the spatial and temporal information inherent in multi-channel EEG data while enforcing graph smoothness in the spatial domain. We demonstrated that this work performs better than other state-of-the-art solutions, with classification accuracies of over 95.0\(\%\) for valence, arousal, and dominance for the DEAP dataset. Moreover, the critical frequency bands and regions were explored to validate the performance results. We also showed that the representations extracted from one EEG experiment could be used in other EEG emotion recognition tasks with similar and different stimulus modalities, highlighting the cross-modal transferability potential of the trained model and learned representations. In the future, we will investigate the effect of using different modalities along with EEG in a similar problem. Also, we will concentrate on conducting real-time experiments based on the proposed framework. Moreover, future work may involve adopting the spatio-temporal feature learning ideas presented here to problems involving other modalities of physiological data such as functional MRI (fMRI) or magnetoencephalography (MEG).
2307.12149
CorrFL: Correlation-Based Neural Network Architecture for Unavailability Concerns in a Heterogeneous IoT Environment
The Federated Learning (FL) paradigm faces several challenges that limit its application in real-world environments. These challenges include the local models' architecture heterogeneity and the unavailability of distributed Internet of Things (IoT) nodes due to connectivity problems. These factors posit the question of "how can the available models fill the training gap of the unavailable models?". This question is referred to as the "Oblique Federated Learning" problem. This problem is encountered in the studied environment that includes distributed IoT nodes responsible for predicting CO2 concentrations. This paper proposes the Correlation-based FL (CorrFL) approach influenced by the representational learning field to address this problem. CorrFL projects the various model weights to a common latent space to address the model heterogeneity. Its loss function minimizes the reconstruction loss when models are absent and maximizes the correlation between the generated models. The latter factor is critical because of the intersection of the feature spaces of the IoT devices. CorrFL is evaluated on a realistic use case, involving the unavailability of one IoT device and heightened activity levels that reflect occupancy. The generated CorrFL models for the unavailable IoT device from the available ones trained on the new environment are compared against models trained on different use cases, referred to as the benchmark model. The evaluation criteria combine the mean absolute error (MAE) of predictions and the impact of the amount of exchanged data on the prediction performance improvement. Through a comprehensive experimental procedure, the CorrFL model outperformed the benchmark model in every criterion.
Ibrahim Shaer, Abdallah Shami
2023-07-22T19:23:06Z
http://arxiv.org/abs/2307.12149v1
CorrFL: Correlation-based Neural Network Architecture for Unavailability Concerns in a Heterogeneous IoT Environment ###### Abstract The Federated Learning (FL) paradigm faces several challenges that limit its application in real-world environments. These challenges include the local models' architecture heterogeneity and the unavailability of distributed Internet of Things (IoT) nodes due to connectivity problems. These factors posit the question of "how can the available models fill the training gap of the unavailable models?". This question is referred to as the "Oblique Federated Learning" problem. This problem is encountered in the studied environment that includes distributed IoT nodes responsible for predicting CO\({}_{2}\) concentrations. This paper proposes the Correlation-based FL (CorrFL) approach influenced by the representational learning field to address this problem. CorrFL projects the various model weights to a common latent space to address the model heterogeneity. Its loss function minimizes the reconstruction loss when models are absent and maximizes the correlation between the generated models. The latter factor is critical because of the intersection of the feature spaces of the IoT devices. CorrFL is evaluated on a realistic use case, involving the unavailability of one IoT device and heightened activity levels that reflect occupancy. The generated CorrFL models for the unavailable IoT device from the available ones trained on the new environment are compared against models trained on different use cases, referred to as the benchmark model. The evaluation criteria combine the mean absolute error (MAE) of predictions and the impact of the amount of exchanged data on the prediction performance improvement. Through a comprehensive experimental procedure, the CorrFL model outperformed the benchmark model in every criterion. Federated Learning, Oblique Federated Learning, Model Heterogeneity, Connectivity Issues, IoT Device Dependability, Representational Learning, CO\({}_{2}\) prediction, HVAC systems ## I Introduction The evolution of sensing and computing technologies spearheaded the rise of inter-connected devices, known as the Internet of Things (IoT). These devices, which include sensors and edge nodes, are tasked with collecting data from the ambient environment to facilitate the automation systems' decision-making process [1]. Since each IoT device captures an aspect of the environment, a centralized server can be employed to gather all IoT device data and create Machine Learning (ML) models to realize a specific task. However, this centralized paradigm is faced with many challenges. These challenges include bandwidth limitations, exposing the environment to a single point of failure, data privacy concerns, and the availability of individual sensors [2, 3]. Federated Learning (FL) [3] is a decentralized paradigm that addresses these challenges by collaboratively training an ML model using local agents' models. This paradigm follows a two-stage process. The local agents train their models using their collected data, and their corresponding model weights are dispatched to the central server. After that, the central server aggregates these weights and transmits the updated weights to the local agents. Lastly, each local agent updates its model weights using the shared model's weights. However, this crude implementation faces several hurdles hampering its deployment in real-world environments.
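For reference, a minimal sketch of one such communication round under the standard FedAvg aggregation is shown below. The model and data loaders are placeholders, and the sketch assumes, as vanilla FL does, that every agent shares one architecture; the hurdles discussed next arise exactly when that assumption breaks.

```python
import copy
import torch

def local_update(global_model, loader, epochs=1, lr=1e-3):
    """Stage one: a local agent trains on its own data and returns weights."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fed_avg(weight_sets):
    """Stage two: the server averages the dispatched weights element-wise
    and transmits the result back to every local agent."""
    avg = copy.deepcopy(weight_sets[0])
    for key in avg:
        avg[key] = torch.stack([w[key] for w in weight_sets]).mean(dim=0)
    return avg
```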
Despite the corpus of works discussing and addressing salient issues in the FL environment, such as vertical FL [4], the stragglers issue [5], and the heterogeneity in sample space [6], the joint consideration of participants' model heterogeneity and the availability of local agents has eluded the research community. In what follows, these challenges and how they are encountered in real-world scenarios are detailed. **Model Heterogeneity Challenge**: The model heterogeneity refers to the non-uniformity of the feature space fed to the Neural Network (NN) model. The feature space of local agents differs due to two main reasons. First, the feature space of local agents consisting of IoT devices incorporates sensors of different capabilities, potentially capturing different sets of complementary features. This fact is encouraged by the constant enhancement of sensors' battery lifetime, compelling the stakeholders to deploy new devices from various manufacturers, each with its own set of standards for IoT devices [7]. An illustrative example of this challenge is the deployment of two IoT devices with different readings, depicted in Figure 1. Node 2 captures three readings of temperature (\({}^{\circ}\)C), humidity (RH), and pressure (Pa), while Node 1 captures four readings of temperature (\({}^{\circ}\)C), humidity (RH), light intensity (lux), and pressure (Pa).

Fig. 1: Illustrative Example

Second, IoT devices can be equipped with heterogeneous resources, a condition reflected in the generated models in two ways. First, it limits their ability to capture specific ambient features, creating a disparity in the features fed to the NN model, a factor touched upon in the first reason for model heterogeneity. The second way to address this hurdle is capturing the intended environment space using all the available sensors, albeit tailoring the feature engineering technique to the resources available on each IoT device. Both of these methods are meant to address the stragglers problem in the FL paradigm research space. As a result of these factors, each IoT device captures different sets of features that share some features. The downstream effect is generating models with heterogeneous architectures. **Availability of Local Agents**: The availability of local agents and their updated model weights is in jeopardy, due to connectivity and energy constraints of IoT devices [8]. The effect of participants' dropout is aggravated when no redundancy is incorporated into the FL environment, which means that the local models with their unique set of features would not be updated in downtime scenarios [9]. This scenario creates a divergence between the set of available and unavailable models. These combined factors require contingency plans on the aggregation server's end to keep generating models so that each local model, including the unavailable ones, can continue its training process. These factors undermine the application of vanilla FL paradigms that assume the uniformity of model architectures (e.g. all models use the same set of features) and the availability of local agents. The model heterogeneity challenges have been previously addressed in the literature using knowledge distillation, for example in the works of [10, 11, 12]. The knowledge distillation method necessitates the existence of a public dataset sent to each local agent.
Each agent is trained using these public datasets, and their classification results are transmitted to the central server to infer the underlying models' architecture. The literature has adopted different methodologies to combine these results to generate a shared global model. These methods are limited by the need to craft a public dataset and the lack of incorporating unavailability aspects in their formulation. These factors prompt the research question that we answer in the manuscript, "How can we leverage the knowledge of a set of models to fill the training gap of the unavailable model when sharing some of its feature space?". The question combining the environment's assumptions and the outlined challenges is coined "Oblique Federated Learning: Learning from other participants". "Oblique Federated Learning" assumes that the local agents share common features; however, they do not capture the exact same features. "Oblique Federated Learning" is halfway between Horizontal FL, which assumes the agents' feature space uniformity, and Vertical FL, which assumes that the agents' feature space is formed of disjoint sets. **Use Cases**: To bridge the gap between the hypothetical scenarios outlined in the challenges and real-world scenarios, it is imperative to present some use cases that convey these challenges. These use cases target two applications: autonomous vehicles and intrusion detection systems. Radar interference management is a prerequisite for the ubiquitous deployment of radar technology to enable autonomous vehicles to mitigate the dangerous blinding of their radars in dense urban communities [13]. Since radar technology is essential for the realization of autonomous vehicle applications, the blinding phenomenon reflects the non-uniformity of the feature spaces of vehicles in an FL application targeting scene detection. ML-based intrusion detection solutions utilize a set of features representing the network traffic to predict the probability of an attack. However, some sets of these features can be unavailable, due to privacy concerns related to each organization, or uninformative, due to the masking of some features (e.g., IP addresses might be unavailable). These two examples illustrate the pervasiveness of the "Oblique Federated Learning" phenomenon. In this work, we address the "Oblique Federated Learning" phenomenon in a distributed IoT environment, consisting of sets of sensors that gather different environmental features and suffer from availability constraints. These devices are responsible for predicting CO\({}_{2}\) concentrations over a future time horizon. This dataset was chosen since the model heterogeneity is inherent in its non-uniform feature space, which does not necessitate forging hypothetical scenarios. Moreover, this application's utility is reflected in different domains. CO\({}_{2}\) concentrations can act as proxy estimators of occupancy, aiding the Heating, Ventilation, and Air Conditioning (HVAC) systems in their decision-making process. As a result, a more informed HVAC control is obtained, which improves occupants' comfort [14], curbs the spread of COVID-19 [15], and reduces energy consumption [16]. In the studied environment, each IoT device collects some common features with other IoT devices, which means that their corresponding NN models share some neuron combinations. The central aggregation server can leverage this fact to generate updated models for the local agents when they cannot send their updated model weights.
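To make the first-layer mismatch at the heart of "Oblique Federated Learning" concrete, consider a minimal sketch in which two nodes follow the feature sets of the illustrative example in Figure 1; the hidden width of 16 and the ReLU activation are assumptions of the sketch, not the paper's specification.

```python
import torch.nn as nn

HIDDEN = 16  # a shared hidden width; the output layer is also shared

# Node 1 engineers inputs from {temperature, humidity, light, pressure},
# Node 2 from {temperature, humidity, pressure}: the input widths differ.
m1 = nn.Sequential(nn.Linear(4, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 1))
m2 = nn.Sequential(nn.Linear(3, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 1))

# The first-layer weight matrices are (16, 4) and (16, 3): element-wise
# averaging is undefined, even though three input features are shared.
assert m1[0].weight.shape != m2[0].weight.shape
```

Element-wise averaging of the two first-layer matrices is undefined, which is precisely the gap addressed next.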
Toward that end, this paper proposes a novel NN model, termed Correlational FL (CorrFL), inspired by the representational learning literature, Correlational Neural Networks in particular [17], to produce updated NN weights for unavailable IoT devices from the available ones. This paper evaluates the proposed CorrFL method in various use-case scenarios of activity levels associated with CO\({}_{2}\) concentration changes. Under minimal or no activity levels, the distributed IoT devices are trained using conventional FL paradigms. One of the local models becomes unavailable when heightened activity levels are encountered. In this case, the CorrFL is employed to generate models for the absent local agents to demonstrate its ability to mitigate their absence. The generated models' performance is compared to those obtained using only the training process, referred to as the benchmark models. In summary, the contributions of this paper are as follows: * Coin the term "Oblique Federated Learning" to combine model heterogeneity and availability concerns in an FL environment; * Propose a novel NN architecture that expands the Correlational Neural Networks to a multi-view environment, representing different models, and amends its loss function to tailor it to the requirements of the studied environment; * Devise a loss function that incorporates the unavailability of models and maximizes the correlations between the generated models; * Propose an evaluation criterion that combines networking concerns and prediction performance, referred to as data size exchange per one percentage prediction improvement; and, * Introduce the concepts of "delay," representing the period between the start of local agents' training and the start of central server training, and "Model Dispatch Frequency," describing the number of local model weights sent at once, in the FL environment, and highlight their importance for both the network perspective and the accuracy of CO\({}_{2}\) concentration predictions. The rest of the paper is organized as follows. Section II introduces background information pertaining to the proposed approach. Section III details the related work and discusses its limitations in relation to the adopted approach. Section IV explains the components of the devised methodology. Section V outlines the experimental parameters. Section VI explains the use case scenario and its motivation. Section VII details the results. Section VIII concludes the paper. ## II Background A Correlational Neural Network (CorrNet) [17] is a staple implementation in the field of representational learning. CorrNet is built on the premise that data pertaining to a common environment consist of different views correlated by their connection to that environment. For example, a movie can be decomposed into a series of images, audio signals, and subtitles. As such, it is expected that a correlation exists between these views, as they represent modalities of the same movie. Therefore, when one of these modalities is missing, the other views should reconstruct the missing view. The CorrNet architecture and its loss functions reflect these concerns. The different views consist of distinct dimensions. Therefore, the CorrNet projects the views into a common dimensionality. With their encoder and decoder components, Autoencoders (AE) [18] can provide this function by projecting each view into a common subspace. In the CorrNet implementation, the latent space views are added to obtain a common representation of all views.
This common representation is of the same dimension as the latent space. After that, the decoder reconstructs the original views based on a composite loss function. The main requirement of reconstructing an absent view from the other views is incorporated into CorrNet's loss function. The loss function encompasses three main concerns, conveyed by the L1 loss, L2 loss, and L3 loss. The L1 loss is the reconstruction loss, which is similar to AE architectures' loss. The L2 loss is the reconstruction loss of the views when one of them is missing. The number of L2 losses is equivalent to the number of views. Lastly, the L3 loss calculates the correlation between common representations when one of the views is missing and deducts its value from the reconstruction losses. ## III Related Work This section covers works related to the targeted application, global model adaptations to the FL paradigm's shortcomings, and NN similarity inference, to highlight this work's novelty. The early adoption of FL in the literature involved a basic implementation such that each client shares the same feature space and different sample space, referred to as horizontal FL [19]. Horizontal FL has been applied to different use cases in the field of IoT applications, especially in the fields of energy prediction and smart buildings. For example, the work of Saputra _et al._[20] investigates the application of FL to predict Electric Vehicle (EV) energy demand. In the context of smart buildings, the FL paradigm is widely adopted to predict energy consumption. Examples of these applications include [21, 22, 23, 24, 25], which share the same purpose, albeit with some prominent differences in the applied methodology. A common theme in these works is to group local clients to minimize the number of participants or personalize the energy consumption profile. The works by [21, 25] cluster users based on their energy consumption profile, whereas Gholizadeh _et al._[24] groups users using their hyper-parameter optimization results that reveal this similarity. The works [22, 23] plainly predict the energy consumption profiles in different environments. Many works in the literature extend the conventional FL paradigm to adapt to different use cases. This line of research is dominated by works that address the data heterogeneity in the FL environment's sample space by altering the Federated Averaging (FedAvg) [4] aggregation method that determines the weights of the shared model. To that end, Probabilistic Federated Neural Matching (PFNM) [26], Federated Matching (FedMA) [27], FedProx [28], and FedNova [29] contributed to the advancement of the state-of-the-art. However, this work targets the heterogeneity in the feature space, which renders the aforementioned methods unfit for the use case under study. Few works explored the model heterogeneity problem in the context of FL. A common approach combines transfer learning and knowledge distillation to address the non-uniformity of client models. In this approach, the local model weights that are transmitted to the global model are replaced by class outputs obtained using public data. These class outputs are obtained by training each client on public and private data. After that, local models are trained to approach the aggregated global model results. Methods such as FedMD [12], Cronus [11], FedDF [10], and FedDistill [30] follow this strategy with variations related to the knowledge distillation technique.
Cronus [11] incorporates the public and private datasets for training, while FedMD [12] discards the public data after the initial training phase. Cronus and FedMD apply the knowledge distillation on the local agents' side. To mitigate the effects of the changes to the public dataset on the performance of local clients, FedDF [10] performs the distillation on the server side using Generative Adversarial Networks (GANs). The last part discusses the literature that tackles the representational similarity of NNs, which mainly motivates our applied approach. SVCCA [31] and PWCCA [32] are tools that compare two representations of NNs, combining Canonical Correlation Analysis (CCA) and Singular Value Decomposition (SVD). On the one hand, the invariance to affine transformation promotes the use of CCA for comparing NN architectures. On the other hand, SVD determines the most important directions in the original data, which are fed to the CCA to compute the similarity in NN representations. The authors of [27, 33, 34] discuss the permutation invariance of NNs. This invariance suggests that NNs can create versions of the same architecture while permuting the parameters' order. This observation suggests the existence of drastic differences between neurons even when sharing the feature space and architectural design. The study by Li _et al._[33] uncovered the existence of one-to-one and one-to-many correspondence of neurons of the same architecture. This discovery validates the SVCCA approach, in terms of the existence of linear relationships between neurons, but simultaneously demonstrates that each NN creates its own methods of feature engineering through neuron combinations. This fact shows that methods such as SVCCA do not fully capture the spectrum of similarity, and new methods that mirror the non-linearity should be proposed. The surveyed literature exposes some limitations that motivate this manuscript. This paper is the first to address CO\({}_{2}\) predictions in a pre-defined time window application using the FL paradigm. Applications in smart buildings are confined to energy prediction, which demonstrates the novelty of this work and highlights its importance on both economic and health levels. The model heterogeneity approaches dominated by knowledge distillation methods necessitate the existence of public data, related to the problem, to establish common grounds between models of different architectures. Moreover, generating a public dataset that resembles the private set can potentially violate the privacy-preservation aspect of FL. For these reasons, knowledge distillation approaches are limited to theoretical implementations, which undermines their utility in real-world scenarios. Mitigating the absence of models is neglected in the literature, which sheds light on this paper's contribution and insights into this critical problem in the FL paradigm and in the IoT environment in general. Inspired by the methodologies that explore the similarity between NNs and representational learning, our approach proposes CorrFL as a model aggregation method at the central server that mitigates the absence of models and generates highly correlated features. The direct implementation of the CorrNet on the studied use case is obstructed by multiple limitations. First, the original formulation and implementation consider only a use case consisting of two views. As such, it does not fit the multi-view nature of the IoT device models.
Second, the loss function incorporates a weighted correlation between hidden representations to deduct from the sum of reconstruction losses. As a result, the correlation is an auxiliary factor, rather than a central one. To address these shortcomings, the formulations of the L1 and L2 losses are extended to multi-view representations to fit the diverse nature of IoT devices capturing environmental features. Moreover, the formulation of the correlation-based loss function is altered to maximize the correlation between hidden representations. ## IV Proposed Approach: Correlational Federated Learning This section details how multi-view representational learning and correlational analysis are combined with the FL environment (CorrFL) to address model heterogeneity and availability constraints. The processes involved in the realization of the CorrFL are depicted in Figure 2. The explanation focuses on the two subsystems that compose the FL paradigm: the local clients and the central server. The set of all symbols used throughout this section is summarized in Table I. ### _Local Client Training_ The environment under study encompasses different sets of IoT devices, each collecting environmental features to predict CO\({}_{2}\) concentrations. The FL paradigm commences with training local NNs on each IoT node in the environment. This step is depicted on the left side of the dotted line of the upper part of Figure 2, referred to by (1). The raw data of each node undergo common preprocessing and feature engineering procedures. However, due to differences in the environmental features collected by each node, the feature engineering step produces different sets of input features. The differences in features after the feature engineering step of each IoT node are reflected in its corresponding NN architecture. Assuming common NN hidden layers, the input layer is the distinctive property of each local agent model, resulting in differences in the number of weights between the input layer and the hidden layer, and in the input feature combinations in the subsequent hidden layers. The alteration of subsequent hidden layers will be addressed in future work.

\begin{table} \begin{tabular}{c|c} **Symbol** & **Meaning** \\ \hline \(L\) & Number of neurons in the first hidden layer \\ \(n\) & Number of unique models \\ \(mi\) & Neural network model \(i\in n\) \\ \(n_{i}\) & Number of input features for model \(mi\) \\ \(w_{i}\) & \(L\times n_{i}\) matrix of weights between the input layer and the first hidden layer \\ \(\widehat{w_{i}}\) & Reconstructed weights after applying CorrFL \\ \(L_{1}\) & Reconstruction loss \\ \(L_{2}\) & Reconstruction loss when one of the models is absent \\ \(L_{3}\) & Correlation loss \\ \(W_{i}\) & Set of model weights with an absent model \(mi\) \\ \(h\) / \(g\) / \(\Psi\) & Encoding function / decoding function / encoder-decoder \\ \(H_{i}\) & Shared representation of the bottleneck layer when a model \(mi\) is missing \\ \(r_{i}\) & \(r_{i}\in R\), where \(R\) represents the combinations of absent models' representations \\ \hline \end{tabular} \end{table} TABLE I: Methodology Symbols

At the end of a pre-defined training time or a communication cycle, each client sends its NN weights to the central server. The processes taking place in this phase are shown on the right side of the dotted line of the upper part of Figure 2, referred to by (2).
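A small sketch may help fix what is exchanged in this phase and how the server prepares it, anticipating the grouping, FedAvg, and flattening steps described next. The payload layout and the `first_layer_key` naming (tied to an `nn.Sequential` client model) are assumptions of this sketch, not the paper's wire format.

```python
import torch
from collections import defaultdict

def package_update(model, node_id, feature_names):
    """Client side: dispatch the layer weights together with the engineered
    feature list, which lets the server group nodes sharing an input layer
    (the m1, m2, and m3 groups of Figure 2)."""
    return {"node": node_id,
            "features": tuple(feature_names),
            "weights": {k: v.detach().cpu()
                        for k, v in model.state_dict().items()}}

def preprocess_views(updates, first_layer_key="0.weight"):
    """Server side: FedAvg inside each homogeneous group, then flatten the
    averaged first-layer matrix (L x n_i) into the 1-D view fed to CorrFL."""
    groups = defaultdict(list)
    for u in updates:
        groups[u["features"]].append(u["weights"])
    views = {}
    for feats, ws in groups.items():
        first = torch.stack([w[first_layer_key] for w in ws]).mean(dim=0)
        views[feats] = first.flatten()
    return views
```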
At the central server, the CorrFL training begins, which represents the primary contribution of this work. The parameters sent by each node include the weights between the input layer and the hidden layer, the weights between hidden layers, and the weights between the last hidden layer and the output layer. ### _Central Server Aggregation_ The applied processes in this step are shown in the lower part of Figure 2. The central server receives sets of model weights that need to be aggregated and transmitted back to the IoT devices when an unavailability concern arises. The devices are grouped based on their shared input layer, representing the features that resulted from the feature engineering step explained in the previous section. To demonstrate these differences, each model is depicted in Figure 2 with a specific shape, denoted by \(m1\), \(m2\), and \(m3\). The central server's main objective is to mitigate the absence of models by generating them from the existing models. Two stages are implemented to realize this task. The first stage of this procedure is analogous to a preprocessing step. It first aggregates the homogeneous models that share the same feature space, which is achieved using the conventional FedAvg method. FedAvg calculates the element-wise average of the model's weights for each layer. This stage does not present any novelty compared to other approaches in the FL paradigm. Assuming that the number of neurons of the first hidden layer is \(L\) for each model, and the number of input features is \(n_{1}\), \(n_{2}\), and \(n_{3}\) for models \(m1\), \(m2\), and \(m3\), the resultant weight matrices are of size \(L\times n_{1}\), \(L\times n_{2}\), and \(L\times n_{3}\), respectively. The matrices are later flattened to a 1-dimensional array and fed as inputs to the CorrFL. These processes are depicted in part (3) of Figure 2, representing the left side of the dotted line of the lower part of the figure. The challenging aspect of this environment is combining the heterogeneous models while adhering to the central server's main objective. In the studied use case, the objectives can be summarized by the server's need to generate models for IoT nodes that failed to dispatch their model updates due to connectivity issues. The adopted approach, referred to as CorrFL, extends CorrNet's architecture to multi-view representations and alters its loss function to fit the requirements and assumptions of the aggregated models and the environment's availability constraints. A bird's eye view of the CorrFL implementation is depicted in part (4) of Figure 2. In terms of its architecture, CorrFL extends the CorrNet architecture to multi-view representations that are fed to its input layer, keeping the remainder of its architectural structure intact. Here, input, model, and view are used interchangeably. CorrFL incorporates an AE for each view, which enables the projection of each model's weights into a common subspace, also referred to as the latent space. The encoding function of the CorrFL is denoted by \(h\), while the decoding function is referred to as \(g\). Together, the encoding and decoding functions represent the CorrFL transformation functions, denoted by \(\Psi\). Defining CorrFL's loss function dictates the realization of the central server's objectives of mitigating the unavailability of local model updates.
To that end, the loss functions of CorrNet's original formulation, defined as L1 loss, L2 loss, and L3 loss, are extended and altered to meet the envisioned central server's objectives.

Fig. 2: Components of CorrFL

In the original formulation, the L1 loss (\(L_{1}\)) minimizes the differences between the original and the reconstructed model, when all the models are present. This loss does not directly contribute to the objectives set for the central server, but it alters the model weights so that the internal AE converges [17]. The staple reconstruction error is obtained using the mean square error (MSE) that amplifies large errors and diminishes the effect of smaller ones. Since this is the first work that applies AE in the field of model weight reconstruction, MSE is employed to calculate the reconstruction loss. However, a thorough investigation of the available loss functions should be carried out to find the loss function that fits the sensitive nature of the models' weights due to its effect on model performance. Assuming that \(n\) models are present, whereby the weights of each input model are denoted by \(w_{i}\) such that \(i\in n\), and the reconstructed weights are \(\widehat{w_{i}}\), \(L_{1}\) is defined as follows: \[L_{1}=\frac{1}{n}\sum_{i=1}^{n}(w_{i}-\widehat{w_{i}})^{2} \tag{1}\] The \(L_{1}\) alone drives the CorrFL to reconstruct the inputs if they all exist; however, it does not address the models' absence. Therefore, this loss function should be augmented with other losses to fulfill the central server's main objective. The L2 loss (\(L_{2}\)) integrates the model's availability concern into its formulation. When all model weights are present, \(L_{2}\) assumes that one model is not available, which reflects the studied IoT environment. In this scenario, \(L_{2}\) calculates again the reconstruction loss. This step is crucial so that the models are properly reconstructed when one model is missing. Through the formulation of \(L_{2}\), the common representation when a model is absent is obtained. This representation is retained to be used in the formulation of the L3 loss (\(L_{3}\)). The set of inputs, representing the set of weights, is denoted by \(W=\{w_{1},...w_{i},...,w_{n}\}\). The unavailability of a model \(mi\) is emulated by setting its weights \(w_{i}\) to zero. In this scenario, the set of inputs \(W\) with missing weights \(i\) is denoted by \(W_{i}\). The formulation of \(L_{2}\) is as follows: \[L_{2}=\frac{1}{n}\sum_{i=1}^{n}(\Psi(W_{i})-w_{i})^{2} \tag{2}\] The \(L_{2}\) provides, as a by-product, the models' common representation when a model is missing. For the \(W_{i}\) input, with \(w_{i}\) zeroed out, the common representation \(H_{i}\) is obtained as follows: \[H_{i}=\sum_{j=1}^{n}h(w_{j}) \tag{3}\] The data heterogeneity and the intersection of the models' feature spaces produce NN models that can potentially exhibit high correlations in some weights. An option to quantify this correlation is conducting the one-to-one or the one-to-many correspondence analysis on the hidden layers [33]. However, this approach is computationally expensive due to the large number of input weights and the underlying assumption of linearity. Therefore, a correlation-based method that incorporates non-linearity is favourable. The non-linearity is achieved using the AE architecture that maps the inputs with different dimensions into a common latent space.
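Gathering Equations (1)-(3) with the correlation term defined in Equations (4)-(6) just below, a minimal PyTorch sketch of the CorrFL architecture and its composite loss might read as follows. The encoder/decoder sizes (hidden layers of 128 and 32 units, no activations) follow the experimental section; batch and view handling are simplified, so this is a sketch of the technique rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class CorrFL(nn.Module):
    """One encoder/decoder pair per view (model group)."""
    def __init__(self, view_dims, hidden=128, latent=32):
        super().__init__()
        self.enc = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.Linear(hidden, latent))
             for d in view_dims])
        self.dec = nn.ModuleList(
            [nn.Sequential(nn.Linear(latent, hidden), nn.Linear(hidden, d))
             for d in view_dims])

    def common(self, views):
        # H: sum of the per-view latent codes (Eq. 3); absent views are zeroed.
        return sum(e(v) for e, v in zip(self.enc, views))

    def reconstruct(self, views):
        h = self.common(views)
        return [d(h) for d in self.dec]

def corr(a, b, eps=1e-8):
    """Pairwise correlation between two latent codes (Eq. 4)."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (torch.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps)

def corrfl_loss(model, views):
    mse = nn.MSELoss()
    n = len(views)
    # L1: plain reconstruction with all views present (Eq. 1).
    L1 = sum(mse(r, v) for r, v in zip(model.reconstruct(views), views)) / n
    L2, latents = 0.0, []
    for i in range(n):
        masked = [torch.zeros_like(v) if j == i else v
                  for j, v in enumerate(views)]        # emulate an absent model
        L2 = L2 + mse(model.reconstruct(masked)[i], views[i])  # Eq. 2
        latents.append(model.common(masked))           # H_i, retained for L3
    L2 = L2 / n
    # L3: drive the pairwise correlations of the H_i towards 1 (Eq. 5).
    L3 = sum(1 - corr(latents[i], latents[j])
             for i in range(n) for j in range(i + 1, n))
    return L1 + L2 + L3                                # Eq. 6
```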
When a model is absent, it is desired to construct this model from other models, knowing that this unavailable model should incorporate some aspects of the available ones. This requirement can be achieved by maximizing the correlation between common representations when a model is absent (\(H_{i}\)), reflected by \(L_{3}\). Here, the definition of \(L_{3}\) diverges from the L3 loss definition in the original formulation. In particular, the original L3 loss acts as an auxiliary loss that is deducted by a specific factor from the L1 and L2 losses. This formulation suggests that it is a nice-to-have property instead of a fundamental one. Therefore, to restore parity between the different losses and incentivize the production of highly correlated models, a new definition of \(L_{3}\) is proposed. To obtain \(L_{3}\), the correlation between each pair of hidden representations is calculated. The combinations of all these representations are denoted by \(R\). The pairwise correlation, denoted by \(c(r_{i})\) such that \(r_{i}\in R\), whereby \(r_{i}\) is equivalent to the combination of \(i_{1}\) and \(i_{2}\) (\(i_{1}\neq i_{2}\)), is calculated as follows: \[c(r_{i})=\frac{(h(w_{i_{1}})-\overline{h(w_{i_{1}})})\times(h(w_{i_{2}})-\overline{h(w_{i_{2}})})}{\sqrt{(h(w_{i_{1}})-\overline{h(w_{i_{1}})})^{2}\times(h(w_{i_{2}})-\overline{h(w_{i_{2}})})^{2}}} \tag{4}\] \[L_{3}=\sum_{r_{i}\in R}1-c(r_{i}) \tag{5}\] This \(L_{3}\) is an altered version of CorrNet's original L3 loss formulation, which only deducts \(c(r_{i})\) from the L1 and L2 losses. This new version of \(L_{3}\) increases when the correlation is either low or negative and decreases otherwise. This definition drives the production of highly correlated reconstructed model weights, which is in agreement with the heterogeneous environment that typically results in highly correlated model weights. After the formulation of \(L_{1}\), \(L_{2}\), and \(L_{3}\), the loss function \(L\) that directs the CorrFL model is as follows: \[L=L_{1}+L_{2}+L_{3} \tag{6}\] ## V Experimental Procedure This section explains the experimental procedure, covering the dataset used and its processing steps, the feature engineering steps, the neural network architecture, the testing parameters, and the implementation details. ### _Dataset Insights and Preprocessing_ The dataset includes environmental features collected over a year in the Nordic climate of Northern Finland using a host of sensors. Only six sensors are utilized to illustrate the utility of the proposed approach. The set of environmental features captured by each sensor is summarized in Table II.

\begin{table} \begin{tabular}{|c|c|c|} \hline **Model Name** & **IoT Nodes** & **Collected Features** \\ \hline \multirow{4}{*}{m1} & node\_913 & \multirow{4}{*}{\{humidity, temperature, pressure, activity\}} \\ \cline{2-3} & node\_915 & \\ \cline{1-1} \cline{2-3} & node\_916 & \\ \hline m2 & node\_920 & \{humidity, temperature, pressure\} \\ \hline m3 & node\_924 & \{humidity, temperature, pressure, CO\({}_{2}\)\} \\ \hline \end{tabular} \end{table} TABLE II: Feature Distribution among IoT Nodes

The activity level is calculated by aggregating the movement levels in each five-second interval for a one-minute granularity. Other features are captured with one-minute granularity. The dataset is collected in 2 conference rooms that can fit 12 people and 11 cubicles that can fit 2 people. The CorrFL methodology is implemented in one of the conference rooms that fit 12 people, referred to as room00.
For this study, the data corresponding to each sensor was first sorted based on its timestamp [35]. Then, the data are aligned to start at the same timestamp and then sampled with one-minute granularity. Lastly, the gaps in data caused by communication issues are mitigated by interpolating the missing data using each feature's median. This way, with every communication round, the client models are trained using the same amount of data, so no weighted approach is applied. ### _Feature Engineering and Neural Network Architecture_ The original work that made the dataset available for this study explored different supervised ML techniques, including NN models, to predict the CO\({}_{2}\) concentrations in a future time horizon using data of the history time window [36]. Since only \(m3\)-type sensors involve the collection of CO\({}_{2}\) concentrations, the collected values are shared between all participants. This assumption follows the trend in vertical FL [4], whereby all the local agents own the labels, in this case, the CO\({}_{2}\) concentrations. The paper that made available the dataset used throughout this manuscript experimented with different feature combinations, with the goal of enhancing the CO\({}_{2}\) predictions over a future time horizon [36]. The combination of lagged versions of environmental features over the history time window, in addition to the difference in values of each feature between the end and the start of the history time window, produced satisfactory results. This feature engineering step incorporates time dependencies into the NN models [37, 38]. The input layer size specific to each sensor differs depending on the collected features. One hidden layer with 16 neurons augments the input layer. This layer is connected to a single output layer that predicts the CO\({}_{2}\) concentrations. As highlighted in the methodology section, the input layer is the source of model heterogeneity, as the other layers are common between different models. This architecture was chosen because it produced satisfactory results in the original study, and it is a good starting point to benchmark the CorrFL before it is extended to more pervasive and heterogeneous architectures. A single history and future time window of 5 minutes is used to serve as proof of concept for CorrFL. ### _Testing Parameters_ The explanation of the considered parameters is divided based on the sub-systems of the FL environment. On the client side, many parameters related to the environment and the NN can be optimized. The NN's performance can be improved by applying hyper-parameter optimization, encompassing the number of hidden layers, the activation functions, and the learning rate [39]. This process is deemed not essential to the primary purpose of the methodology, and it is left for future work or keen practitioners to explore. The more important concerns pertain to the communication cycles (\(CC\)) and the number of model weights sent at once with each \(CC\), referred to as the Model Dispatch Frequency (\(MDF\)). These parameters determine the amount of data fed to the CorrFL model and dictate its convergence. Accordingly, a prolonged grid search is required to find the optimal combinations of these parameters. After applying some filtration on outlier instances of environmental features, the remainder of the dataset includes 381,419 data points, which translates to around 265 days.
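Before turning to the parameter choices, a hedged sketch of the per-client feature engineering and architecture just described may help: lagged values over the 5-minute history window, the end-minus-start difference per feature, and a one-hidden-layer network of 16 neurons. The column name `co2`, the ReLU activation, and the pandas layout are assumptions of this sketch, not specifications from the paper.

```python
import pandas as pd
import torch.nn as nn

def engineer_features(df, features, history=5, horizon=5):
    """Lag each feature over the history window and add the end-minus-start
    difference; the CO2 column supplies the future-horizon target."""
    out = pd.DataFrame(index=df.index)
    for f in features:
        for lag in range(history):
            out[f"{f}_t-{lag}"] = df[f].shift(lag)
        out[f"{f}_diff"] = df[f] - df[f].shift(history - 1)
    out["co2_target"] = df["co2"].shift(-horizon)
    return out.dropna()

def make_client_model(n_inputs):
    """One hidden layer of 16 neurons and a single CO2 output; the input
    width n_inputs differs from one IoT node to another."""
    return nn.Sequential(nn.Linear(n_inputs, 16), nn.ReLU(), nn.Linear(16, 1))
```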
To reduce the parameter search space, each \(CC\) is assumed to be equal to 14 days, which means that local models train on 20,160 data points before dispatching their accumulated model weights. The batch size of the training data is assumed to be 8. Investigating the effect of \(CC\) and the batch size is left for future work. The CorrFL evaluation experiments use \(MDF\) values of 5, 10, and 15, which means that model weights are collected at every 5th, 10th, or 15th batch. The time equivalent of these values is a model dispatch every 40, 80, and 120 minutes. The delay \(d\) between the start of the training process on the client side and the server side is also a salient concern. The parameter \(d\) here dictates the amount of training data accumulated to train the data-intensive CorrFL architecture and to stabilize the local agents' training. The delay \(d\) represents a multiple of \(cc\). \(W_{cc_{i}}\) is the set of weights at communication cycle \(cc_{i}\) such that \(cc_{i}\in CC\). The input fed to CorrFL is as follows: \[\bigcup_{i=1}^{d}W_{cc_{i}} \tag{7}\] While the research community has investigated the stability of NN learning [40, 41], a consensus is yet to be reached about concrete analytical methods that quantify the time when this stability is attained. Therefore, in this experimental procedure, empirical evidence is key to identifying the point of relative stability that determines the \(d\) parameter. Each NN is initialized with a random set of weights, obtained from the central server. Therefore, a reasonable \(d\) starts with at least the first epoch. During the first epoch, the local NNs learn different facets of the training data, allowing the weights to be altered to match the data trend. As a result, each local model exits the randomness of weight initialization to weights that are reflective of the underlying heterogeneous data distributions. While sending the weight updates after the first learning epoch is based on theoretical assumptions, delaying this process beyond that point is predicated on empirical evidence. Analyzing the effect of \(d\) on model performance is crucial to determine the utility of CorrFL and its capability to produce good results under different environments.

\begin{table} \begin{tabular}{c|c|c} **Symbol** & **Meaning** & **Values** \\ \hline \(CC\) & Communication cycles & 15 per epoch \\ \(d\) & Delay & \(\{1,5,10,15\}\) \\ \(MDF\) & Model Dispatch Frequency & \(\{5,10,15\}\) \\ \(VE\) & Validation Epochs & \(1\to 10\) \\ \hline \end{tabular} \end{table} TABLE III: Testing Parameters

Therefore, delays of one, five, ten, and fifteen epochs are assumed, equivalent to \(15\), \(75\), \(150\), and \(225\) \(CC\). As for the CorrFL architecture, a deep encoder/decoder with two hidden layers of 128 and 32 neurons, respectively, and no activation functions is used. The developed NN models and evaluation criteria were built using PyTorch [42] Python libraries. The developed models were evaluated on a Windows 10 PC with a 3.00 GHz 24-Core AMD Threadripper processor, 128 GB of RAM, and an 8 GB Nvidia GeForce RTX 3060 Ti GPU. The code is made available on the GitHub repository1. Footnote 1: [https://github.com/Western-OC2-Lab/CorrFL](https://github.com/Western-OC2-Lab/CorrFL) ## VI Use Case and Evaluation Criteria This section describes the use case to evaluate the CorrFL. It first explains the environment under study, the motivations for applying this scenario, how this scenario transpires, and the evaluation criteria.
### _Environment Description_ Before delving into the details of the adopted use case to evaluate the CorrFL, it is important to provide a bird's-eye view of the interactions occurring between the local models and the server applying the CorrFL approach. Figure 3 depicts the studied environment and follows the notation established throughout the paper. The sets of IoT devices are grouped based on their collected environmental features. In the illustration, these groups are referred to according to their corresponding local model weights \(m1\), \(m2\), and \(m3\). Figure 3 shows three connectivity links that can experience issues that affect the FL processes and jeopardize the availability of each set of sensors. The first link, denoted by 1, connects the set of IoT nodes to a gateway. The second link, denoted by 2, connects the gateway to the central server. Links 1 and 2 are referred to as the uplinks. On the other hand, the links that connect the server to the gateway and the gateway to each set of nodes are denoted by 3 and referred to as the downlink. In this paper, connectivity issues are encountered on links 1 and 2, whereas the downlink functions normally. The cut in connectivity through link 1 or link 2 is assumed to be long enough that the ambient environment undergoes a major shift in its properties and underlying relationships. Use cases whereby intermittent connectivity issues are encountered are out of the scope of this paper. ### _Motivation_ The CorrFL approach is evaluated when two events transpire. The first event refers to the occurrence of connectivity issues for one of the model sets. The second event takes place after this disconnection, when the characteristics of the environment, described by one or more environmental features, change drastically. As such, the CorrFL will be tasked with producing model weights of the unavailable IoT devices from the updated models of the available ones. However, it is important to propose a plausible scenario relevant to the studied office environment and its effect on CO\({}_{2}\) predictions. Measuring activity levels can act as a proxy indicator of occupancy, which has shown a strong association with the variation of CO\({}_{2}\) concentrations [43]. In a large conference room that fits 12 people, radical changes can be experienced in occupancy. Since these rooms are dedicated to large meetings that are rarely conducted, it is expected that these rooms are left unoccupied most of the time. Team and executive meetings are rare events, which further enforces the premise of the domination of low-occupancy events. Moreover, the emergence of remote work facilitated by the recent pandemic has accentuated this trend [44], which degraded the occupancy prediction models developed before the pandemic. This trend of low occupancy is also observed in the dataset used in this paper. To showcase this trend, Figure 4 depicts the distribution of activity levels greater than 0 over the subset of sensors capturing this environmental feature. The percentage of data with the corresponding activity level is shown on top of each bar. As expected, the dataset is dominated by instances with no activity levels, as the percentage of data with any activity level is in the range of 5-8%. Moreover, extreme activity levels are more frequent in such an environment. Activity levels in the upper half of the range (8-12) outnumber those in the lower half.
This observation aligns with the main function of the conference room, which is only used for larger meetings. The combination of the extracted observations and the post-pandemic office environment engenders the perfect recipe for evaluating the CorrFL approach. For this manuscript, the adopted occupancy use case is considered for activity levels that are above 7. This threshold balances establishing a rare event and supplying enough data for any ML model. Since each sensor monitors a specific aspect of the environment, it is unlikely that the individual sensors would capture the same activity levels in their vicinity. Therefore, the common timestamps when the studied use case condition is fulfilled are rare, if any exist, which puts any ML model at a disadvantage. As a result, the union of these timestamps is considered, which provides more data, but would include some aspects of previous data characteristics that are detached from the adopted use case. Overall, the total number of timestamps resulting from this process constitutes 15% of the total amount of data, enough for 3 \(CC\).

Fig. 3: Bird's Eye View of the Use Case

### _Use Case in Action_ The data is split into a training dataset and a testing dataset. The training dataset represents the data under normal conditions, whereby the activity levels are under the pre-defined threshold occupancy levels. On the other hand, the testing dataset includes the data with occupancy conditions corresponding to activity levels that are above 7. In the training phase, each local agent is trained using its local data and ML models. Meanwhile, the CorrFL model is progressively trained by the weights generated by each local model, depending on the \(MDF\) parameter. During the training process, it is assumed that no availability issues are experienced and all links 1, 2, and 3 are fully functioning. CorrFL's evaluation process begins when one set of IoT devices is absent, caused by the disconnection of link 1 or 2. In this manuscript, set \(m3\) is assumed to experience uplink connection downtime but is still able to send the new CO\({}_{2}\) concentrations required for the predictions of other models. Here, the environment flips into heightened activity-level conditions. The testing set is split into validation and testing sets. Each of the available sets of IoT devices, denoted by \(m1\) and \(m2\), continues training on its respective validation set. This process is denoted by 4 in Figure 3. The number of times the available models train on the validation set before the CorrFL sends the updated weights \(\widehat{w_{3}}\), denoted by 5 in Figure 3, is referred to as the validation epoch (\(VE\)), which represents an additional parameter to experiment with. The \(VE\) can also represent the number of training cycles that the model \(m3\) is missing and that CorrFL is trying to compensate for. Therefore, the analysis of the effect of \(VE\) translates to the effect of a model being missed over multiple training cycles. The \(\widehat{w_{3}}\), representing the generated first-layer weights of \(m3\), is combined with \(m3\)'s second-layer weights obtained solely from the training set. Together, the newly created model is referred to as a CorrFL model or \(m3\_CorrFL\). During the validation process, the CorrFL uses its pre-trained models to generate the missing model, and no training of CorrFL is executed. During the testing phase, the available local models only predict the CO\({}_{2}\) concentrations, and no training process is involved on their end.
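A minimal sketch of how \(m3\_CorrFL\) could be assembled from the generated first-layer weights follows, assuming the `nn.Sequential` topology sketched earlier; the paper does not specify how the first-layer bias is treated during reconstruction, so it is left untouched here.

```python
import copy
import torch

def assemble_m3_corrfl(m3_benchmark, w3_hat_flat):
    """Insert the CorrFL-generated first-layer weights (w3_hat) into a copy
    of the benchmark model, keeping the deeper layers learned before the
    outage (the second layer and the output layer)."""
    m3 = copy.deepcopy(m3_benchmark)
    first = m3[0]                                  # nn.Linear(n_3, 16)
    with torch.no_grad():
        first.weight.copy_(w3_hat_flat.view_as(first.weight))
    return m3
```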
The testing parameters are summarized in Table III. Figure 5 summarizes the steps involved in the training and validation phases of CorrFL.

Fig. 4: Activity Level Distribution between Sensors

Fig. 5: CorrFL Flowchart

### _Evaluation Criteria_ The evaluation process is split into three phases: the training phase, the validation phase, and the testing phase. Regardless of the ongoing phase, the developed models' predictions are evaluated using the Mean Absolute Error (MAE). It is important to establish the connection between the MAE metric and its contribution to the reduction of HVAC energy consumption. The prediction of CO\({}_{2}\) concentration is instrumental in the activation of HVAC systems. While small deviations in predictions are insignificant in the short term, they can contribute to the superfluous activation of HVAC systems, increasing their overall energy consumption. As such, the envisioned application values small and large deviations equally, which favours the employment of MAE to measure the accuracy of CO\({}_{2}\) predictions. During the validation phase, the CorrFL generates an updated model \(\widehat{w_{3}}\) for the set \(m3\) of unavailable IoT devices. This model is compared to the model trained only on the training set, referred to as \(m3\_benchmark\). MAE in CO\({}_{2}\) predictions is the basis for this comparison. The second evaluation criterion encompasses networking concerns pertaining to the FL environment. The additional exchange of models during the validation process presents a considerable communication burden on the networking system. Therefore, to quantify this burden, the number of megabytes (MBs) per percentage of improvement (PI) between the CorrFL approach and the benchmark model weights is calculated. The total memory usage of the CorrFL during the validation phase is denoted by \(U\). The MAE of CO\({}_{2}\) predictions of the CorrFL approach is denoted by \(p_{C}\), and the MAE of CO\({}_{2}\) predictions of the benchmark model is denoted by \(p_{B}\). The improvement ratio is \(IR\), and the PI formulations are as follows: \[IR=100\times\frac{(p_{C}-p_{B})}{p_{B}} \tag{8}\] \[PI=\frac{U}{IR} \tag{9}\] ## VII Results This section provides a detailed analysis and discussion of the CorrFL performance under different experimental parameters, draws conclusions, and suggests future directions. ### _CorrFL Evaluation_ This section discusses the general results of applying the CorrFL approach to the explained scenario. The generality refers to simply averaging the \(m3\_CorrFL\) results over all configurations, including the \(d\), the \(VE\), and the \(MDF\). This comparison allows extracting the general trend of CorrFL's approach, regardless of the underlying assumptions about the convergence of local models. Figure 6 shows the average results of the CorrFL vs. the benchmark model for the validation and testing datasets. The inclusion of the validation set for evaluating the CorrFL approach assesses the available local models' ability to incorporate aspects of the high activity level use case into their models. This capability is reflected by altering their weights so that the weights yielded for the absent model are generalized over the whole testing set. With regard to the benchmark model, no stark differences exist in the average and the standard deviation of its performance on the validation and the testing set, with a slight edge for the testing set.
Despite the uniform method adopted to split the validation and testing sets, these results show that the testing set has more properties in common with the training set than the validation set. The observations on the benchmark model are reversed for the CorrFL models. In particular, the generated models perform better on the validation set than on the test set. This advantage is expected because the available models are trained on the validation set, which yields model weights that are better adjusted to this set. Additionally, the worse testing results align with the observations extracted in relation to the benchmark models. The training of the available models on the validation set incorporated some of its aspects in the updated model weights, which suggests that previously learned environment dynamics are gradually being replaced. This fact, combined with the premise of the existence of some training data aspects in the testing set, explains the results obtained by the CorrFL models on the testing set. This section alludes to the superiority of the CorrFL models over the benchmark models for the heightened activity level use case. However, the effects of different configuration parameters on the convergence of local models and the quality of CorrFL models are concealed by only reporting the average MAE. Moreover, the relatively large standard deviations show that there are more interesting insights about the performance of the generated and benchmark models. The PI criterion is not included in this section because it depends only on the validation dataset. Analyzing these parameters is of paramount importance, and concluding their effect opens many research questions for keen practitioners to answer. ### _Effect of delay_ This subsection details the effect of the delay parameter on the MAE and PI of the \(m3\_CorrFL\) in comparison with the \(m3\_benchmark\). Since the number of collected models and the \(VE\) contribute to performance variations, the analysis that follows alters \(d\), keeping other parameters the same. This subsection begins by first discussing the effect of \(d\) on the performance of the local models and, as a result, its contribution to \(m3\_CorrFL\) performance.

Fig. 6: Effect of CorrFL

Fig. 7: MAE training and testing for Delay = 1

Figures 7, 8, and 9 show the variations of MAE training and testing with the delay, considering a \(VE=1\) and \(MDF=5\). The IoT nodes are grouped as per the description and notation introduced in Table II. The performance of the benchmark model on the validation set is highlighted using a circle in Figures 7, 8, and 9. Moreover, each figure of MAE testing includes the performance results of the benchmark model and the models generated by CorrFL. Furthermore, the results with \(d=1\) hint that the CorrFL approach can generate good results at the start of the training process, which means that not a lot of dispatched model weights are needed to obtain good results. The observations of the delay effect open the door for applying the CorrFL model to address different concerns in the FL environment, ranging from stragglers to intermittent connectivity issues. ### _Effect of Validation Epochs_ After analyzing the effect of \(d\) on the performance of the \(m3\_benchmark\) and \(m3\_CorrFL\) models, the next step discusses the results of varying the \(VE\). \(VE\) determines how much of the novel environment is incorporated into the model weights of available devices.
This factor has a downstream effect on the \(m3\_CorrFL\) model and on the utility of the adopted approach in the studied use case. Figures 10(a) and 10(b) illustrate a sample of the effect of \(VE\) on the performance of the available models, the \(m3\_benchmark\), and the \(m3\_CorrFL\) model when applied to the validation set. Additionally, the effect of \(VE\) on \(m3\), if it were available, is denoted by \(m3\) in the figure. The inclusion of the \(m3\) and \(m3\_benchmark\) results is instrumental in providing the upper and lower performance bounds. In particular, the results in Figure 10(a) are common among different combinations of the \(VE\) and \(d\) parameters, whereas the phenomenon observed in Figure 10(b) represents an outlier. Both cases are included because they provide interesting insights and trigger intriguing discussions. Figure 10(a) depicts the variation of MAE with the \(VE\) for \(d=10\). With regard to the available models \(m1\) and \(m2\), their respective MAE decreases slightly with the increase in the number of epochs. A similar observation is drawn for \(m3\). The biggest drop in MAE is attained after the first epoch, suggesting that the model weights are adjusted to the novel environment. After the first epoch, no noticeable gains in performance are acquired. Similar observations are extracted for Figure 10(b); however, a more prominent decrease is observed for \(m1\), to an MAE that resembles the one in Figure 10(a). The dynamics of the CorrFL model performance are slightly different as a result of the increase in \(VE\). In Figure 10(a), the \(m3\_CorrFL\) performance steadily improves and stabilizes at epoch 5, deteriorating slightly after that epoch. This variation shows that beyond a specific epoch, the generated CorrFL weights are less reflective of the novel environment. A minimal dissimilarity in this dynamic is observed for Figure 10(b), whereby the inflection point is at epoch 2. Therefore, a sweet spot exists that balances performance gains while avoiding a significant increase in PI. As the number of \(VE\) increases, the PI increases, which diminishes any performance improvement by the \(m3\_CorrFL\). The inclusion of \(m3\) in Figure 10(a) shows that there is room for performance improvement for the generated CorrFL model. This improvement can be achieved by applying a hyper-parameter optimization procedure to the CorrFL model and by experimenting with a wider range of parameters, an aspect that was not touched upon in the current manuscript. The CorrFL-generated models significantly outperformed the benchmark models in all combinations, except for the one depicted in Figure 10(b). Under the same assumptions of \(d\) and \(VE\), the CorrFL models have better performance, as illustrated in Figure 8. This outlier can be attributed to the possible slow convergence of the available models during the training phase. One of the models may have been stuck in a local minimum, which engendered model weights of low quality and minimal correlation with respect to the other models. Additionally, the \(m3\_benchmark\) outperforms all of its counterparts, which implies that the model converged to its best performance, a condition predicated on the initialization of weights. Moreover, in this case, the underlying assumption is that each model was trained on the training data for 5 epochs (\(d=5\)). This case is one of the many possible delays that can be encountered in such an environment.
The provided discussion misses a very important factor in the field of model retraining during the validation phase, which can explain the underwhelming results in some cases. This phenomenon is referred to as catastrophic forgetting [45], which, when projected onto the studied environment, assumes that the model forgot the dynamics under normal conditions. Some of these dynamics can be successfully translated to high activity level conditions; however, the training on the validation set contributed to forgetting these dynamics. This phenomenon is not studied in this manuscript, and it will be investigated in future work.

Fig. 10: Variation of MAE Validation with different Validation Epochs

### _Effect of Model Dispatch Frequency_

The importance of analyzing the effect of the Model Dispatch Frequency (\(MDF\)) stems from its impact on the training data size used by the CorrFL model. As such, studying this impact sheds light on the CorrFL model's ability to train its AEs so that they neither overfit nor underfit the models. To that end, Table V only includes the results for \(d=1\), given that this delay produces the least amount of training data compared to other delay values. Therefore, analyzing the alteration of this parameter allows for a better understanding of the amount of data required to produce satisfactory results for the \(m3\_CorrFL\). An additional advantage is gauging the networking resources to allocate for the realization of the adopted architecture, especially if the studied IoT devices are deployed in a harsh environment with limited access to bandwidth resources.

| **Delay** | **Model Dispatch Frequency** | **MAE** | **PI (Mb)** |
| --- | --- | --- | --- |
| 1 | 5 | 167.53 | 0.09 |
| 1 | 10 | 225.52 | 0.11 |
| 1 | 15 | 155.66 | 0.03 |

TABLE V: Effect of Model Dispatch Frequency on MAE and PI for \(VE=5\)

Table V summarizes the variation of MAE and PI with respect to changing the \(MDF\). While the table only involves \(d=1\), similar observations are reported for other delays. There is no prominent performance trend with the increase of \(MDF\). However, the best-performing models are obtained with \(MDF=15\), outperforming other \(MDF\) values in both the MAE and PI parameters. This observation can be attributed to two main factors. First, decreasing the \(MDF\) means an automatic increase in the data size, which the CorrFL can easily overfit, despite requiring a substantial amount of data to converge. Second, increasing the \(MDF\) results in the collection of coarser model weights from each IoT node. This modification yields more disparate models, instead of the repetitive ones in the finer-grained scenario. Under these circumstances, the CorrFL model is fed with a more diverse dataset that favours the realization of weights that model both the normal and the high activity level environments.

### _Correlation Analysis of CorrFL_

This subsection is devoted to analyzing the correlation between the hidden representations \(H_{i}\) obtained from the CorrFL model. This analysis highlights that the CorrFL approach has contributions that extend beyond the FL paradigm into NN similarity inference. Additionally, it presents deeper insights into the similar trajectory that models with a shared feature space follow in their training process. Furthermore, in terms of relevance to FL, it points to the possibility of inferring the data heterogeneity aspects of the models and to the prospect of model weight compression. Since the models share some of the feature space, it is expected that they share some neuron combinations. However, previous studies have shown that each NN forms its own unique set of features to realize its desired task.
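As a rough illustration of how such correlations can be computed, the sketch below measures the Pearson correlation between two common hidden representations produced for the same batch of collected model weights; the variable names are hypothetical, and this is an assumption-laden sketch rather than the paper's implementation.

```python
import numpy as np


def hidden_correlation(h_i: np.ndarray, h_j: np.ndarray) -> float:
    """Pearson correlation between two flattened hidden representations H_i and H_j."""
    return float(np.corrcoef(h_i.ravel(), h_j.ravel())[0, 1])


# Hypothetical tracking of one curve of Figure 11 across 783 training iterations:
# corr_curve = [hidden_correlation(h1[t], h2[t]) for t in range(783)]
```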
Figure 11 depicts the variation of the common representation \(H_{i}\) for the CorrFL model across one epoch of training, encompassing 783 iterations. The notation in the figure follows the pattern \(H-i\_H-j\), such that \(i\) and \(j\) are the indices of the absent models. The highest correlation is observed for the combination encompassing \(W_{1}\) and \(W_{2}\). This result means that, when these models are absent, the remaining models are capable of capturing common representations. This commonality shows that the data gathered by each IoT device is partly redundant and that model weight compression can be applied. These trends are less magnified in the two other cases. Specifically, the common representations when \(m1\) or \(m3\) are absent have lower correlations. This means that their absence provides unique information that is captured by the common representation.

Fig. 11: Correlation Analysis

### _Time and Space Complexity Analysis_

This section analyzes the training time of the local agents' models and of the server model represented by the CorrFL model, as well as the training data size per delay (\(d\)) and the inference data size for the CorrFL model. The delay (\(d\)) and the Model Dispatch Frequency (\(MDF\)) are the main contributing factors to the variation in training times. On the one hand, \(d\) determines the amount of training data for the CorrFL model and the number of epochs for the local agents. On the other hand, \(MDF\) controls the number of models sent at once to train the CorrFL model, which also contributes to the increase or shrinkage of its corresponding training data. As a result, the training time of the local agents is analyzed in light of the parameter \(d\), while the training time of the CorrFL model is investigated based on \(d\) and \(MDF\). Figures 12(a) and 12(b) depict the variations in training time with respect to \(MDF\) and \(d\). Figure 12(a) shows the effect of \(d\) on the training time with a constant \(MDF=5\), together with the percentage of CorrFL's training time relative to the sum of the training times of the Local Agent and CorrFL models. The local agents' training can be executed in parallel, which means that the depicted values represent the average over all the involved local agents. As expected, the training time of CorrFL and the local agents increases with the increase of \(d\). This positive correlation is attributed to the increase in training data for both parties. A noticeable drop in the contribution of CorrFL's training time is observed as \(d\) increases to 5, followed by a slight increase for \(d=15\) epochs. This drop is caused by the non-linearity in the increase in training data between the CorrFL and the local agents' models. When \(d\) increases from 1 to 5, the local agents' training data surges significantly, compared to a lesser increase for the CorrFL models. This trend is curbed with the further increase in \(d\) from 5 to 15, which means that the proportional increase in training data does not change significantly. Figure 12(b) shows the variation of CorrFL's training time for \(d=5\) epochs and \(MDF\) values of 5, 10, and 15. Since CorrFL's training data shrink with less frequent model dispatch, the training time is expected to drop with the increase in \(MDF\).

The inference times for the local agent models and the CorrFL models are negligible, within 0.02 seconds for the testing phase, which highlights the applicability of the defined approach. As for the space complexity, the analysis covers the data size required for the training and inference of the CorrFL model. This model takes three inputs, with 448, 336, and 448 features respectively, representing the weights between the input layer and the first hidden layer. Inference requires a single data point for each input, amounting to 0.009 Mbs of data. Similar to CorrFL's training time, the training data size depends on \(d\) and \(MDF\). In that regard, the training data size is reported per epoch (\(d=1\)) and for \(MDF=5\). For \(MDF=5\) and \(d=1\), a single \(CC\) constituting 20,160 data points generates 504 weights for each model. In a single epoch (\(d=1\)), \(CC=15\), which is equivalent to 7560 data points. Considering that each CorrFL training iteration requires three models, the input data are represented by matrices of size \(7560\times 448\), \(7560\times 336\), and \(7560\times 448\) for models \(m1\), \(m2\), and \(m3\), respectively. As a result, the size of the training data with the defined configuration is equivalent to \(74.51\) Mbs of exchanged data. The extrapolation of the data size with respect to different \(d\) and \(MDF\) is a straightforward exercise. Accordingly, it is imperative to calibrate the \(MDF\) parameter based on the computational and communication resources available in the studied environment.
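The quoted training-data size can be verified with a few lines of arithmetic; the sketch below assumes 64-bit floating-point weight values, an assumption the text does not state explicitly.

```python
# Size of CorrFL's training data for d = 1 and MDF = 5.
rows = 15 * 504                 # CC = 15 dispatches of 504 weight vectors each = 7560
features = 448 + 336 + 448      # input widths of models m1, m2 and m3
bytes_per_value = 8             # assuming 64-bit floats
size_mbs = rows * features * bytes_per_value / 1e6
print(size_mbs)                 # -> 74.51136, matching the reported 74.51 Mbs
```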
## VIII Conclusion

The distributed IoT environment presents a challenge to the conventional centralized approaches to gathering data and applying ML techniques for automation systems. Therefore, FL is proposed as a collaborative method to address the salient issues of the centralized approach. However, the heterogeneity of local models and the availability constraints encountered in real-world conditions hamper the realization of the envisioned FL models. Together, these challenges and the research questions that follow are referred to as "Oblique Federated Learning". This work devises the CorrFL approach to jointly address these practical hurdles. CorrFL is applied to a use case involving the prediction of CO\({}_{2}\) concentrations in a specific time horizon. The adopted approach is evaluated in a use case with a sudden increase in occupants' activity levels, which is directly linked to CO\({}_{2}\) predictions, and with the unavailability of one of the models. The results show that the model weights produced by CorrFL outperform the benchmark models in CO\({}_{2}\) predictions. While the initial results are satisfactory, the evaluation of this approach spawned many research questions. These questions include the optimization of different parameters and addressing the possibility of catastrophic forgetting. Future work will address all of these questions.
2305.06745
Investigating the generative dynamics of energy-based neural networks
Generative neural networks can produce data samples according to the statistical properties of their training distribution. This feature can be used to test modern computational neuroscience hypotheses suggesting that spontaneous brain activity is partially supported by top-down generative processing. A widely studied class of generative models is that of Restricted Boltzmann Machines (RBMs), which can be used as building blocks for unsupervised deep learning architectures. In this work, we systematically explore the generative dynamics of RBMs, characterizing the number of states visited during top-down sampling and investigating whether the heterogeneity of visited attractors could be increased by starting the generation process from biased hidden states. By considering an RBM trained on a classic dataset of handwritten digits, we show that the capacity to produce diverse data prototypes can be increased by initiating top-down sampling from chimera states, which encode high-level visual features of multiple digits. We also found that the model is not capable of transitioning between all possible digit states within a single generation trajectory, suggesting that the top-down dynamics is heavily constrained by the shape of the energy function.
Lorenzo Tausani, Alberto Testolin, Marco Zorzi
2023-05-11T12:05:40Z
http://arxiv.org/abs/2305.06745v1
# Investigating the generative dynamics of energy-based neural networks ###### Abstract Generative neural networks can produce data samples according to the statistical properties of their training distribution. This feature can be used to test modern computational neuroscience hypotheses suggesting that spontaneous brain activity is partially supported by top-down generative processing. A widely studied class of generative models is that of Restricted Boltzmann Machines (RBMs), which can be used as building blocks for unsupervised deep learning architectures. In this work, we systematically explore the generative dynamics of RBMs, characterizing the number of states visited during top-down sampling and investigating whether the heterogeneity of visited attractors could be increased by starting the generation process from biased hidden states. By considering an RBM trained on a classic dataset of handwritten digits, we show that the capacity to produce diverse data prototypes can be increased by initiating top-down sampling from chimera states, which encode high-level visual features of multiple digits. We also found that the model is not capable of transitioning between all possible digit states within a single generation trajectory, suggesting that the top-down dynamics is heavily constrained by the shape of the energy function. Keywords: Energy-based models · Spontaneous brain activity · Generative models ## 1 Introduction One frontier of modern neuroscience is understanding the so-called _spontaneous brain activity_, which arises when the brain is not engaged in any specific task [1]. This intrinsic activity accounts for most of the brain's energy consumption [2], and has been studied using electrophysiological recordings [3], electroencephalography [4] and functional magnetic resonance imaging [5]. A recently proposed computational framework [6] suggests that spontaneous activity could be interpreted as top-down computations that occur in _generative models_, whose goal is to estimate the latent factors underlying the observed data distribution [7]. This framework entails a strong connection between spontaneous and task-related brain activity: when performing a task, the generative model would focus on maximizing accuracy in the task of interest, while during rest the model would reproduce task-related activation patterns and use them for the computation of generic spatiotemporal priors that summarize a large variety of task representations with a low dimensionality [6]. This is in agreement with modeling work suggesting that the brain at rest is in a state of maximum metastability [8], where brain regions are organized into quasi-synchronous activity, interrupted by periods of segregation, without getting caught in attractor states [9]. Deep learning models are increasingly used to simulate the activity of biological brains and explore the principles of neural computation [10, 11]. For example, deep networks have been used to reproduce some functional properties of cortical processing, particularly in the visual system [12], as well as to simulate a variety of cognitive functions (e.g., [13, 14, 15]) and their progressive development [16, 17]. However, it is not well understood whether existing deep learning architectures could capture key signatures of spontaneous brain activity.
Here we propose to investigate the (spontaneous) generative dynamics of a well-known class of generative models called Restricted Boltzmann Machines (RBMs), which are a particular type of energy-based neural networks rooted in statistical physics [18]. The RBM is an undirected graphical model formed by two layers of symmetrically connected units. Visible units encode the data (e.g., pixels of an image), whereas hidden units discover latent features through unsupervised generative learning [18]. In RBMs, sampling from the hidden states leads to generating visible states that correspond to trained patterns, but these configurations represent local energy minima (i.e., attractors) that are difficult to escape. Indeed, large energy barriers need to be crossed to go from one (stable) visible state to another, which makes these transitions very difficult [19]. Our approach aims at finding constrained initializations of hidden states that could induce the network into metastable sample generation, thus simulating the dynamics of spontaneous activity in the brain. We quantify this as the number of digit states explored in a generation round, identified by a trained neural network classifier, while avoiding getting caught in attractor states. In the first set of simulations, we exploit the method described in [14] to sample visual patterns starting from hidden states derived by inverting a classifier trained to map internal representations into one-hot encoded labels. Next, we describe two variations of the original method that combine features of different digits to produce "biased" hidden states away from attractor basins, which should be capable of exploring more states during the generation process. Our results indicate that such biased states indeed increase state exploration compared to classical label biasing with digit labels. However, no hidden state is capable of inducing the exploration of all digits in a single generation round, suggesting that the RBM in its classic version is not capable of mimicking the continuous and heterogeneous state exploration demonstrated by biological brains. ## 2 Materials and Methods ### Dataset Our simulations are based on the classic MNIST dataset [20], which contains images of 28x28 pixels representing handwritten digits from 0 to 9, encoded in 8-bit grayscale (values from 0 to 255, normalized between 0 and 1). It encompasses a training set of 60000 examples and a testing set of 10000 examples. Although this is a medium-sized dataset with a limited number of classes, it allows us to more clearly characterize the generative dynamics by measuring the number of different states visited during top-down sampling. ### Restricted Boltzmann machines Boltzmann machines are energy models composed of two different kinds of units: _visible units_, which are used to provide input data (e.g. pixels of an image) and _hidden units_, which are used to extract latent features by discovering higher-order interactions between visible units [18]. In RBMs there are no hidden-to-hidden and visible-to-visible connections: the only connections are between the visible and hidden units, which can be considered as two separate layers of a bipartite, fully-connected graph [21].
Neurons in a Boltzmann machine are conceptualized as stochastic units, whose activity is the result of a Bernoullian sampling with activation probability \(P\left(\sigma_{i}=1\right)\) defined as follows: \[P\left(\sigma_{i}=1\right)=\frac{1}{1+e^{-\Delta E_{i}/T}} \tag{1}\] where \(\Delta E_{i}\) is the difference in the energy of the system caused by the change in the state of the unit \(i\), and \(T\) is the temperature parameter that acts as a noise factor. Given a set of training data \(\mathcal{D}=\left\{x^{(i)}\right\}_{i=1}^{n}\), the parameters \(\theta\) of an RBM (that is, the weights connecting the units and the biases) are updated by maximizing the likelihood \(p(\mathcal{D}|\theta)\), where \(p(\mathcal{D}|\theta)\) is the Boltzmann distribution with temperature \(T=1\). Training is performed by gradient ascent, usually adopting the contrastive divergence training algorithm, which exploits Monte Carlo Markov chain methods to estimate the gradient update [22]. #### Model architecture and training details In our study, we used an RBM with 784 visible units (that is, equal to the vectorization of single MNIST examples (28x28 = 784)) and 1000 hidden units. The RBM was trained with 1-step contrastive divergence and learning rate \(\eta=0.1\) for both weights and biases (hidden and visible). The parameter update also included a momentum term \(\gamma\) to speed up the training. Following standard practice [23], \(\gamma\) was equal to 0.5 in the first 5 training epochs and 0.9 in successive iterations. Furthermore, the parameter update was decreased by the value of the parameter of interest in the previous training iteration multiplied by a decay factor equal to 0.0002. Both hidden and visible biases were initialized equal to 0, while connection weights were initialized with random numbers sampled from a zero-mean normal distribution with standard deviation equal to 0.1. The model was trained for 100 epochs following a batch-wise approach, with batch size = 125. Learning was monitored using a root mean square error loss function. #### Top-down sampling from RBM Data generation was performed at the end of the RBM training phase. To generate smoother images, during top-down sampling visible units were not binarized, thus assuming continuous values between 0 and 1. Hidden units were instead binarized through Bernoulli sampling. Data patterns were generated following the _label biasing_ procedure described in [14], where examples are generated top-down from a hidden state vector \(H_{\text{Label biasing}}\) obtained through the inversion of a linear classifier trained to classify the digit class from its hidden representation. A _generation step_ is defined as a single generation of a visible state (generated sample) from a hidden state. The generated sample is then used to instantiate the hidden state of the next generation step. In the first generation step, the activation of the visible layer \(A_{V}\) is computed as the matrix multiplication between \(H_{\text{Label biasing}}\) and the transposed weight matrix \(W\) of the RBM model. The result of the operation is added to the visible bias \(b_{V}\): \[A_{V}=(H_{\text{Label biasing}}\cdot W^{T})+b_{V} \tag{2}\] The first visible state \(V_{1}\) is computed as the output of a sigmoid activation function taking as input \(A_{V}\) divided by the temperature \(T\): \[V_{1}=\sigma(\frac{A_{V}}{T}) \tag{3}\] Figure 1: Illustration of the label biasing generation procedure. A hidden state vector \(H_{\text{Label biasing}}\) is obtained using the linear projection method [14]. Then, from \(H_{\text{Label biasing}}\), a visible vector \(V_{1}\) is generated. The process is repeated \(k\) times, where \(k\) is the desired number of generation steps.
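The whole sampling procedure can be condensed into a short NumPy sketch; this is a minimal illustration of Eqs. (2)-(3) combined with the hidden-state update of Eq. (4) introduced in the next paragraph, with hypothetical names rather than the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def top_down_generation(h_bias, W, b_v, b_h, T=1.0, k=100):
    """Generate k visible states from a label-biasing hidden vector h_bias.

    W has shape (n_visible, n_hidden); visible states stay continuous,
    while hidden states are Bernoulli-sampled."""
    v = sigmoid((h_bias @ W.T + b_v) / T)   # first generation step, Eqs. (2)-(3)
    samples = [v]
    for _ in range(k - 1):
        h = np.random.binomial(1, sigmoid((v @ W + b_h) / T))  # Eq. (4)
        v = sigmoid((h @ W.T + b_v) / T)
        samples.append(v)
    return samples
```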
In the following generation steps, the hidden state \(H_{s}\) is computed as follows: \[H_{s}\sim Bernoulli\left(p=\sigma(\frac{V_{s-1}\cdot W+b_{H}}{T})\right) \tag{4}\] where \(V_{s-1}\) is the visible state of the previous generation step and \(b_{H}\) is the hidden bias. The subsequent visible states are computed following the same procedure described for step 1 (Fig. 1). ### Digit classifier In order to establish whether top-down generation resulted in well-formed image patterns over visible units, we trained a classifier to identify digit classes taking as input the patterns generated by the RBM. We used a VGG-16 classifier, which is a convolutional architecture widely used in image classification [24]. The model was adapted from a publicly available implementation (see Footnote 1) and was made up of 4 VGG block units, followed by 3 fully connected layers and a final softmax layer. Unlike the original implementation, the final fully connected layer outputted a vector of 11 entries (i.e., the number of MNIST classes plus one special class representing non-digit samples), which was then processed by a softmax layer. The softmax output was used to classify the example and estimate the uncertainty of the network in the classification, which was measured by calculating the entropy of the softmax output. Footnote 1: [https://colab.research.google.com/drive/11N0HD7-ljljPFtsbstfxLSKWvg2y2ndmO?usp=sharing](https://colab.research.google.com/drive/11N0HD7-ljljPFtsbstfxLSKWvg2y2ndmO?usp=sharing) The classifier was trained on the MNIST dataset, with grayscale images resized to 32x32 pixels. The training set was made up of 113400 examples: 54000 were extracted from the MNIST training set, while the remaining 59400 represented non-digit examples. This was done to exclude random classifications when the network was exposed to unrecognizable digits, which is a situation that often occurs during spontaneous top-down sampling in energy-based models. Among these non-digit examples, 5400 were composed of scrambled digit images, while the remaining 54000 were training set examples with a random number of adjacent active pixels (i.e. intensity \(>0\)) masked. The choice of this method for producing non-digits was motivated by the empirical observation of cases in which the RBM generation produced objects that could not be identified as digits by a human observer. Learning was monitored through a validation set made up of the remaining 6000 examples of the MNIST training set. Testing was done on the 10000 images of the MNIST test set. The model was trained using minibatches of size 64, with stochastic gradient descent and learning rate \(\eta=0.01\) with cross-entropy loss. The model was trained for 20 epochs, selecting the model resulting in the highest validation accuracy \((99.3\%)\). ### Generativity metrics In order to measure the diversity and stability of the generative dynamics of the model, we implemented several metrics to characterize changes in visible and hidden activations during top-down sampling. The idea is that the model should develop attractor states in correspondence to digit representations, which are then dynamically visited during spontaneous generation of sensory patterns. For each generation step, the classifier evaluated the class (i.e.
digit from 0 to 9 and the non-digit case) of the sample produced. The _number of states visited_ was defined as the number of different digits visited during the generation process, without including the non-digit state. Multiple visits to the same state (i.e., the same digit recognized by the classifier) during a single generation trajectory were counted as 1. A related metric was the number of generation steps (_state time_ in short) in which the sample remained in each digit state, including the non-digit state. This index measures the stability of each attractor state. Finally, we measured the _number of transitions_ occurring during the generation process. A transition was defined as the change in classification of a sample from one state to another (transitions to the non-digit state were not included in this quantification). Transitions between states, including the non-digit state, were also used to estimate a _transition matrix_ of the entire generation procedure (i.e., taking into account all samples and all generation steps). The aim of the transition matrix was to estimate the probability, during the generation process, of transitioning from one digit (or non-digit) state to another. The transition matrix was estimated by counting all transitions from one state to another, normalized by the total number of transitions from that particular state. For each label biasing vector used, 100 samples were generated. For each sample, a generation period of 100 generation steps was performed. Measures are reported together with the standard error of the mean.
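These metrics reduce to simple bookkeeping over the classifier-labeled trajectory; the following sketch (a hypothetical helper, assuming labels 0-9 for digits and 10 for the non-digit class) illustrates the computation.

```python
import numpy as np

NON_DIGIT = 10

def generativity_metrics(labels):
    """Visited states, state times, transition count and the row-normalized
    transition matrix for one generation trajectory of classifier labels."""
    visited = len({l for l in labels if l != NON_DIGIT})
    state_time = np.bincount(labels, minlength=11)
    counts = np.zeros((11, 11))
    transitions = 0
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
        if a != b and b != NON_DIGIT:  # transitions to the non-digit state excluded
            transitions += 1
    matrix = counts / np.clip(counts.sum(axis=1, keepdims=True), 1, None)
    return visited, state_time, transitions, matrix
```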
## 3 Results

The classifier accuracy decreased as a function of the generation step for all digits (average classifier accuracy at step 100: 11.2%, Fig. 2b), except for the digit zero, which only saw a moderate decrease (classifier accuracy for digit 0 at step 100: 86.0%). This indicated that the samples were significantly distorted during the generation period, inducing more errors in the classifier (examples of sample generation from each digit are shown in Fig. 2a). In accordance with this, the average classification entropy increased during the generation period, showing a high anticorrelation with the classifier accuracy (\(\rho=-0.999\)). Interestingly, all digits showed a similar percentage of active units in the hidden layer throughout the generation process, keeping active only \(14-22\%\) of the units (average percentage of active hidden units at step 1: \(14.964\pm 0.079\%\); at step 100: \(21.892\pm 0.054\%\), n = 10 digits, Fig. 2c), which is in line with previous results suggesting the emergence of sparse coding in RBM models [25]. On average, in each generation period \(1.779\pm 0.211\) states were visited, with \(2.903\pm 0.538\) transitions between states (Fig. 2d, n = 1000). The transition matrix shows that most transitions occur within the same class of digits (average probability of transition within the same digit: \(0.870\pm 0.021\), \(n=10\), Fig. 2f), while the probabilities for a digit state to transition to another digit state are low, almost never exceeding 0.01 (\(0.012\pm 0.002\), n = 110). This, combined with the small number of transitions per generation period, suggests that state transitions are sharp and that "bouncing between two states" events are very rare, if present at all. Non-digits transition almost invariably to themselves: in other words, when a sample transitions to a non-digit, it hardly ever gets out of it in the following generation steps. The consequence of this attractor-like behavior of non-digit states is that all digits except 0 spend the majority of the generation period as non-digits (average non-digit state time between digits (0 excluded): \(76.099\pm 4.213\), n = 9 digits, Fig. 2e).

Figure 2: Characterization of sample generation. **a)** Example of generations, one per digit. Each column represents a single generation in particular generation steps (rows). Accuracy of the classifier (**b)**) and average percentage of active hidden units (i.e. \(h=1\), **c)**) as a function of the generation step. Each color represents a different digit. **d)** Average number of visited states and state transitions per generation period for different label biasing digits (on the \(x\) axis). **e)** Average state time for each digit state (columns) for different label biasing digits (rows). **f)** Transition matrix estimated from all generated data. Each entry represents the probability of transition from one state (rows) to another (columns).

A limitation of the "single digit" label biasing approach described in the previous paragraph is that it does not allow the exploration of heterogeneous sensory states, as highlighted by the small number of digit states visited on average in a single generation period (Fig. 2d). This might be due to the fact that label biasing forces the RBM to start the generation from a hidden state close to an attractor basin corresponding to the prototype of the selected digit, thus limiting the exploration of other states during top-down sampling. A way to overcome this issue could be to bias the network toward _chimera states_, for example by starting the generation from a hidden state mixing different digit representations. The hypothesis is that this could increase state exploration by decreasing the probability of stranding the generation process in a specific attractor. We implemented two methods to obtain such chimera states, both based on the observation that the distributions of the activations of the hidden states produced through label biasing are right-skewed, with a long tail of outliers at the upper end of the distribution (see Fig. 3a). This suggests that for each state there are only a few active hidden units. In the first method (_intersection method_), chimera states between two digits were computed by activating (i.e. \(h=1\)) only the units in common between the highest \(k\) active units of the label biasing vectors of the two digits, while the others were set to \(0\). Given that we observed that the percentage of active hidden units remained constrained in a small range during the generation process (Fig. 2c), we decided to set \(k\) equal to the rounded-down average number of active hidden units in the first step of generation (i.e. 149). In the second method (_double label biasing_), instead of using a one-hot encoded label for label biasing (see [14] for details), we utilized a binary vector with two active entries (i.e. \(=1\)) corresponding to the digits of the desired chimera state. The resulting \(H_{\text{Label biasing}}\) was then binarized, keeping active only the top \(k\) most active units (also here \(k=149\)). Generativity (quantified as the average number of states visited in a generation period) was characterized in all intersections of two digits (100 samples per digit combination, Fig. 3b,d). Examples of chimera state generations are shown in Fig. 3c,e.
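Both constructions amount to simple operations on the label-biasing hidden vectors, as the sketch below illustrates; invert_classifier is a hypothetical stand-in for the linear-classifier inversion of [14].

```python
import numpy as np

K = 149  # rounded-down average number of active hidden units at step 1

def intersection_chimera(h_a, h_b, k=K):
    """Intersection method: activate only the units shared by the top-k
    entries of two digits' label-biasing vectors."""
    shared = np.intersect1d(np.argsort(h_a)[-k:], np.argsort(h_b)[-k:])
    chimera = np.zeros_like(h_a)
    chimera[shared] = 1.0
    return chimera

def double_label_biasing_chimera(digit_a, digit_b, invert_classifier, k=K):
    """Double label biasing: invert a two-hot label vector through the linear
    classifier, then keep only the top-k most active hidden units."""
    y = np.zeros(10)
    y[[digit_a, digit_b]] = 1.0
    h = invert_classifier(y)  # hypothetical inversion routine
    chimera = np.zeros_like(h)
    chimera[np.argsort(h)[-k:]] = 1.0
    return chimera
```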
Interestingly, both techniques induced higher state exploration than the classic label biasing generation method (average number of visited states between chimera states, intersection method: \(2.951\pm 0.099\); average number of visited states between chimera states, double label biasing: \(2.104\pm 0.122\); \(n=45\) combinations of two digits), although only the state exploration of the intersection method was significantly higher than that of classical label biasing (Mann-Whitney U test (one-sided): \(p=6.703\cdot 10^{-5}\) (intersection method), \(p=0.139\) (double label biasing)). Some combinations of digit states (e.g. \(\{6,9\}\)) seemed to induce particularly high state exploration with both methods; however, the correlation between the number of visited states in the two methods was mild (\(\rho=0.334\), n = 45 combinations of two digits). Both methods also induced a significant drop in non-digit state time (average non-digit state time, intersection method: \(12.156\pm 2.272\), Mann-Whitney U test (one-sided): \(p=2.338\cdot 10^{-5}\); average non-digit state time, double label biasing: \(25.032\pm 4.215\), Mann-Whitney U test (one-sided): \(p=5.054\cdot 10^{-4}\); n = 45 combinations of two digits), suggesting that the increase in exploration leads to visiting more plausible sensory states.

Figure 3: Characterization of generation using chimera states. **a)** Distribution of activation probability of hidden units (\(P(h=1)\)) of label biasing vectors of each digit. **b)** Average number of visited states in a generation period (i.e. 100 generation steps) for each chimera state of two digits using the intersection method (n=100 samples). **c)** Example generation periods with two example intersection method chimera states (i.e. \(\{3,6\}\) (columns 1 to 3) and \(\{6,9\}\) (columns 4 to 6)). The average digit state times are shown in the bar plot on the right. **d)** Average number of visited states in a generation period (i.e. 100 generation steps) for each chimera state of two digits using the double label biasing method (n=100 samples). **e)** Example generation periods with two example double label biasing chimera states (i.e. \(\{3,6\}\) (columns 1 to 3) and \(\{6,9\}\) (columns 4 to 6)). The average digit state times are shown in the bar plot on the right.

## 4 Discussion

In this work we introduced an original framework to study the generation dynamics of Restricted Boltzmann Machines, a class of generative neural networks that have been largely employed as models of cortical computation. The proposed method exploits label biasing [14] to iteratively generate plausible configurations of hidden and visible states, thus allowing to explore the attractor landscape of the energy function underlying the generative model. To demonstrate the effectiveness of our approach, we characterized the generation dynamics of an RBM trained on a classical dataset of handwritten digits, exploring different sampling strategies to maximize state exploration. The standard label biasing approach initiates the generation of class prototypes from the hidden representation of single digits; our simulations show that this strategy can produce high-quality digit images, but does not allow the exploration of multiple states during the generative process. We thus explored the possibility of initiating the generation from chimera states, which might be considered "meta-stable" states that allow the dynamics to reach different attractors.
Both of the developed methods (the intersection method and double label biasing) indeed increased the number of states visited during the generation process, also significantly diminishing the non-digit state time. Nevertheless, the estimated transition matrices indicated that the non-digit state generally acts as a strong attractor, from which the system is unable to escape. This suggests that the generative dynamics of RBMs might not fully mimic the spontaneous dynamics observed in biological brains, which appear more flexible and heterogeneous. Future work should explore more recent versions of RBMs, for example the Gaussian-Bernoulli RBM [26], which is capable of generating meaningful samples even from pure noise and might thus develop more interesting generation dynamics. Another interesting research direction could be to explore more complex datasets, perhaps involving natural images, which would increase model realism and might allow a more systematic test of neuroscientific hypotheses [6].
2304.07014
AGNN: Alternating Graph-Regularized Neural Networks to Alleviate Over-Smoothing
Graph Convolutional Network (GCN) with the powerful capacity to explore graph-structural data has gained noticeable success in recent years. Nonetheless, most of the existing GCN-based models suffer from the notorious over-smoothing issue, owing to which shallow networks are extensively adopted. This may be problematic for complex graph datasets because a deeper GCN should be beneficial to propagating information across remote neighbors. Recent works have devoted effort to addressing over-smoothing problems, including establishing residual connection structure or fusing predictions from multi-layer models. Because of the indistinguishable embeddings from deep layers, it is reasonable to generate more reliable predictions before conducting the combination of outputs from various layers. In light of this, we propose an Alternating Graph-regularized Neural Network (AGNN) composed of Graph Convolutional Layer (GCL) and Graph Embedding Layer (GEL). GEL is derived from the graph-regularized optimization containing Laplacian embedding term, which can alleviate the over-smoothing problem by periodic projection from the low-order feature space onto the high-order space. With more distinguishable features of distinct layers, an improved Adaboost strategy is utilized to aggregate outputs from each layer, which explores integrated embeddings of multi-hop neighbors. The proposed model is evaluated via a large number of experiments including performance comparison with some multi-layer or multi-order graph neural networks, which reveals the superior performance improvement of AGNN compared with state-of-the-art models.
Zhaoliang Chen, Zhihao Wu, Zhenghong Lin, Shiping Wang, Claudia Plant, Wenzhong Guo
2023-04-14T09:20:03Z
http://arxiv.org/abs/2304.07014v1
# AGNN: Alternating Graph-Regularized Neural Networks to Alleviate Over-Smoothing ###### Abstract Graph Convolutional Network (GCN) with the powerful capacity to explore graph-structural data has gained noticeable success in recent years. Nonetheless, most of the existing GCN-based models suffer from the notorious over-smoothing issue, owing to which shallow networks are extensively adopted. This may be problematic for complex graph datasets because a deeper GCN should be beneficial to propagating information across remote neighbors. Recent works have devoted effort to addressing over-smoothing problems, including establishing residual connection structure or fusing predictions from multi-layer models. Because of the indistinguishable embeddings from deep layers, it is reasonable to generate more reliable predictions before conducting the combination of outputs from various layers. In light of this, we propose an Alternating Graph-regularized Neural Network (AGNN) composed of Graph Convolutional Layer (GCL) and Graph Embedding Layer (GEL). GEL is derived from the graph-regularized optimization containing Laplacian embedding term, which can alleviate the over-smoothing problem by periodic projection from the low-order feature space onto the high-order space. With more distinguishable features of distinct layers, an improved Adaboost strategy is utilized to aggregate outputs from each layer, which explores integrated embeddings of multi-hop neighbors. The proposed model is evaluated via a large number of experiments including performance comparison with some multi-layer or multi-order graph neural networks, which reveals the superior performance improvement of AGNN compared with state-of-the-art models. Graph convolutional network, semi-supervised classification, over-smoothing, graph representation learning. ## I Introduction Graph Neural Network (GNN) has become one of the promising technologies for manipulating graph-structural data in recent years, obtaining remarkable achievements in various pattern recognition fields, including node classification or clustering [1, 2, 3], recommender systems [4, 5, 6] and computer vision [7, 8, 9]. As one of the typical GNN-based models, Graph Convolutional Network (GCN) is receiving plentiful attention from a wide range of researchers [10, 11]. Owing to its powerful ability to extract knowledge from sparse weighted networks, GCN has also been adopted for weight prediction in sparse weighted graphs, such as dynamic graphs [12, 13, 14]. Originating from GCN, Graph AutoEncoder (GAE) was also investigated to conduct weighted link predictions via the reconstruction of the adjacency matrix [15, 16, 17]. GCN propagates node representations across topology networks via convolution operators on non-Euclidean space, which integrates node features and relationships involved in a graph. Nonetheless, recent practice and theoretical analysis have indicated that a 2-layer GCN generally performs the best, and a deeper GCN often leads to unfavorable performance, which is summarized as the over-smoothing issue. Over-smoothing is a widely recognized deficiency of GCN, which has been extensively investigated. Recent studies have proved that a graph convolution is exactly a special form of Laplacian smoothing, owing to which a deeper GCN may result in indistinguishable node features and make the downstream classification tasks challenging [21, 22, 23].
This makes most existing GCN-based models shallow, lacking the ability to mine knowledge from high-order neighbors, which is more severe for datasets with high-degree nodes. Considerable works have been devoted to solving this problem. On one hand, some research attempted to adopt a structure similar to the residual connections leveraged in Euclidean deep convolutional networks [19, 24, 25]. Most of these methods made full use of embeddings from the input matrix to avoid information loss. On the other hand, some studies placed more emphasis on the effective exploration and combination of hidden representations from different hops of neighbors [20, 23, 26]. A summary of the comparison between the representative algorithms (GCN [10], JK-Net [19], AdaGCN [20]) and the proposed method in this paper is shown in Figure 1.

Fig. 1: Architectures of numerous GCN-based methods and the proposed AGNN, where GCL is Graph Convolutional Layer and GEL is the proposed Graph Embedding Layer. GCN [18] is a sequence of GCLs, which encounters severe over-smoothing with deep layers. JK-Net [19] adds connections among layers to carry all low-order information to the last layer. AdaGCN [20] aggregates multi-hop embeddings of all layers. The proposed AGNN simultaneously carries low-order features to deep layers and accumulates node predictions from all layers.

Although some works have succeeded in relieving over-smoothing problems, they were still outperformed by a classical 2-layer GCN. In addition, a direct linear combination of embeddings from hidden layers may not work effectively, because the similar and indistinguishable features from deeper layers can annihilate useful information from shallow layers and confound the predictions of classifiers. Accordingly, it is crucial to develop a reliable network where each layer can yield accurate and distinguishable outputs before conducting the prediction fusion. In pursuit of addressing the aforementioned problems, in this paper, we design an Alternating Graph-regularized Neural Network (AGNN) that enables the construction of deep layer architectures. AGNN alternately performs the forward computation of the Graph Convolutional Layer (GCL) and the Graph Embedding Layer (GEL). In order to get rid of similar and indistinguishable features caused by over-smoothing, GEL is designed to project original node embeddings onto a low-dimensional space in deep layers and preserve critical features via sparse outputs. Thus, each proposed GEL aims to learn Laplacian-constrained sparse representations from original features, on the basis of the optimization problem w.r.t. the Laplacian-based graph regularization and sparsity constraint. We derive the updating rules of this optimization target and transform them into GEL, which preserves discriminative node embeddings during network training and alleviates the over-smoothing problem. We analyze the network architecture and draw the conclusion that both GCL and GEL can be approximately regarded as solutions to distinct graph regularization problems. Furthermore, with more accurate predictions yielded by GCL and GEL, an improved AdaBoost algorithm is adopted to aggregate node representations from varying hidden layers, so that multi-order information from different depths of the network can be leveraged.
In summary, the contributions of this paper primarily lie in:

1) According to a graph-regularized optimization problem and its iterative solutions, we construct a new layer dubbed GEL, which can alleviate the over-smoothing phenomenon by carrying low-order information to deep layers.

2) A graph-regularized neural network with alternating GCLs and GELs is proposed, which adopts both residual connection and embedding aggregation architectures. Its layers can be regarded as approximations of different graph optimization problems, which promotes the interpretability of the model.

3) With more accurate embeddings yielded by deep layers, an improved AdaBoost algorithm is designed to leverage features from distinct hidden layers, enabling the model to aggregate high-quality node representations from multi-hop neighbor propagation.

4) Substantial experimental results reveal the superiority of the proposed AGNN, which succeeds in coping with the over-smoothing issue and outperforms the widely applied 2-layer GCN and other multi-layer GCN-based methods with deep network structures.

The rest of this paper is organized as follows. Recent works on GCN and approaches to cope with the over-smoothing issue are discussed in Section II. In Section III, we elaborate on the proposed framework, including a detailed analysis and comparison between AGNN and other models. We evaluate AGNN with comprehensive experiments in Section IV, looking into its performance under varying experimental settings. Finally, we conclude our work in Section V.

## II Related Works

### _Graph Convolutional Network_

GCN has been applied to a multitude of applications and has attracted attention from a wide range of researchers in recent years. Xu et al. came up with a deep feature aggregation model with a graph convolutional network to conduct high spatial resolution scene classification [27]. A GCN-based approach under the autoencoder framework was proposed to perform unsupervised community detection [28]. In order to reduce the computational cost of graph convolutions, a low-pass collaborative filter was proposed to utilize GCN with a large graph [29]. Gan et al. designed a multi-graph fusion model that combined the local graph and the global graph to produce a high-quality graph for GCN [30]. An aggregation scheme was applied to promote the robustness of GCN against structural attacks [31]. Geometric scattering transformations and residual convolutions were leveraged to enhance the conventional GCN [18]. Xu et al. presented a spatiotemporal multi-graph convolutional fusion network, which exploited the graph-structural road network for urban vehicle emission estimation [32]. GCN with a question-aware gating mechanism was presented to aggregate evidence on the path-based graph [33]. A new graph convolution operator was proposed to obtain robust embeddings in the spectral domain [34]. A variant of GCN was derived via a modified Markov diffusion kernel, which explored the global and local contexts of nodes [35]. Weighted link prediction is also a critical application of GCN. For example, a dynamic GCN was proposed with a tensor M-product technique to cope with the adjacency and feature tensors yielded by dynamic graphs [36]. Cui et al. proposed an adaptive graph encoder to strengthen the filtered features for more discriminative node embeddings, which was applied to link prediction tasks [17]. Wang et al.
designed a temporal GAE, which encoded the fundamentally asymmetric nature of a directed network from neighborhood aggregation and captured link weights via reshaping the adjacency matrix [15]. However, most of these GCN-based models suffer from shallow network structures owing to the over-smoothing issue.

### _Over-smoothing Issue_

Numerous works have investigated approaches to alleviate the over-smoothing issue. An improved normalization trick applying the "diagonal enhancement" was introduced to help build a deep GCN [37]. Simple graph convolution [38] was proposed to mine high-order embeddings in the graph via utilizing the \(k\)-th power of the graph convolutional matrix and removing the ReLU function. A multi-layer GCN was constructed with AdaBoost to linearly combine embeddings from varying layers [20]. Cui et al. restricted over-smoothing by extracting hierarchical multi-scale node feature representations [39]. PPNP and APPNP [26] were presented to replace the power of the graph convolutional matrix, inspired by the personalized PageRank matrix. Residual connections and dilated convolutions from CNNs were applied to promote the training of a deep GCN model. Jumping knowledge networks preserved the locality of node embeddings via dense skip connections that merged features from each layer [19]. A deep GCN was proposed with residual connection and identity mapping to relieve the over-smoothing problem [24]. Most of these methods attempted to alleviate over-smoothing via connecting distinct network layers, simplifying multi-order graph convolutions, or conducting multi-layer feature fusion. Nonetheless, these existing works did not simultaneously consider cross-layer feature connection and the aggregation of embeddings from varying layers, which would benefit a multi-layer model in obtaining more precise predictions.

## III The Proposed Method

Given a connected undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with \(n\) nodes and \(e\) edges, we define the corresponding adjacency matrix as \(\mathbf{A}\in\mathbb{R}^{n\times n}\). The node features are denoted by the matrix \(\mathbf{X}\in\mathbb{R}^{n\times m}\), i.e., \(\mathbf{x}_{i}\) is an \(m\)-dimensional feature vector of the \(i\)-th node. The proposed AGNN aims to carry out the semi-supervised classification task with the given set \(\Omega\) of partially labeled samples and its corresponding ground truth matrix \(\mathbf{Y}\in\mathbb{R}^{n\times c}\) encoding one-hot vectors, where \(c\) is the number of classes. For the purpose of better readability, we summarize the primarily used mathematical notations in Table I.

| Notations | Explanations |
| --- | --- |
| \(\mathbf{X}\) | Feature matrix of nodes. |
| \(\mathbf{A}\) | Adjacency matrix. |
| \(\mathbf{Y}\) | Label information. |
| \(\mathbf{H}^{(l)}\) | Output of the \(l\)-th GCL. |
| \(\mathbf{Z}^{(l)}\) | Output of the \(l\)-th GEL. |
| \(\mathbf{D}\) | Diagonal degree matrix. |
| \(\mathbf{L}\) | Laplacian matrix. |
| \(\mathbf{W}_{g}^{(l)}\) | Weight matrix for the \(l\)-th GCL. |
| \(\mathbf{W}_{e1}^{(l)}\), \(\mathbf{W}_{e2}^{(l)}\) | Weight matrices for the \(l\)-th GEL. |
| \(\mathbf{Prox}_{g}(\cdot)\) | Proximal operator. |
| \(\xi_{(\theta_{1},\theta_{2})}\) | MSReLU function with hyperparameters \(\theta_{1}\) and \(\theta_{2}\). |
| \(c(\cdot)\) | Weak classifier. |
| \(\mathbf{S}\) | Weighted embedding of multi-order feature fusion. |
| \(\alpha^{(l)},\beta^{(l)}\) | Weights of classifiers for GCL and GEL. |
| \(\pi_{t}\) | Node weights for AdaBoost. |
| \(e_{\mathbf{H}}^{(l)}\), \(e_{\mathbf{Z}}^{(l)}\) | Weighted classification error rates. |
| \(\eta_{t}\) | Node weight updating weight for AdaBoost. |
| \(R\) | Number of classes. |

TABLE I: A summary of primary notations in this paper.

As described in Figure 2, AGNN is a sequence of alternating GCLs and GELs, and an improved AdaBoost strategy is adopted to merge multi-layer features. Both GCL and GEL are constructed from graph-regularized optimization problems, which form a basic network block of AGNN. In particular, GEL periodically projects the original node embeddings onto deep layers to alleviate over-smoothing, which introduces residual connections into AGNN. In Section III-A, we first analyze two distinct graph-regularized optimization problems, on the basis of which AGNN is constructed. After that, an improved AdaBoost is designed to conduct multi-layer feature fusion in Section III-B. Finally, we summarize and analyze the proposed model in Section III-C, including a time complexity analysis and a comparison to related works.
### _Alternating Graph Convolutional Layers and Graph Embedding Layers_

First, we revisit the definition of a vanilla graph convolution operator. A GCL is formulated as \[\mathbf{H}^{(l)}=\sigma\left(\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{H}^{(l-1)}\mathbf{W}_{g}^{(l)}\right), \tag{1}\] where \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\) is the adjacency matrix that adds self-loop and \([\tilde{\mathbf{D}}]_{ii}=\sum_{j}[\tilde{\mathbf{A}}]_{ij}\) denotes the diagonal degree matrix. The optional activation function is denoted as \(\sigma(\cdot)\). In fact, the added self-loop \(\mathbf{A}+\mathbf{I}\) can be regarded as a simple residual connection to the previous layer. Actually, GCL can be formulated as a graph-regularized optimization problem. Namely, we have the following theorem. **Theorem 1**.: _With a linear transformation matrix \(\mathbf{W}_{g}^{(l)}\) and the node embedding \(\mathbf{H}^{(l-1)}\) from the previous layer, the \(l\)-th GCL defined in Eq. (1) is the first-order approximation of the following optimization problem:_ \[\mathbf{H}^{(l)}=\arg\min_{\mathbf{E}^{(l)}}\left\|\mathbf{E}^{(l)}-\mathbf{H}^{(l-1)}\mathbf{W}_{g}^{(l)}\right\|_{F}^{2}+\text{Tr}\left({\mathbf{E}^{(l)}}^{T}\tilde{\mathbf{L}}\mathbf{E}^{(l)}\right), \tag{2}\] _where \(\tilde{\mathbf{L}}=\mathbf{I}-\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\)._ Proof.: The derivative w.r.t. \(\mathbf{E}^{(l)}\) of the optimization problem defined in Eq. (2) is \[\frac{\partial\mathcal{J}}{\partial\mathbf{E}^{(l)}}=2\left(\mathbf{E}^{(l)}-\mathbf{H}^{(l-1)}\mathbf{W}_{g}^{(l)}\right)+2\tilde{\mathbf{L}}\mathbf{E}^{(l)}. \tag{3}\] Setting the derivative to 0, we have the closed-form solution \[\mathbf{E}^{(l)}=\left(\mathbf{I}+\tilde{\mathbf{L}}\right)^{-1}\mathbf{H}^{(l-1)}\mathbf{W}_{g}^{(l)}. \tag{4}\] Because the term \(\left(\mathbf{I}+\tilde{\mathbf{L}}\right)^{-1}\) can be decomposed into a Taylor series, i.e., \[\left(\mathbf{I}+\tilde{\mathbf{L}}\right)^{-1}=\mathbf{I}-\tilde{\mathbf{L}}+\tilde{\mathbf{L}}^{2}-\cdots+(-1)^{t}\tilde{\mathbf{L}}^{t}+\cdots, \tag{5}\] we have the first-order truncated approximation as \[\left(\mathbf{I}+\tilde{\mathbf{L}}\right)^{-1}\approx\mathbf{I}-\tilde{\mathbf{L}}=\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}. \tag{6}\] Consequently, we obtain the approximation of \(\mathbf{H}^{(l)}\) as \[\mathbf{H}^{(l)}\approx\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{H}^{(l-1)}\mathbf{W}_{g}^{(l)}, \tag{7}\] which indicates that GCL is a first-order approximation of Problem (2).
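As a quick illustration, the propagation rule of Eq. (1), i.e., the first-order approximation of Eq. (7) followed by the activation, can be written in a few lines of NumPy; this is a schematic dense implementation rather than the authors' code.

```python
import numpy as np

def gcl_forward(A, H_prev, W_g):
    """One GCL step: sigma(D^{-1/2} (A + I) D^{-1/2} H W) as in Eq. (1), with ReLU."""
    A_tilde = A + np.eye(A.shape[0])                  # adjacency with self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))   # diagonal of D^{-1/2}
    S = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    return np.maximum(S @ H_prev @ W_g, 0.0)          # optional activation sigma
```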
\\ \(\xi_{(\theta_{1},\theta_{2})}\) & MSReLU function with hyperparameters \(\theta_{1}\) and \(\theta_{2}\). \\ \(c(\cdot)\) & Weak classifier. \\ \(\mathbf{S}\) & Weighted embedding of multi-order feature fusion. \\ \(\alpha^{(l)},\beta^{(l)}\) & Weights of classifiers for GCL and GEL. \\ \(\pi_{i}\) & Node weights for AdaBoost. \\ \(e_{\mathbf{H}}^{(l)}\), \(e_{\mathbf{Z}}^{(l)}\) & Weighted classification error rates. \\ \(\eta_{i}\) & Node weight updating rate for AdaBoost. \\ \(R\) & Number of classes. \\ \hline \hline \end{tabular} \end{table} TABLE I: A summary of the primary notations in this paper.

A solution is enabling the model to carry low-order information by connecting initial node features to each GCL. Thus, we develop a new layer to bring features from the original space to deep layers. To be consistent with GCL, we define a graph-regularized optimization problem to formulate this layer. Instead of directly adding the initial embeddings to the end of each GCL, a trainable projection derived from graph-regularized optimization is applied, which adaptively learns low-dimensional representations from the original node embeddings. Namely, with graph embedding \(\mathbf{H}\), we consider the following sparsity-constrained optimization \[\mathbf{Z}^{(l)}=\arg\min_{\mathbf{H}}\left\|\mathbf{X}-\mathbf{H}\mathbf{P}^{(l)}\right\|_{F}^{2}+\text{Tr}\left(\mathbf{H}^{T}\tilde{\mathbf{L}}\mathbf{H}\right)+\left\|\mathbf{H}\right\|_{1}, \tag{8}\] which explores Laplacian-constrained representations of the original feature space after the \(l\)-th GCL. In pursuit of more distinguishable compressed node embeddings, we adopt \(\left\|\mathbf{H}\right\|_{1}\) to encourage sparse representations. The sparsity constraint enables GEL to yield more discriminative node representations that only include important features, and counteracts the tendency of different nodes to share similar features in deep layers, which helps solve the over-smoothing issue. The embedding \(\mathbf{H}\) should have the same dimension as the output of the previous GCL, and we can project it onto the original feature space with an over-complete dictionary matrix \(\mathbf{P}^{(l)}\in\mathbb{R}^{d_{l}\times m}\), where \(d_{l}<m\) is the number of hidden units at the \(l\)-th GCL. In addition, we adopt the Laplacian embedding criterion \(\text{Tr}\left(\mathbf{H}^{T}\tilde{\mathbf{L}}\mathbf{H}\right)\) to keep connected nodes close, where the Laplacian matrix \(\tilde{\mathbf{L}}\) is precomputed. The term \(\left\|\mathbf{H}\right\|_{1}\), which promotes the sparsity of the outputs, is added to extract robust projected embeddings during training. Letting \(f\left(\mathbf{H}\right)=\text{Tr}\left(\mathbf{H}^{T}\tilde{\mathbf{L}}\mathbf{H}\right)+\left\|\mathbf{X}-\mathbf{H}\mathbf{P}^{(l)}\right\|_{F}^{2}\) and \(g\left(\mathbf{H}\right)=\left\|\mathbf{H}\right\|_{1}\), we can derive the updating rules of Problem (8) at \(\mathbf{H}^{(l)}\) via the proximal gradient descent method. Namely, \[\mathbf{Z}^{(l)}=\arg\min_{\mathbf{H}}f(\mathbf{H}^{(l)})+\left\langle\nabla f(\mathbf{H}^{(l)}),\mathbf{H}-\mathbf{H}^{(l)}\right\rangle+\frac{\tau}{2}\left\|\mathbf{H}-\mathbf{H}^{(l)}\right\|_{F}^{2}+\left\|\mathbf{H}\right\|_{1}=\arg\min_{\mathbf{H}}\frac{\tau}{2}\left\|\mathbf{H}-\mathbf{Y}\right\|_{F}^{2}+\left\|\mathbf{H}\right\|_{1}, \tag{9}\] where \(\mathbf{Y}=\mathbf{H}^{(l)}-\frac{1}{\tau}\nabla f\left(\mathbf{H}^{(l)}\right)\) and \(\tau\) is the Lipschitz constant.
Given the proximal operator \(\mathbf{Prox}_{g}(\cdot)\), Problem (9) can be solved by the proximal mapping w.r.t. the \(\ell_{1}\) norm. Because we have the derivative \[\nabla f(\mathbf{H}^{(l)})=2\tilde{\mathbf{L}}\mathbf{H}^{(l)}+2\left(\mathbf{H}^{(l)}\mathbf{P}^{(l)}-\mathbf{X}\right)\mathbf{P}^{(l)T}, \tag{10}\] the proximal mapping can be derived as \[\mathbf{Z}^{(l)}=\mathbf{Prox}_{g}\left(\mathbf{H}^{(l)}-\frac{1}{\tau}\nabla f(\mathbf{H}^{(l)})\right). \tag{11}\] Transforming the terms \(\mathbf{I}-\frac{2}{\tau}\mathbf{P}^{(l)}\mathbf{P}^{(l)T}\) and \(\frac{2}{\tau}\mathbf{P}^{(l)T}\) into trainable weight matrices \(\mathbf{W}_{e1}^{(l)}\in\mathbb{R}^{d_{l}\times d_{l}}\) and \(\mathbf{W}_{e2}^{(l)}\in\mathbb{R}^{m\times d_{l}}\), respectively, we have the following proximal projection \[\mathbf{Z}^{(l)}=\mathbf{Prox}_{g}\left(\mathbf{H}^{(l)}\mathbf{W}_{e1}^{(l)}+\mathbf{X}\mathbf{W}_{e2}^{(l)}-\lambda\tilde{\mathbf{L}}\mathbf{H}^{(l)}\right), \tag{12}\] where \(\lambda=\frac{2}{\tau}\) is a hyperparameter. Because \(\mathbf{Prox}_{g}(\cdot)\) can be regarded as an activation function, Eq. (12) is similar to the definition of a neural network layer with two trainable weight matrices. In particular, the proximal operator for the \(\ell_{1}\) constraint promoting sparsity is \[\mathbf{Prox}_{g}\left(\mathbf{Z}_{ij}^{(l)}\right)=\text{sign}\left(\mathbf{Z}_{ij}^{(l)}\right)\left(\left|\mathbf{Z}_{ij}^{(l)}\right|-\theta\right)_{+}, \tag{13}\] which is the Soft Thresholding (ST) function, where \(\theta\) is the hyperparameter guaranteeing the sparsity of the output [40]. It can be realized by a parameterized ReLU-based activation function, i.e., \[\xi_{\theta}(z)=\text{ReLU}\left(z-\theta\right)-\text{ReLU}\left(-z-\theta\right). \tag{14}\] Due to the definition of the ST function, \(|\xi_{\theta}(z)|\) is smaller than \(|z|\) when \(z>\theta\) or \(z<-\theta\). This may be problematic due to the gap between the original features and the outputs of \(\xi_{\theta}(z)\) when \(\theta\) is relatively large.

Fig. 2: The framework of a 6-layer AGNN, which consists of three GCLs and three GELs. AGNN is a block-wise graph neural network framework constructed with alternating GCL and GEL, where each block contains a GCL and a GEL. For the purpose of exploiting reliable and discriminative multi-hop information, an improved AdaBoost strategy is utilized to aggregate node predictions yielded by weak classifiers in all layers, and the whole framework is evaluated by the cross-entropy loss.

For the sake of relieving the influence of this problem, in this paper we adopt a multi-stage proximal projection for the sparsity constraint, as shown below: \[\xi_{(\theta_{1},\theta_{2})}(z)=\begin{cases}z,&z\geq\theta_{2},\\(\frac{2\theta_{2}-\theta_{1}}{\theta_{2}})(z-\theta_{1}),&\theta_{1}\leq z<\theta_{2},\\0,&-\theta_{1}\leq z<\theta_{1},\\(\frac{2\theta_{2}-\theta_{1}}{\theta_{2}})(z+\theta_{1}),&-\theta_{2}\leq z<-\theta_{1},\\z,&z<-\theta_{2},\end{cases} \tag{15}\] where \(\theta_{2}\geq\theta_{1}>0\). As a matter of fact, it can also be implemented by a combination of ReLU functions.
Consequently, we define a new ReLU-based activation function as \[\xi_{(\theta_{1},\theta_{2})}\left(\mathbf{Z}^{(l)}\right)=w_{1}\left(\text{ReLU}\left(\mathbf{Z}^{(l)}-\theta_{1}\right)-\text{ReLU}\left(-\mathbf{Z}^{(l)}-\theta_{1}\right)\right)-w_{2}\left(\text{ReLU}\left(\mathbf{Z}^{(l)}-\theta_{2}\right)-\text{ReLU}\left(-\mathbf{Z}^{(l)}-\theta_{2}\right)\right), \tag{16}\] where \(w_{1}\) and \(w_{2}\) are computed according to the settings of \(\theta_{1}\) and \(\theta_{2}\), that is, \[w_{1}=\frac{2\theta_{2}-\theta_{1}}{\theta_{2}},\qquad w_{2}=w_{1}-1=\frac{\theta_{2}-\theta_{1}}{\theta_{2}}. \tag{17}\] Consequently, we have \(2\geq w_{1}\geq 1\geq w_{2}\geq 0\). Eq. (16) is termed a Multi-Stage ReLU (MSReLU) function. The comparison of MSReLU with other activation functions is shown in Figure 3. It can be observed that, with suitable \(\theta_{1}\) and \(\theta_{2}\), the gap between \(\xi_{(\theta_{1},\theta_{2})}(z)\) and \(|z|\) is smaller, owing to the increased slope when \(\theta_{1}<z<\theta_{2}\) and \(-\theta_{2}<z<-\theta_{1}\), which is beneficial to obtaining more accurate features. When \(z>\theta_{2}\) or \(z<-\theta_{2}\), the slope is the same as that of ReLU and soft thresholding, which maintains the feature distribution of the outputs.

Combined with GCL, we can formulate a basic block of the alternating forward computation (containing two layers) as \[\mathbf{H}^{(l)}=\sigma\left(\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{H}^{(l-1)}\mathbf{W}_{g}^{(l)}\right), \tag{18}\] \[\mathbf{Z}^{(l)}=\xi_{(\theta_{1},\theta_{2})}\left(\mathbf{H}^{(l)}\mathbf{W}_{e1}^{(l)}+\mathbf{X}\mathbf{W}_{e2}^{(l)}-\lambda\tilde{\mathbf{L}}\mathbf{H}^{(l)}\right), \tag{19}\] where \(\mathbf{H}^{(l-1)}=\mathbf{Z}^{(l-1)}\) for \(l=2,\ldots,t\) and \(\mathbf{H}^{(0)}=\mathbf{X}\). We term the forward computation defined in Eq. (19) the Graph Embedding Layer (GEL). The definition of GEL shows that it refines graph representations from the previous GCL and considers one-hop embeddings of neighbors via \(\tilde{\mathbf{L}}\mathbf{H}^{(l)}\). Here we adopt \(\mathbf{H}^{(l)}\) generated by GCL as the input of GEL, because GCL also implicitly optimizes the graph Laplacian regularization term. In fact, both GCL and GEL are one-step approximations of Laplacian-based graph regularization problems. GEL also leverages the information of the original features via the input-injected computation \(\mathbf{X}\mathbf{W}_{e2}^{(l)}\) to preserve sparse and discriminative representations of nodes at the hidden and last layers, thereby alleviating the over-smoothing problem. On the basis of Eqs. (18) and (19), we can construct a deep block-wise graph neural network with \(2t\) layers that alternates GCLs and GELs.

### _Alternating Graph-regularized Neural Network with Improved AdaBoost_

In order to further leverage the underlying features at each layer and obtain results contributed by different hops of neighborhood relationships, we adopt a variant of AdaBoost to compute the final predictions of the model. For the purpose of obtaining graph representations with the same dimension, we adopt a weak classifier \[c\left(\mathbf{H}^{(l)}\right)=\text{Softmax}\left(\sigma\left(\mathbf{H}^{(l)}\mathbf{W}_{c}+b\right)\right) \tag{20}\] for each GCL, where \(\mathbf{W}_{c}\in\mathbb{R}^{d_{l}\times d_{L}}\). The weak classifier \(c\left(\mathbf{Z}^{(l)}\right)\) for GEL is analogous.
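To make Eqs. (16), (18) and (19) concrete, the following is a minimal NumPy sketch of the MSReLU activation and of one GCL-GEL block; the weight matrices stand in for parameters that would be learned by back-propagation, and `A_hat` and `L_tilde` denote the precomputed \(\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\) and \(\tilde{\mathbf{L}}\).

```python
import numpy as np

def msrelu(z, t1=0.02, t2=0.04):
    """Multi-Stage ReLU of Eq. (16), with w1 and w2 set as in Eq. (17)."""
    relu = lambda x: np.maximum(x, 0.0)
    w1 = (2 * t2 - t1) / t2
    w2 = w1 - 1.0
    return (w1 * (relu(z - t1) - relu(-z - t1))
            - w2 * (relu(z - t2) - relu(-z - t2)))

def agnn_block(H_prev, X, A_hat, L_tilde, Wg, We1, We2, lam=1.0):
    """One basic block: a GCL (Eq. 18) followed by a GEL (Eq. 19)."""
    H = np.maximum(A_hat @ H_prev @ Wg, 0.0)             # GCL with ReLU
    Z = msrelu(H @ We1 + X @ We2 - lam * (L_tilde @ H))  # input-injected GEL
    return H, Z

# Shapes follow the paper: Wg is d_{l-1} x d_l, We1 is d_l x d_l, We2 is
# m x d_l; for the first block, H^{(0)} = X, so d_0 = m.
```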
We assign corresponding weights \(\alpha^{(l)}\) and \(\beta^{(l)}\) to each GCL and GEL. Formally, the final weighted result of the various classifiers is \[\mathbf{S}=\sum_{l=1}^{t}\left(\alpha^{(l)}c\left(\mathbf{H}^{(l)}\right)+\beta^{(l)}c\left(\mathbf{Z}^{(l)}\right)\right), \tag{21}\] where \(\alpha^{(l)}\) indicates the weight of the classifier w.r.t. \(\mathbf{H}^{(l)}\) and \(\beta^{(l)}\) the weight of the classifier w.r.t. \(\mathbf{Z}^{(l)}\). We measure the performance of each weak classifier on labeled nodes to calculate the classifier weights, which ensures that classifiers with higher accuracy on the training set are assigned larger weights. First, the weighted error rates of the two types of classifiers are computed by \[e_{\mathbf{H}}^{(l)}=\sum_{i\in\Omega}\pi_{i}\mathbb{I}\left(c\left(\mathbf{H}^{(l)}\right)_{i}\neq y_{i}\right)/\sum_{i\in\Omega}\pi_{i}, \tag{22}\] \[e_{\mathbf{Z}}^{(l)}=\sum_{i\in\Omega}\pi_{i}\mathbb{I}\left(c\left(\mathbf{Z}^{(l)}\right)_{i}\neq y_{i}\right)/\sum_{i\in\Omega}\pi_{i}, \tag{23}\] where \(\Omega\) is the set of samples having supervision information and \(\pi_{i}\) is the weight of a labeled node.

Fig. 3: Comparison of different activation functions (ReLU, soft thresholding and MSReLU) for sparse proximal projection, where the hyperparameters are fixed as \(\theta=0.05\) for soft thresholding, and \(\theta_{1}=0.05\), \(\theta_{2}=0.10\) for MSReLU.

The sample weights are initialized by \(\pi_{i}=\frac{1}{|\Omega|}\). Therefore, the classifier weights \(\alpha^{(l)}\) and \(\beta^{(l)}\) are computed by \[\alpha^{(l)}=\frac{1}{2}\log\frac{1-e_{\mathbf{H}}^{(l)}}{e_{\mathbf{H}}^{(l)}}+\log(R-1), \tag{24}\] \[\beta^{(l)}=\frac{1}{2}\log\frac{1-e_{\mathbf{Z}}^{(l)}}{e_{\mathbf{Z}}^{(l)}}+\log(R-1), \tag{25}\] where \(R\) is the number of classes. We apply softmax normalization to all classifier weights, i.e., \[[\boldsymbol{\alpha},\boldsymbol{\beta}]\leftarrow\text{Softmax}([\boldsymbol{\alpha},\boldsymbol{\beta}]), \tag{26}\] where \(\boldsymbol{\alpha}=[\alpha^{(1)},\cdots,\alpha^{(t)}]\) and \(\boldsymbol{\beta}=[\beta^{(1)},\cdots,\beta^{(t)}]\). For the purpose of increasing the weights of incorrectly classified nodes, we update \(\pi_{i}\) by \[\pi_{i}\leftarrow(1+\eta_{i})\pi_{i}\,\mathbb{I}\left(c_{i}\neq y_{i}\right), \tag{27}\] \[\pi_{i}\leftarrow\max(1-\eta_{i},\rho)\pi_{i}\,\mathbb{I}\left(c_{i}=y_{i}\right), \tag{28}\] where \(c_{i}\) is the prediction of the former classifier and \(y_{i}\) is the ground truth. \(\eta_{i}\) is an updating rate that changes the sample weight automatically according to the predictions of the weak classifier. The threshold \(0<\rho<1\) is adopted to avoid nodes with weights of zero. In particular, the updating rate \(\eta_{i}\) applied in this paper is defined by \[\eta_{i}=\exp\left(\log\left(\frac{p_{i,r}}{\max\left(\sum_{j=1,j\neq r}^{R}p_{i,j},\epsilon\right)}\right)\right), \tag{29}\] where \(p_{i,r}\) is the probability of the \(i\)-th sample belonging to the \(r\)-th class and is obtained from the \(r\)-th entry of \(\left[c\left(\mathbf{H}^{(l)}\right)\right]_{i}\) or \(\left[c\left(\mathbf{Z}^{(l)}\right)\right]_{i}\). Namely, \(p_{i,r}=\left[c\left(\mathbf{H}^{(l)}\right)\right]_{i,r}\) or \(p_{i,r}=\left[c\left(\mathbf{Z}^{(l)}\right)\right]_{i,r}\). Here \(\epsilon\) is a tiny value avoiding division by zero. A higher \(\eta_{i}\) indicates that the importance of the \(i\)-th sample should be larger if it is incorrectly classified, and smaller otherwise.
For a correctly predicted node, its weight decreases remarkably if \(p_{i,r}\) is high. This indicates that the model should pay less attention to correct predictions with high confidence. As for a misclassified node, its weight grows considerably with higher \(p_{i,r}\), because the prediction then strongly contradicts the ground truth. With the weighted node embedding obtained by Eq. (21), the objective of the proposed AGNN is the cross-entropy loss \[\mathcal{L}=-\sum_{i\in\Omega}\sum_{j=1}^{c}\mathbf{Y}_{ij}\text{ln}\mathbf{S}_{ij}, \tag{30}\] which only involves nodes in the training set \(\Omega\), so as to perform the semi-supervised classification task.

### _Model Analysis_

Algorithm 1 depicts the procedure of AGNN. In general, the procedure of AGNN is divided into two parts: the forward computation of multiple network layers and the calculation of the weighted graph embedding \(\mathbf{S}\) via the AdaBoost variant. Given the weight matrix \(\mathbf{W}_{g}^{(l)}\in\mathbb{R}^{d_{l-1}\times d_{l}}\), the computational complexity of the \(l\)-th GCL is linear in the number of edges \(|\mathcal{E}|\); namely, it is \(\mathcal{O}(|\mathcal{E}|d_{l-1}d_{l})\). As to the \(l\)-th GEL, the computational complexity is \(\mathcal{O}(|\mathcal{E}|d_{l}+nmd_{l})\). Consequently, the forward computation of a basic block with a GCL and a GEL is approximately \(\mathcal{O}(|\mathcal{E}|d_{l-1}d_{l}+nmd_{l})\). Owing to \(d_{l}\ll\min(n,m)\), GEL does not significantly increase the computational cost of the networks.

```
Input:  adjacency matrix A (n x n), feature matrix X (n x m),
        hyperparameters lambda, rho, theta_1 and theta_2
Output: graph embedding S (n x c)
while not convergent do
    for l = 1 to t do
        compute the output H^(l) of the l-th GCL via Eq. (18)
        compute the output Z^(l) of the l-th GEL via Eq. (19)
    end for
    initialize weights pi_i = 1/|Omega| for i in Omega
    for l = 1 to t do
        update sample weights {pi_i} for the l-th GCL via Eqs. (27)-(29)
        calculate the classifier weight alpha^(l) for the l-th GCL via Eqs. (22) and (24)
        update sample weights {pi_i} for the l-th GEL via Eqs. (27)-(29)
        calculate the classifier weight beta^(l) for the l-th GEL via Eqs. (23) and (25)
    end for
    obtain the weighted embedding S via Eqs. (26) and (21)
    update all trainable parameters via back-propagation
end while
return weighted graph embedding S
```
**Algorithm 1** Alternating Graph-regularized Neural Network
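The following minimal NumPy sketch covers the AdaBoost half of Algorithm 1: given the per-layer softmax scores produced by the weak classifiers of Eq. (20) (e.g., on the outputs of the `agnn_block` sketch above), it computes the classifier weights and the fused prediction S, and evaluates the loss of Eq. (30). Two interpretive assumptions are flagged in the comments: \(p_{i,r}\) is taken as the probability of the predicted class, and the error rate is clipped before the logarithm of Eq. (24).

```python
import numpy as np

def weak_classifier_step(probs, y_true, pi, R, eps=1e-4, rho=0.1):
    """Weight one weak classifier (Eqs. 22 and 24) and update the sample
    weights pi (Eqs. 27-29). probs: |Omega| x R softmax scores on labeled nodes."""
    pred = probs.argmax(1)
    wrong = pred != y_true
    err = (pi * wrong).sum() / pi.sum()                 # Eq. (22)
    err = np.clip(err, eps, 1.0 - eps)                  # guard the log in Eq. (24)
    alpha = 0.5 * np.log((1 - err) / err) + np.log(R - 1)
    p_top = probs[np.arange(len(pred)), pred]           # p_{i,r}: predicted class (assumption)
    eta = p_top / np.maximum(1.0 - p_top, eps)          # Eq. (29): the exp(log(.)) cancels
    pi = np.where(wrong, (1 + eta) * pi,                # Eq. (27)
                  np.maximum(1 - eta, rho) * pi)        # Eq. (28)
    return alpha, pi

def fuse_predictions(layer_probs, y_true, train_idx):
    """Fuse per-layer predictions into S (Eqs. 21 and 26), evaluate Eq. (30)."""
    R = layer_probs[0].shape[1]
    pi = np.full(len(train_idx), 1.0 / len(train_idx))  # pi_i = 1/|Omega|
    weights = []
    for probs in layer_probs:                           # GCL and GEL classifiers alike
        a, pi = weak_classifier_step(probs[train_idx], y_true, pi, R)
        weights.append(a)
    w = np.exp(weights)
    w = w / w.sum()                                     # softmax over weights, Eq. (26)
    S = sum(wi * p for wi, p in zip(w, layer_probs))    # Eq. (21)
    loss = -np.log(S[train_idx, y_true] + 1e-12).sum()  # Eq. (30)
    return S, loss
```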
In light of the previous analysis, both GCL and GEL are approximations of optimization problems w.r.t. graph regularization, owing to which they can be considered as two distinct types of layers. Hence, AGNN can be approximately regarded as an alternating optimization procedure for Problems (2) and (8). The difference between the two layers is that the former optimization performs graph convolutions, while the latter is a sparse graph-regularized projection from the original feature space. In a nutshell, the proposed AGNN is a block-wise graph neural network that simultaneously considers cross-layer connections and the aggregation of multi-hop information, which is beneficial to obtaining reliable high-order neighborhood embeddings before conducting information fusion. The primary differences to existing models are summarized as follows: 1. Different from methods that directly combine node embeddings from the outputs of varied layers (e.g., AdaGCN [20]), AGNN gets rid of inaccurate predictions of deep layers via a periodic projection from the original feature space to latent embeddings. 2. Instead of the widely used additive connections from previous layers, AGNN establishes an optimization-inspired GEL module derived from the Laplacian-based graph regularization problem. This lets the initial features propagate to each GCL flexibly, with less information loss.

## IV Experimental Analysis

In this section, comprehensive experiments are conducted, including evaluations against several state-of-the-art models and ablation studies. All experiments are run on a platform with an AMD R9-5900X CPU, an NVIDIA GeForce RTX 3060 12G GPU and 32G RAM.

### _Experimental Setup_

For the following experiments, we compare the proposed AGNN with numerous methods. Apart from classical baselines (MLP and Chebyshev [41]), the other state-of-the-art methods can be divided into two categories: vanilla GNN-based models (GraphSAGE [42], GAT [43] and ScatteringGCN [18]), and multi-layer or high-order-information-based GCN methods (GCN [10], APPNP [26], JK-Net [19], SGC [38], ClusterGCN [37], GCNII [24], SSGC [35] and AdaGCN [20]). In particular, APPNP, SGC and SSGC propagate node information via their proposed high-order filters, where the order can be regarded as the number of layers of other multi-layer approaches. The compared models are described in detail as follows. 1. **MLP** is a classical baseline for classification, which is a multi-layer perceptron architecture with a softmax function as the classifier. 2. **Chebyshev** is a GCN-like baseline that adopts Chebyshev filters to perform graph convolutions with the given node features and the topology network. 3. **GCN** conducts a variant of convolution on the graph, which is exactly the first-order approximation of the Chebyshev polynomial. 4. **GraphSAGE** constructs a graph neural network that explores node embeddings through sampling and aggregating features from the local neighbors of a node. 5. **GAT** is a graph neural network adopting an attention mechanism to explore node attributes across the graph, which enables the implicit assignment of weights to distinct nodes in a neighborhood. 6. **JK-Net** dexterously exploits various neighborhood ranges of nodes via a jumping knowledge structure that considers residual connections.
7. **SGC** proposes a faster variant of GCN by successively removing nonlinearities and collapsing weight matrices between consecutive layers. 8. **APPNP** leverages personalized PageRank to improve the performance of GCN-like models, deriving an improved propagation scheme. 9. **ClusterGCN** is a GCN-based framework that samples a group of nodes by a graph clustering algorithm and alleviates the over-smoothing problem via a diagonal enhancement architecture. 10. **GCNII** is a variant of GCN with residual connections and identity mapping, which effectively alleviates the over-smoothing phenomenon. 11. **ScatteringGCN** builds an augmented GCN with geometric scattering transforms and residual convolutions to alleviate the over-smoothing issue. 12. **SSGC** develops a variant of GCN by adopting a modified Markov diffusion kernel, which explores the global and local contexts of nodes. 13. **AdaGCN** integrates learned knowledge from distinct layers of GCN in an AdaBoost way, updating layer weights iteratively.

In this paper, eight graph-structured datasets are adopted to evaluate the performance of the numerous methods, as listed below: 1. **Citeseer**1 is a benchmark dataset for literature citation networks, where nodes represent papers and edges represent citations between them. Footnote 1: [https://lings.soe.ucsc.edu/data](https://lings.soe.ucsc.edu/data) 2. **CoraFull**2 is the larger version of the Cora dataset, which is another well-known citation network. Herein, each node denotes a paper and each edge stands for a citation. All nodes are classified according to their topics. Footnote 2: [https://github.com/shchur/gm-benchmark/datasets](https://github.com/shchur/gm-benchmark/datasets) 3. **Chameleon**3 contains node relationships of a large number of articles on a topic of the English Wikipedia website, where edges represent mutual links among articles. Footnote 3: [https://github.com/benedekrozemberzki/MUSAE/](https://github.com/benedekrozemberzki/MUSAE/) 4. **BlogCatalog**4 includes a large number of bloggers and their social relationships from the website. Node features are extracted from the keywords of user information, and all bloggers are divided into 6 distinct types. Footnote 4: [https://networkrepository.com/soc-BlogCatalog.php](https://networkrepository.com/soc-BlogCatalog.php) 5. **ACM**5 is a paper network where each node denotes a paper. Different from citation networks, edges connect papers that share the same authors. Footnote 5: [https://github.com/hyb1993/HAN](https://github.com/hyb1993/HAN) 6. **Flickr**6 is a social network that records relationships among users of an image and video hosting website. All users are grouped into 9 categories on the basis of their personal interests. Footnote 6: [https://github.com/shtun2017/AM-GCN](https://github.com/shtun2017/AM-GCN) 7. **UAI**7 is a dataset for testing GCN on community detection, which is a webpage citation network. Nodes representing webpages are collected from multiple universities, and each edge denotes a citation. Footnote 7: [https://github.com/shtun2017/AM-GCN](https://github.com/shtun2017/AM-GCN) 8. **Actor**8 is a subgraph of the film-director-actor-writer network, which only includes the connections of various actors. Each edge represents the co-occurrence of two actors on the same Wikipedia page. Footnote 8: [https://github.com/CUAI/Non-Homophily-Large-Scale](https://github.com/CUAI/Non-Homophily-Large-Scale)
A statistical summary of these datasets is given in Table II. For fair comparison, and to avoid undesired influence from the data distribution, we shuffle all datasets and randomly select 20 labeled samples per class for training, 500 samples for validation and 1,000 samples for testing.

In order to provide a fair test bed for all compared methods, we list the main hyperparameters used in the experiments. The learning rates of these methods are fixed as \(0.01\) or \(0.005\), smaller values being preferred when more network layers are utilized. For all GNN-based methods, we fix the number of hidden units at each layer as 128 or 16. Other method-specific hyperparameters are fixed to their settings in the original papers. As for the proposed AGNN, we also apply the same hidden-layer sizes as the compared methods. The learning rates are likewise selected from 0.01 and 0.005. In general, a deeper AGNN requires a smaller learning rate, and we adopt learning rate adaptation by decreasing it when the loss does not drop for a period of training epochs. The Adam optimizer is adopted and the weight decay is fixed as \(5\times 10^{-4}\). The activation function \(\sigma(\cdot)\) is \(\text{tanh}(\cdot)\) for the weak classifiers and \(\text{ReLU}(\cdot)\) for GCL. As for the thresholds in the MSReLU function of GEL, we fix them as \(\theta_{1}=0.02\) and \(\theta_{2}=0.04\). For the AdaBoost strategy, the tiny value in Eq. (29) is fixed as \(10^{-4}\).

### _Experimental Results_

#### IV-B1 Performance comparison

First of all, we compare the performance of the proposed AGNN with all selected approaches. Table III exhibits the semi-supervised classification accuracy on the eight datasets. In order to conduct an ablation study and validate the effectiveness of the designed network structure, we further examine the performance of AGNN without the AdaBoost framework (dubbed AGNN w/o AdaBoost), which does not aggregate the embeddings of all network layers but directly outputs the predictions of the final GEL. Because multi-layer or multi-order information-based models aim to improve GCN by mining information from deep layers, we record the highest accuracy of these models and the corresponding numbers of layers. The optimal numbers of layers or orders of neighbors are shown in brackets. In order to validate the statistical significance of the experimental results, we follow [12] and adopt the Friedman test. The average ranks of all compared models are recorded in the last column of Table III, on the basis of which we obtain the Friedman test statistic \(F_{F}=10.57\). With \(15\) compared models and \(8\) test datasets, the critical value is \(1.794\) for \(\alpha=0.05\), which indicates that \(F_{F}\) is higher than the critical value. Thus, we can reject the null hypothesis, which points out that the performance of the compared methods is significantly different at a \(95\%\) confidence level. From the experimental results, we draw the following conclusions. First, the proposed AGNN attains encouraging performance and outperforms the other methods by a considerable margin on most datasets. Second, AGNN obtains its optimal classification accuracy with more layers: in most cases, AGNN achieves the best performance with more than 6 layers.
Although other compared methods sometimes also achieve better performance with more layers, 2 or 4 layers are still the best choice on most datasets. Last but not least, AGNN w/o AdaBoost obtains competitive classification results and sometimes reaches higher accuracy with over 10 layers (on the Flickr, UAI and Actor datasets). This phenomenon indicates that AGNN w/o AdaBoost preserves the discrimination of node embeddings and the reliability of deep layers. From the ablation study, we find that AGNN performs favorably compared with AGNN w/o AdaBoost, which indicates the effectiveness of the proposed improved AdaBoost.

\begin{table} \begin{tabular}{l|c c c c c c c c} \hline \hline Datasets & \# Nodes & \# Edges & \# Features & \# Classes & \# Train & \# Valid & \# Test & Data types \\ \hline Citeseer & 3,327 & 4,732 & 3,703 & 6 & 120 & 500 & 1,000 & Citation network \\ CoraFull & 19,793 & 63,421 & 8,710 & 70 & 1,400 & 500 & 1,000 & Citation network \\ Chameleon & 2,277 & 18,050 & 2,325 & 5 & 100 & 500 & 1,000 & Link network \\ BlogCatalog & 5,196 & 171,743 & 8,189 & 6 & 120 & 500 & 1,000 & Social network \\ ACM & 3,025 & 13,128 & 1,870 & 3 & 60 & 500 & 1,000 & Paper network \\ Flickr & 7,575 & 239,738 & 12,047 & 9 & 180 & 500 & 1,000 & Social network \\ UAI & 3,067 & 28,311 & 4,973 & 19 & 380 & 500 & 1,000 & Citation network \\ Actor & 7,600 & 15,009 & 932 & 5 & 100 & 500 & 1,000 & Social network \\ \hline \hline \end{tabular} \end{table} TABLE II: Brief statistics of all graph datasets and data split modes.

\begin{table} \begin{tabular}{l|c c c c c c c c|c} \hline \hline Methods / Datasets & Citeseer & CoraFull & Chameleon & BlogCatalog & ACM & Flickr & UAI & Actor & Avg Ranks \\ \hline MLP & 0.366 & 0.051 & 0.286 & 0.646 & 0.812 & 0.431 & 0.188 & 0.224 & 13.3 \\ Chebyshev [41] & 0.693 & 0.534 & 0.217 & 0.357 & 0.829 & 0.304 & 0.215 & 0.182 & 13.4 \\ \hline GraphSAGE [42] & 0.620 & 0.521 & 0.437 & 0.525 & 0.886 & 0.286 & 0.483 & 0.191 & 13.0 \\ GAT [43] & 0.683 & 0.571 & 0.460 & 0.681 & 0.889 & 0.429 & 0.597 & 0.246 & 7.38 \\ ScatteringGCN [18] & 0.679 & 0.519 & 0.410 & 0.690 & 0.890 & 0.419 & 0.364 & 0.214 & 11.4 \\ GCN [10] & 0.697 (2) & 0.567 (2) & 0.447 (2) & 0.711 (2) & 0.875 (2) & 0.414 (2) & 0.498 (2) & 0.240 (4) & 9.25 \\ JK-Net [19] & 0.703 (4) & 0.568 (2) & 0.475 (20) & 0.747 (16) & 0.892 (8) & 0.547 (2) & 0.494 (18) & 0.224 (18) & 6.38 \\ APPNP [26] & 0.698 (4) & 0.576 (4) & 0.404 (2) & 0.813 (8) & 0.885 (4) & 0.521 (2) & 0.560 (2) & 0.212 (4) & 7.75 \\ SGC [38] & 0.697 (10) & 0.583 (2) & 0.445 (2) & 0.716 (2) & 0.887 (2) & 0.410 (2) & 0.571 (2) & 0.247 (4) & 7.50 \\ ClusterGCN [37] & 0.681 (2) & 0.576 (2) & 0.449 (2) & 0.731 (2) & 0.893 (2) & 0.483 (2) & 0.525 (2) & 0.239 (8) & 6.75 \\ GCNII [24] & **0.710 (12)** & 0.576 (18) & 0.449 (16) & 0.845 (12) & 0.901 (12) & 0.545 (12) & 0.619 (14) & 0.238 (10) & 3.50 \\ SSGC [35] & 0.702 (20) & 0.575 (4) & 0.446 (2) & 0.760 (2) & 0.889 (2) & 0.478 (2) & 0.523 (10) & 0.248 (2) & 6.50 \\ AdaGCN [20] & 0.663 (2) & 0.587 (10) & 0.479 (4) & 0.800 (2) & 0.894 (2) & 0.552 (2) & 0.588 (2) & 0.230 (2) & 4.88 \\ \hline AGNN w/o AdaBoost & 0.689 (2) & 0.564 (2) & 0.440 (2) & 0.766 (10) & 0.888 (2) & 0.503 (20) & 0.574 (10) & 0.254 (16) & 7.13 \\ AGNN & 0.707 (6) & **0.589 (6)** & **0.503 (14)** & **0.849 (10)** & **0.903 (8)** & **0.584 (4)** & **0.647 (6)** & **0.256 (6)** & **1.13** \\ \hline \hline \end{tabular} \end{table} TABLE III: Performance (accuracy) comparison with 20 labeled samples per class as supervision signals, where the highest accuracy is highlighted in
boldface. The last column shows the average ranks of the different methods. For multi-layer or multi-order information-based models, the optimal layer numbers or orders are recorded in brackets.

In addition, the experimental results point out that the optimal layer numbers of AGNN are not always higher than those of AGNN w/o AdaBoost. This may be owing to the fact that the aggregation process enables the model to obtain competitive accuracy with fewer layers. Besides, deeper models do not always mine more information on some datasets, which depends on the topological structure of the datasets. However, it is significant that AGNN obtains higher accuracy with more layers on some datasets, especially on Chameleon and ACM. In a nutshell, these observations reveal that the performance lead of AGNN is significant with larger numbers of layers.

#### IV-B2 Performance with deep layers

Because the proposed AGNN aims to tackle the over-smoothing issue and extract more distinctive characteristics with deep layers, we further conduct comparative experiments on some multi-layer or multi-order information-based GCN methods to explore accuracy trends with varying numbers of layers. Table IV demonstrates the classification accuracy of the selected methods with 20 labeled nodes for each class, from which we make the following observations. As most existing works have analyzed, GCN encounters a dramatic accuracy plunge with over 2 graph convolutional layers on all tested datasets. In contrast, the performance decline of other compared methods is not as severe as GCN's, and some of them even gain marginal performance improvements as the number of layers rises. Nevertheless, several compared approaches still attain their highest accuracy with a 2-layer architecture, and their performance may dwindle as the number of layers grows. In general, AdaGCN, which also integrates multi-hop node embeddings, behaves favorably on most datasets. Nonetheless, as we have discussed, it still suffers from indistinguishable node features in deep layers on some datasets (e.g., BlogCatalog), owing to which deep AdaGCN leads to unsatisfactory performance. We can see that AGNN achieves competitive performance with fewer layers, and often outperforms other models with more stacked layers. Above all, the proposed model maintains accuracy at a high level with more layers, and a suitable multi-layer AGNN is helpful for exploring representative high-order node features.
\begin{table} \begin{tabular}{c|l|c c c c c c c c c c} \hline \hline Datasets & Models & 2-layer & 4-layer & 6-layer & 8-layer & 10-layer & 12-layer & 14-layer & 16-layer & 18-layer & 20-layer \\ \hline \multirow{10}{*}{CoraFull} & GCN [10] & **0.567*** & 0.495 & 0.451 & 0.443 & 0.408 & 0.376 & 0.332 & 0.204 & 0.119 & 0.019 \\ & JK-Net [19] & **0.568*** & 0.534 & 0.531 & 0.493 & 0.553 & 0.506 & 0.456 & 0.523 & 0.527 & 0.530 \\ & APPNP [26] & 0.569 & **0.576** & 0.560 & 0.561 & 0.550 & 0.556 & 0.552 & 0.549 & 0.547 & 0.544 \\ & SGC [38] & **0.583*** & **0.576** & 0.562 & 0.551 & 0.534 & 0.512 & 0.495 & 0.474 & 0.441 & 0.418 \\ & ClusterGCN [37] & **0.576*** & 0.518 & 0.494 & 0.475 & 0.442 & 0.391 & 0.337 & 0.264 & 0.257 & 0.214 \\ & GCNII [24] & 0.539 & 0.536 & 0.558 & 0.565 & 0.568 & 0.571 & 0.568 & 0.574 & **0.576*** & 0.565 \\ & SSGC [35] & 0.572 & **0.575*** & 0.572 & 0.561 & 0.562 & 0.561 & 0.541 & 0.564 & 0.562 & 0.537 \\ & AdaGCN [20] & 0.552 & 0.553 & 0.571 & 0.573 & **0.587*** & **0.579** & **0.586** & 0.575 & 0.564 & 0.535 \\ & AGNN w/o AdaBoost & **0.564*** & 0.544 & 0.532 & 0.554 & 0.544 & 0.554 & 0.523 & 0.536 & 0.545 & 0.541 \\ & AGNN & 0.570 & 0.574 & **0.589*** & **0.583** & 0.580 & 0.568 & 0.565 & **0.577** & **0.584** & **0.574** \\ \hline \multirow{10}{*}{BlogCatalog} & GCN [10] & **0.697*** & 0.548 & 0.231 & 0.125 & 0.142 & 0.154 & 0.187 & 0.164 & 0.159 & 0.171 \\ & JK-Net [19] & 0.725 & 0.711 & 0.693 & 0.711 & 0.670 & 0.724 & 0.696 & **0.747*** & 0.668 & 0.698 \\ & APPNP [26] & 0.791 & 0.810 & 0.811 & **0.813*** & 0.809 & 0.806 & 0.809 & 0.811 & 0.805 & 0.804 \\ & SGC [38] & **0.716*** & 0.616 & 0.490 & 0.394 & 0.313 & 0.238 & 0.232 & 0.225 & 0.220 & 0.237 \\ & ClusterGCN [37] & **0.731*** & 0.542 & 0.395 & 0.256 & 0.171 & 0.192 & 0.182 & 0.176 & 0.172 & 0.171 \\ & GCNII [24] & **0.816** & 0.813 & 0.799 & 0.802 & 0.843 & **0.845*** & 0.810 & **0.838** & 0.801 & 0.796 \\ & SSGC [35] & **0.760*** & 0.744 & 0.744 & 0.736 & 0.683 & 0.728 & 0.726 & 0.723 & 0.722 & 0.661 \\ & AdaGCN [20] & **0.800*** & 0.723 & 0.678 & 0.682 & 0.681 & 0.684 & 0.678 & 0.688 & 0.684 & 0.688 \\ & AGNN w/o AdaBoost & 0.762 & 0.745 & 0.746 & 0.741 & **0.766*** & 0.736 & 0.737 & 0.751 & 0.754 & 0.748 \\ & AGNN & 0.720 & **0.824** & **0.824** & 0.805 & **0.849*** & **0.815** & **0.820** & 0.814 & **0.808** & **0.814** \\ \hline \multirow{10}{*}{Flickr} & GCN [10] & **0.414*** & 0.127 & 0.161 & 0.091 & 0.100 & 0.092 & 0.089 & 0.091 & 0.094 & 0.095 \\ & JK-Net [19] & **0.547*** & 0.421 & 0.392 & 0.418 & 0.422 & 0.409 & 0.439 & 0.445 & 0.343 & 0.345 \\ & APPNP [26] & **0.521*** & 0.502 & 0.461 & 0.485 & 0.465 & 0.475 & 0.468 & 0.474 & 0.487 & 0.464 \\ & SGC [38] & **0.410*** & 0.337 & 0.220 & 0.197 & 0.154 & 0.179 & 0.167 & 0.160 & 0.155 & 0.154 \\ & ClusterGCN [37] & **0.483*** & 0.398 & 0.314 & 0.322 & 0.217 & 0.198 & 0.184 & 0.143 & 0.112 & 0.103 \\ & GCNII [24] & 0.489 & 0.499 & 0.514 & 0.511 & 0.530 & **0.545*** & 0.523 & 0.538 & 0.524 & 0.524 \\ & SSGC [35] & **0.478*** & 0.433 & 0.388 & 0.356 & 0.340 & 0.328 & 0.320 & 0.315 & 0.309 & 0.304 \\ & AdaGCN [20] & **0.552*** & 0.546 & **0.539** & **0.539** & 0.542 & **0.545** & **0.545** & **0.545** & 0.544 & 0.546 \\ & AGNN w/o AdaBoost & 0.481 & 0.494 & 0.488 & 0.490 & 0.495 & 0.494 & 0.499 & 0.495 & 0.493 & **0.503*** \\ & AGNN & **0.560** & **0.584*** & 0.529 & 0.521 & **0.543** & 0.511 & 0.522 & 0.535 & **0.545** & **0.557** \\ \hline \multirow{10}{*}{UAI} & GCN [10] & **0.498*** & 0.301 & 0.195 & 0.202 & 0.175 & 0.186 & 0.192 & 0.123 & 0.109 & 0.080
\\ & JK-Net [19] & 0.474 & 0.467 & 0.4 \\ \hline \hline \end{tabular} \end{table} TABLE IV: Classification accuracy of the selected multi-layer and multi-order methods with varying numbers of layers, using 20 labeled nodes per class.

As for AGNN w/o AdaBoost, although it does not always outperform the other models, it succeeds in lessening the negative influence of over-smoothing and performs satisfactorily on all tested datasets. We also visualize the performance trends of the compared methods with even more layers (32 and 64) in Figure 4, which intuitively shows the ability of the compared models to overcome over-smoothing. AGNN generally performs the best with deeper layers. We find that AGNN still gains marginal improvements or remains stable with 32/64 layers, which indicates that it gets rid of over-smoothing. Generally, AGNN with no more than 20 layers can achieve the optimal accuracy, as recorded in Table III. In conclusion, these experimental results point out that the proposed AGNN has a powerful ability to mine underlying node embeddings with a deep network architecture.

#### IV-B3 Weak classifier weight distribution

In this section, we explore the assigned weights of the weak classifiers in the proposed method with varying numbers of layers, as shown in Figure 5. The weight assignments demonstrate that shallow layers account for a significant portion of the final predictions, indicating that most nodes can be classified effectively by extracting representations of one or two hops of neighbors. In general, the top 4 layers (top 2 blocks) of AGNN play the most critical role in the final prediction, and the remaining layers complement the prediction with more high-order information. Figure 5 reveals that AGNN achieves the best performance with 8 layers on both selected datasets, indicating that multi-layer models are essential for improving accuracy by exploring remote neighbors. Although Figure 5 shows that AGNN with more than 8 layers is not the optimal selection, the improved AdaBoost can maintain the classification accuracy of extremely deep networks by assigning tiny weights to deep layers if most nodes have already been correctly classified by shallow layers. In a word, a multi-layer architecture often benefits the embedding learning, and AGNN leverages high-order information as well as possible.

#### IV-B4 Model analysis

In this section, we further analyze the proposed model. First, the impact of the hyperparameters used in AGNN is discussed. The accuracy changes w.r.t. \(\lambda\) and \(\rho\) on all datasets are demonstrated in Figure 6, from which we find that the performance of AGNN fluctuates moderately, and a suitable choice of the two parameters is crucial on most datasets. Overall, AGNN is robust to varied hyperparameters on the Citeseer and ACM datasets. Although the optimal selections of hyperparameters differ on the other datasets, small values of \(\lambda\) and \(\rho\) often lead to undesired performance, especially on the CoraFull, Chameleon, UAI and BlogCatalog datasets. In the previous experiments, we selected the optimal combination of these two hyperparameters to obtain better experimental results. Furthermore, we validate the effectiveness of the designed activation function MSReLU in GEL, as exhibited in Table V. All parameter settings except those of the compared activation functions are the same. We also evaluate the performance of AGNN with the identity function and the ReLU function. Note that ReLU only preserves non-negative entries of the matrix.
Experimental results indicate that the MSReLU function succeeds in promoting classification accuracy compared with the other activation functions, attributed to its ability to keep sparse outputs closer to the original features. Sometimes, AGNN with ReLU encounters a severe performance decline (e.g., on the Flickr and UAI datasets). This is because it discards negative entries of the feature matrix, which often results in information loss. In a word, these observations suggest that a suitable MSReLU function benefits the learning of more accurate and robust node embeddings.

#### IV-B5 Convergence analysis

Convergence curves of the proposed AGNN on the BlogCatalog, Flickr, Actor and Chameleon datasets are demonstrated in Figure 7. These curves indicate that the loss values of AGNN drop as the number of iterations increases and finally converge. Although the loss values may sometimes fluctuate, the overall trends of the curves are suggestive of convergence. The fluctuation during training is caused by the AdaBoost strategy, which reassigns sample weights at each iteration. Nevertheless, the loss values are stable and converge eventually. The figure also shows that AGNN with shallow layers generally converges more quickly than with deep layers, due to the larger solution space caused by more trainable parameters. Overall, AGNN with all numbers of layers converges to similar points. However, AGNN with deeper layers generally reaches lower values of the cross-entropy, indicating its ability to explore multi-hop embeddings. It is noteworthy that AGNN with deep layers does not always correspond to better convergence, attributed to the various data distributions of the different datasets.

\begin{table} \begin{tabular}{l|c c c c} \hline \hline Methods / Datasets & BlogCatalog & Flickr & UAI & Chameleon \\ \hline AGNN + IF & 0.805 & 0.568 & 0.615 & 0.450 \\ AGNN + ST & 0.813 & 0.561 & 0.612 & 0.437 \\ AGNN + ReLU & 0.815 & 0.411 & 0.594 & 0.458 \\ AGNN + MSReLU & **0.824** & **0.584** & **0.630** & **0.480** \\ \hline \hline \end{tabular} \end{table} TABLE V: Impact of the Identity Function (IF), ST, ReLU and MSReLU in GEL, where \(\theta=0.02\) (ST), and \(\theta_{1}=0.02\), \(\theta_{2}=0.04\) (MSReLU). Layer numbers are fixed as 4.

Fig. 4: Performance of baselines and the proposed AGNN with 2/4/8/16/32/64 layers.

Fig. 5: Weight distribution of AGNN with 4/8/12/16 layers for each weak classifier on the CoraFull and UAI datasets.

Fig. 6: Parameter sensitivity of AGNN w.r.t. \(\lambda\) and \(\rho\) on various datasets.

Fig. 7: Training convergence curves of AGNN with numbers of layers ranging in \(\{4,8,\dots,20\}\) on the BlogCatalog, Flickr, Actor and Chameleon datasets.

## V Conclusion

In this paper, we proposed an Alternating Graph-regularized Neural Network to improve the performance of GCN on semi-supervised node classification tasks, coping with the over-smoothing issue that occurs in most GCN-based models. We first reviewed the concept of GCN and validated that it is an approximation of a graph-regularized optimization problem. Next, we elaborated on the proposed GEL, which is derived from another graph-regularized optimization objective formulating the transformation from the original feature space to the intermediate graph embedding space at each layer. Therefore, GEL allows the model to carry low-order information from the input to deep layers. Theoretically, the proposed AGNN alternately propagates node information on the basis of two graph-constrained problems.
Furthermore, an improved AdaBoost strategy was leveraged to integrate the hidden graph representations from all layers. Owing to the more reliable and distinguishable node embeddings learned by GCL and GEL, this strategy obtains more accurate predictions. Extensive experiments validated that the proposed method succeeds in promoting the performance of GCN with deeper layers. In the future, we will further investigate multi-layer GCNs with techniques such as attention mechanisms and residual networks.
2307.08663
Quaternion Convolutional Neural Networks: Current Advances and Future Directions
Since their first applications, Convolutional Neural Networks (CNNs) have solved problems that have advanced the state-of-the-art in several domains. CNNs represent information using real numbers. Despite encouraging results, theoretical analysis shows that representations such as hyper-complex numbers can achieve richer representational capacities than real numbers, and that Hamilton products can capture intrinsic interchannel relationships. Moreover, in the last few years, experimental research has shown that Quaternion-Valued CNNs (QCNNs) can achieve similar performance with fewer parameters than their real-valued counterparts. This paper condenses research in the development of QCNNs from their very beginnings. We propose a conceptual organization of current trends and analyze the main building blocks used in the design of QCNN models. Based on this conceptual organization, we propose future directions of research.
Gerardo Altamirano-Gomez, Carlos Gershenson
2023-07-17T17:27:06Z
http://arxiv.org/abs/2307.08663v1
# Quaternion Convolutional Neural Networks: Current Advances and Future Directions

###### Abstract

Since their first applications, Convolutional Neural Networks (CNNs) have solved problems that have advanced the state-of-the-art in several domains. CNNs represent information using real numbers. Despite encouraging results, theoretical analysis shows that representations such as hyper-complex numbers can achieve richer representational capacities than real numbers, and that Hamilton products can capture intrinsic interchannel relationships. Moreover, in the last few years, experimental research has shown that Quaternion-Valued CNNs (QCNNs) can achieve similar performance with fewer parameters than their real-valued counterparts. This paper condenses research in the development of QCNNs from their very beginnings. We propose a conceptual organization of current trends and analyze the main building blocks used in the design of QCNN models. Based on this conceptual organization, we propose future directions of research.

deep learning, quaternion algebra, computer vision, natural language processing

## I Introduction

In the last decade, the use of deep learning models has become ubiquitous for solving difficult and open problems in science and engineering. Convolutional Neural Networks (CNNs) were one of the first deep learning models [1, 2, 3], and their success in tackling the large-scale object recognition and classification problem (ImageNet challenge) [4] led to their application in other domains. The core components of a CNN architecture are the convolution and pooling layers. A convolution layer is a variation of a fully connected layer (perceptron), as shown in Figure 1; in the former, a weight-sharing mechanism over locally connected inputs is applied [5]. This technique is inspired by the local receptive fields discovered by Hubel and Wiesel in their experiments with macaques [6]. Formally speaking, the convolution layer applies the mathematical definition of convolution between discrete signals; thus, for a bi-dimensional input: \[\mathbf{D}=[d(x,y)]\in\mathbb{R}^{N_{1}\times N_{2}}, \tag{1}\] the convolution with a kernel: \[\mathbf{W}=[w(x,y)]\in\mathbb{R}^{M_{1}\times M_{2}}, \tag{2}\] is defined as follows: \[\mathbf{F}=\mathbf{W}*\mathbf{D}=\sum_{r=-\infty}^{\infty}\sum_{s=-\infty}^{\infty}\left[w(r,s)d(x-r,y-s)\right]; \tag{3}\] thus, \(\mathbf{F}\in\mathbb{R}^{(N_{1}-M_{1}+1)\times(N_{2}-M_{2}+1)}\) is called a _feature map_. The convolution layer is typically followed by a pooling layer; this provides a sort of local invariance to small rotations and translations of the input features [7, 8]. Moreover, T. Poggio proved that the combination of convolution and pooling layers produces a signature invariant to a group of geometric transformations [9, 10, 11]. However, in the design of very deep architectures, researchers have encountered some difficulties, e.g., reducing the number of parameters without losing generalization, and finding fast optimization methods for adjusting millions of parameters while avoiding the vanishing and exploding gradient problems [12, 13]. Fundamental theoretical as well as experimental analyses have shown that some algebraic systems, different from the real numbers, have the potential to solve these problems. For example, using a complex-number representation avoids local minima caused by the hierarchical structure [14], exhibits better generalization [15], and enables faster learning [16].
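As a point of reference for the quaternion-valued layers discussed later, the following is a minimal NumPy sketch of the real-valued 2D convolution of Eq. (3) with 'valid' boundary handling; note that most deep learning libraries actually implement cross-correlation (no kernel flip) under the name convolution.

```python
import numpy as np

def conv2d_valid(D, W):
    """Direct implementation of Eq. (3) with 'valid' boundaries:
    the output has shape (N1 - M1 + 1, N2 - M2 + 1)."""
    N1, N2 = D.shape
    M1, M2 = W.shape
    F = np.zeros((N1 - M1 + 1, N2 - M2 + 1))
    Wf = W[::-1, ::-1]  # flip the kernel: true convolution, not cross-correlation
    for x in range(F.shape[0]):
        for y in range(F.shape[1]):
            F[x, y] = np.sum(Wf * D[x:x + M1, y:y + M2])
    return F
```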
Because of the Cayley-Dickson construction [17], it could be inferred that these properties would hold for quaternion-valued neural networks. Recent experimental work has favored this conjecture: quaternion-valued neural networks show a reduction in the number of parameters and improved classification accuracies compared to their real-valued counterparts [18, 19, 20, 21, 22, 23, 24]. In addition, a quaternion representation can deal with four-dimensional signals as single entities [25, 26, 27, 28, 29], efficiently models 3D transformations [30, 31], and captures internal latent relationships via the Hamilton product [32], among other properties. Because of the diversity of deep learning models, this paper focuses on those using quaternion convolution as the main component. Consequently, we have identified three dominant conceptual trends: the classic, the geometric, and the equivariant approaches. These differ in the definition and interpretation of the quaternion convolution layer. The main contributions of this paper can be summarized as follows: 1. This paper presents a classification of QCNN models based on the definition of quaternion convolution. 2. This paper provides a description of all atomic components needed for implementing QCNN models, the motivation behind each component, the challenges when they are applied in the quaternion domain, and future directions of research for each component. 3. This paper presents an organized overview of the models that have been found in the literature. They are organized by application domain, classified as classic, geometric, or equivariant, and presented by type of model: recurrent, residual, ConvNet, generative, or CAE. The organization of the paper is as follows: In Section II, we introduce fundamental concepts of quaternion algebra; in Section III we explain the seminal works on each of the three conceptual trends (classic, geometric, and equivariant), followed by a presentation of the key atomic components used to construct QCNN architectures in Section IV. Thereafter, in Section V, we show a classification of current works by application and present the diverse types of architectures. Finally, Section VI presents open issues and guidelines for future work, followed by the conclusions in Section VII.

## II Quaternion algebra

This mathematical system was developed by W.R. Hamilton (1805-1865) in the middle of the XIX century [33, 34, 35]. His work on this subject started by exploring ratios between geometric elements; consequently, he called the quotient of two vectors a _quaternion_. He also noticed that for two similar triangles lying on a common plane, \(AOB\) and \(COD\), which are similarly turned (see Figure 2a), the following equality holds [34, pp. 112]: \[\vec{OB}:\vec{OA}=\vec{OD}:\vec{OC} \tag{4}\] or, expressed as _geometric fractions_: \[\frac{\vec{OB}}{\vec{OA}}=\frac{\vec{OD}}{\vec{OC}}, \tag{5}\] where \(\vec{OA}\), \(\vec{OB}\), \(\vec{OC}\), and \(\vec{OD}\) are the vectors from \(O\) to the points \(A\), \(B\), \(C\), and \(D\), respectively. Then, by making the triangle \(COD\) into \(BOA^{\prime}\) (see Figure 2b), where \(\vec{OA}\) and \(\vec{OB}\) represent any two equally long vectors, we obtain the following relationship [34, pp.
130]: \[\frac{\vec{OB}}{\vec{OA}}=\frac{\vec{OA^{\prime}}}{\vec{OB}} \tag{6}\] Multiplying by \(\vec{OB}/\vec{OA}\): \[\left(\frac{\vec{OB}}{\vec{OA}}\right)^{2}=\left(\frac{\vec{OA^{\prime}}}{\vec{OB}}\right)\left(\frac{\vec{OB}}{\vec{OA}}\right)=\frac{\vec{OA^{\prime}}}{\vec{OA}} \tag{7}\] Since \(\vec{OA}\) and \(\vec{OA^{\prime}}\) have the same magnitude but opposite directions, \(\vec{OA}=-\vec{OA^{\prime}}\), we obtain: \[\left(\frac{\vec{OB}}{\vec{OA}}\right)^{2}=-1. \tag{8}\] The quotient of two perpendicular, equally long vectors, as in this case, is called a _right radial quaternion_. Since no particular assumption was made about the quotient in Equation (8) (e.g., the plane of that quotient is arbitrary), Hamilton concluded that every right radial quaternion is one of the square roots of negative unity [34, pp. 131]. In addition, for a quaternion \(\textbf{q}=\vec{v_{2}}:\vec{v_{1}}\), where \(\vec{v_{1}}\) and \(\vec{v_{2}}\) are vectors, we can rewrite it as \(\textbf{q}\vec{v_{1}}=\vec{v_{2}}\). In this case, **q** is called a _versor_, i.e., an element that transforms \(\vec{v_{1}}\) into \(\vec{v_{2}}\) by rotating it [34, pp. 133]. In this way, these results connected the quaternion representation with the root \(\sqrt{-1}\) and its geometric meaning.

Fig. 1: Fully connected layer (a) vs. convolutional layer (b). For an input array of \(2\times 3\) elements, the perceptron uses \(12\) weights and \(24\) connections (not locally connected), while the convolution layer uses \(4\) weights and \(8\) connections. In the convolutional layer, we have a reduction in the number of parameters and connections, because a single weight is connected to several inputs, and the weights are applied over locally connected inputs.

Now, considering Figure 3, let \(\textbf{q}=\vec{OB}:\vec{OA}\) be a quaternion, and let \(\vec{OB^{\prime}}\) and \(\vec{OB^{\prime\prime}}\) be vectors parallel and perpendicular to \(\vec{OA}\), respectively, such that \(\vec{OB}=\vec{OB^{\prime}}+\vec{OB^{\prime\prime}}\). Then, we can decompose the quaternion **q** into: \[\textbf{q}=\vec{OB^{\prime}}:\vec{OA}+\vec{OB^{\prime\prime}}:\vec{OA} \tag{9}\] Since \(\vec{OB^{\prime}}\) and \(\vec{OA}\) are parallel vectors, their quotient is just a scale factor, so the first term turns into a scalar. The second term involves \(\vec{OB^{\prime\prime}}\), the projection of \(\vec{OB}\) onto the plane through \(O\) that is perpendicular to \(\vec{OA}\); in addition, this means \(\vec{OB^{\prime\prime}}\) can be obtained from \(\vec{OA}\) by applying a versor transformation. Since \(\vec{OB^{\prime\prime}}\) and \(\vec{OA}\) are perpendicular to each other, their quotient is called a _right quaternion_ (note that it differs from a right radial quaternion in the length of the vectors), and can be expressed as a linear combination of right radial quaternions (versors). This leads to the well-known result that every quaternion can be reduced to the quadrinomial form [34, pp. 160]: \[\textbf{q}=q_{R}+q_{I}\hat{i}+q_{J}\hat{j}+q_{K}\hat{k}, \tag{10}\] where \(q_{R},q_{I},q_{J},q_{K}\) are scalars, and \(\hat{i},\hat{j},\hat{k}\) are three right versors.
In terms of modern mathematics, the quaternion algebra, \(\mathbb{H}\), is the 4-dimensional vector space over the field of the real numbers, generated by the basis \(\{1,\hat{i},\hat{j},\hat{k}\}\), and endowed with the following multiplication rules (Hamilton product): \[(1)(1)=1\] \[(1)(\hat{i})=\hat{j}\hat{k}=-\hat{k}\hat{j}=\hat{i}\] \[(1)(\hat{j})=\hat{k}\hat{i}=-\hat{i}\hat{k}=\hat{j}\] \[(1)(\hat{k})=\hat{i}\hat{j}=-\hat{j}\hat{i}=\hat{k}\] \[\hat{i}^{2}=\hat{j}^{2}=\hat{k}^{2}=-1 \tag{11}\] Therefore, this definition makes the quaternion algebra associative and non-commutative. Thus, for two arbitrary quaternions \(\textbf{p}=p_{R}+p_{I}\hat{i}+p_{J}\hat{j}+p_{K}\hat{k}\) and \(\textbf{q}=q_{R}+q_{I}\hat{i}+q_{J}\hat{j}+q_{K}\hat{k}\), their multiplication is calculated as follows: \[\textbf{p}\textbf{q}=p_{R}q_{R}-p_{I}q_{I}-p_{J}q_{J}-p_{K}q_{K}+(p_{R}q_{I}+p_{I}q_{R}+p_{J}q_{K}-p_{K}q_{J})\hat{i}+(p_{R}q_{J}-p_{I}q_{K}+p_{J}q_{R}+p_{K}q_{I})\hat{j}+(p_{R}q_{K}+p_{I}q_{J}-p_{J}q_{I}+p_{K}q_{R})\hat{k} \tag{12}\] Notice that each coefficient of the resulting quaternion is composed of real and imaginary parts of both factors **p** and **q**. In this way, the Hamilton product captures interchannel relationships between the factors. Next, we introduce some useful operations on quaternions. Let \(\textbf{q}=q_{R}+q_{I}\hat{i}+q_{J}\hat{j}+q_{K}\hat{k}\) be a quaternion; its _conjugate_ is defined as: \[\bar{\textbf{q}}=q_{R}-q_{I}\hat{i}-q_{J}\hat{j}-q_{K}\hat{k} \tag{13}\] and its magnitude is computed as follows: \[\|\textbf{q}\|=\sqrt{\textbf{q}\bar{\textbf{q}}} \tag{14}\] Like complex numbers, quaternions can be represented in polar form [36, 37]: \[\textbf{q}=\|\textbf{q}\|\left[\cos(\theta)+\sin(\theta)\frac{q_{I}\hat{i}+q_{J}\hat{j}+q_{K}\hat{k}}{\|q_{I}\hat{i}+q_{J}\hat{j}+q_{K}\hat{k}\|}\right], \tag{15}\] where: \[\theta=\arctan\left(\frac{\sqrt{q_{I}^{2}+q_{J}^{2}+q_{K}^{2}}}{q_{R}}\right). \tag{16}\] Even though other polar parametrizations have been proposed (see for example [38, 39, 25, 40]), current QCNNs apply the parametrization above. As was mentioned before, a quaternion can represent a geometric transformation. Let \(\textbf{w}_{\theta}\) be a unitary versor expressed in polar form: \[\textbf{w}_{\theta}=\cos(\theta)+\sin(\theta)(w_{I}\hat{i}+w_{J}\hat{j}+w_{K}\hat{k}), \tag{17}\] and let **q** be any quaternion; then, their multiplication \[\textbf{p}=\textbf{w}_{\theta}\textbf{q} \tag{18}\] applies a rotation with angle \(\theta\) about the axis \(\mathbf{w}=w_{I}\hat{i}+w_{J}\hat{j}+w_{K}\hat{k}\). Since we have not made any particular assumption on **q**, we are applying a rotation in the four-dimensional space of quaternions.

Fig. 2: Two similar triangles, turned and in a common plane. a) General configuration. b) Right triangles with equal-length catheti configuration. Adapted from [34].

Fig. 3: Given two vectors \(\vec{OA}\) and \(\vec{OB}\), we construct the following geometric configuration: plane \(\Pi\) is orthogonal to \(\vec{OA}\), vector \(\vec{OB^{\prime\prime}}\) lies on the plane \(OAB\) and is the projection of \(\vec{OB}\) into \(\Pi\), while vector \(\vec{OB^{\prime}}\) is the projection of \(\vec{OB}\) into \(\vec{OA}\). Adapted from [34].
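A minimal NumPy sketch of the Hamilton product of Eq. (12), together with the left multiplication of Eq. (18), is given below; the quaternion-as-4-array encoding and the choice of the \(\hat{k}\) axis for the example versor are illustrative.

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of Eq. (12); quaternions encoded as (R, I, J, K)."""
    pr, pi, pj, pk = p
    qr, qi, qj, qk = q
    return np.array([pr*qr - pi*qi - pj*qj - pk*qk,
                     pr*qi + pi*qr + pj*qk - pk*qj,
                     pr*qj - pi*qk + pj*qr + pk*qi,
                     pr*qk + pi*qj - pj*qi + pk*qr])

# Unit versor w_theta (Eq. 17) with rotation axis k, and the pure quaternion i:
theta = np.pi / 4
w = np.array([np.cos(theta), 0.0, 0.0, np.sin(theta)])
q = np.array([0.0, 1.0, 0.0, 0.0])
p = hamilton(w, q)   # Eq. (18): left multiplication
# p equals [0, cos(theta), sin(theta), 0]; the versor has rotated the
# i axis toward j inside the ij-plane.
```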
Denoting by \(\Pi_{1}\) the plane defined by the scalar axis and the vector \(w_{I}\hat{i}+w_{J}\hat{j}+w_{K}\hat{k}\), and by \(\Pi_{2}\) the plane perpendicular to \(w_{I}\hat{i}+w_{J}\hat{j}+w_{K}\hat{k}\), it can be proved that this 4-dimensional rotation is composed of a simultaneous rotation of the elements on plane \(\Pi_{1}\) and of the elements on plane \(\Pi_{2}\) [37]. Alternatively, we can split the transformation as a sandwiching product: \[\mathbf{p}=\mathbf{w}_{\frac{\theta}{2}}\mathbf{q}\mathbf{\bar{w}}_{\frac{\theta}{2}}; \tag{19}\] in this case, the angle of each versor, \(\mathbf{w}_{\theta/2}\), is halved. From the group theory perspective, the set of unitary versors lies on a 3-sphere, \(\mathbb{S}^{3}\), embedded in a 4D Euclidean space [30]; together with the Hamilton product, they form a group, which is isomorphic to the special unitary group SU(2) [41, 30], and pairs of unit quaternions acting on both sides realize the 4D rotation group SO(4) [37]. Moreover, there exists a two-to-one homomorphism onto the rotation group SO(3) [41, 30]. Another operation of interest is quaternion convolution; because quaternion multiplication is non-commutative, there are three different definitions of discrete quaternion convolution. First, left-sided quaternion convolution, defined as follows [42, 43, 44]: \[(\mathbf{w}\ast\mathbf{q})(x,y)=\sum_{r=-\frac{L}{2}}^{\frac{L}{2}}\sum_{s=- \frac{L}{2}}^{\frac{L}{2}}[\mathbf{w}(r,s)\mathbf{q}(x-r,y-s)] \tag{20}\] Second, right-sided quaternion convolution, defined as follows [43]: \[(\mathbf{q}\ast\mathbf{w})(x,y)=\sum_{r=-\frac{L}{2}}^{\frac{L}{2}}\sum_{s=- \frac{L}{2}}^{\frac{L}{2}}[\mathbf{q}(x-r,y-s)\mathbf{w}(r,s)] \tag{21}\] Third, we have two-sided quaternion convolution [43, 44, 45]: \[(\mathbf{w_{left}}*\mathbf{q}*\mathbf{w_{right}})(x,y)=\sum_{r=-\frac{L}{2}}^{\frac{L}{2}}\sum_{s=-\frac{L}{2}}^{\frac{ L}{2}}[\mathbf{w_{left}}(r,s)\mathbf{q}(x-r,y-s)\mathbf{w_{right}}(r,s)] \tag{22}\] where \(\mathbf{q},\mathbf{w},\mathbf{w_{left}},\mathbf{w_{right}}\in\mathbb{H}\), \(\ast\) represents the convolution operator, and the quaternion product is applied between the \(\mathbf{q}\)'s and the \(\mathbf{w}\)'s. Finally, Table I summarizes the notation that will be used in the rest of this paper.

## III Development of the classic, geometric, and equivariant approaches

The previous section presented the different definitions of quaternion convolution. From these definitions, different conceptual approaches can be obtained by setting restrictions on the quaternions, e.g. using unitary quaternions, or using just the real part of one of the quaternions. In the following paragraphs, we describe the seminal works that led to the development of the three main conceptual traits found in most of the current works on QCNNs. Since these approaches were not developed in an incremental manner, we focus on tracking the seminal ideas of each approach, describing what components were introduced, and specifying the domain of application in which they were tested. Thereafter, Section IV-A presents the formal definition of each approach. In the _classic approach_, the definition of quaternion convolution is a natural extension of real and complex convolution; its role is to compute the correlation between input data and kernel patterns in the quaternion domain.
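Before reviewing the individual works, a direct (and deliberately naive) rendering of the left-sided convolution of Equation (20) may make the operation concrete. This is a sketch with our own helper names, not code from any cited paper; as in deep-learning practice, the kernel is slid without flipping (correlation), and flipping the kernel recovers Equation (20) exactly:

```python
import numpy as np

def hprod(p, q):
    """Hamilton product for arrays of shape (..., 4) in [r, i, j, k] order (Eq. 12)."""
    pr, pi, pj, pk = np.moveaxis(p, -1, 0)
    qr, qi, qj, qk = np.moveaxis(q, -1, 0)
    return np.stack([pr * qr - pi * qi - pj * qj - pk * qk,
                     pr * qi + pi * qr + pj * qk - pk * qj,
                     pr * qj - pi * qk + pj * qr + pk * qi,
                     pr * qk + pi * qj - pj * qi + pk * qr], axis=-1)

def qconv2d_left(Q, W):
    """Left-sided quaternion convolution (Eq. 20) with 'valid' padding.
    Q: (N, M, 4) quaternion image; W: (L, L, 4) quaternion kernel."""
    N, M, _ = Q.shape
    L = W.shape[0]
    F = np.zeros((N - L + 1, M - L + 1, 4))
    for x in range(N - L + 1):
        for y in range(M - L + 1):
            window = Q[x:x + L, y:y + L]                   # (L, L, 4)
            F[x, y] = hprod(W, window).reshape(-1, 4).sum(axis=0)
    return F
```

A library implementation would vectorize these loops; the point here is only the structure of the sum of Hamilton products.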
The first works are those of Altamirano [46], who, building on the work on Quaternion-Valued Multilayer Perceptrons (QMLPs) by Arena _et al._ [47, 48, 18], defined the main components of a QCNN: quaternion convolution layers, quaternion pooling layers that use the magnitude of the quaternion, the quaternion split-ReLU activation function, quaternion fully connected layers, and the quaternion back-propagation training method. As a proof of concept, these components were applied to a simple pattern classification problem. In an independent manner, Gaudet and Maida [19] arrived at a similar definition of convolution layers, but their work is based on Trabelsi's research on Complex-Valued CNNs [49]; in addition, they proposed quaternion batch-normalization and weight initialization algorithms. Their models were tested by classifying images of the CIFAR-10 and CIFAR-100 databases [50], as well as the KITTI Road Estimation Benchmark [51]. At the same time, and also following the work of Trabelsi [49], T. Parcollet implemented quaternion-valued CNNs [22] and LSTM models [52]; these were applied to a speech recognition task using the TIMIT dataset [53]. In addition, they proposed an encoder-decoder model that converts grayscale images from the KODAK PhotoCD dataset [54] to RGB images. Thereafter, Yin _et al._ [55] proposed a quaternion attention mechanism and evaluated it on the image classification task using the CIFAR-10 dataset [50]. Moreover, they proposed a model for detecting double JPEG compression and tested it on the Uncompressed Color Image Database (UCID) dataset [56]. Their models implement a pooling method that uses the magnitude of the quaternion, quaternion split activation functions, and an alternative batch normalization method, which reduces the computational cost of other methods by using a single variance value instead of a multichannel covariance matrix. Alternatively, we have the _geometric approach_, which was constructed based on the previous work of Matsui _et al._ [57] and Isokawa _et al._ [58]. In this case, the quaternion product applies affine transformations over the input features; consequently, the quaternion convolution inherits this property. Thus, Zhu _et al._ [24] define quaternion convolution based on a geometric transformation that applies fixed-axis rotation and scaling (2DoF); moreover, they apply the same concept for constructing quaternion fully connected layers. Their model was tested on the image classification problem using the CIFAR-10, CIFAR-100 [50], and Oxford flowers [59] datasets. In addition, they tested their models for the noise elimination task using the Oxford flowers dataset [59] and a subset of the Microsoft COCO dataset [60]. Under the same approach, Hongo _et al._ [20] define a quaternion convolution layer that applies affine transformations; in addition, they applied quaternion pooling layers using the magnitude of the quaternion, the quaternion split-ReLU activation function, and the batch-normalization method of [55]. From these concepts, they implemented QCNNs and residual QCNNs for the image classification problem on the CIFAR-10 dataset [50]. In a recent work, Matsumoto _et al._ [61] note the limited expressive ability of working with fixed axes, and propose a model that learns general rotation axes (4DoF). They applied this model to a pixel classification task with PolSAR images obtained from the Japan Aerospace Exploration Agency (JAXA). Finally, the third approach is based on the concept of rotation _equivariance_, i.e.
if an input feature is rotated by a specific angle, then the output feature produced by the convolution layer is equivalent to taking the non-rotated input feature, applying the convolution layer, and then applying the rotation to the output feature. In order to satisfy the rotation-equivariance property, Shen _et al._ [23] define a convolution layer involving products between quaternion inputs and real-valued kernels. In addition, they define a rotation-equivariant ReLU layer and a batch normalization algorithm. Their models were applied to 3D point cloud classification on the ModelNet40 [62] and 3D MNIST [63] datasets. They show that these types of networks are robust to rotated input features, and that the feature maps produced by inner layers are invariant to permutations of the input data points. Another work lying in this category is the one by Jing _et al._ [64], who use the middle element of a convolutional window as a pivot and define a convolution-like operation that produces rotation-equivariant features. By taking the magnitude of the quaternion output, the convolutional blocks can be used to construct rotation-invariant classifiers. Most work on quaternion-valued CNNs is based on these works or lies in one of the preceding categories; in the following section, we will give the formal definition of each approach and explain the key atomic components used to construct QCNNs.

## IV QCNN components

In this section, we present the main building blocks for implementing quaternion-valued convolutional deep learning architectures. Future directions for each component are specified at the end of each subsection.

### _Quaternion convolution layers_

Let us assume a dataset where each sample has dimension \(N\times M\times 4\), i.e. an input can be represented as a 4-channel matrix of real numbers. Then, each sample, \(\mathbf{Q}\), is represented as an \(N\times M\) matrix where each element is a quaternion: \[\mathbf{Q}=[\mathbf{q}(x,y)]\in\mathbb{H}^{N\times M} \tag{23}\] Then, \(\mathbf{Q}\) can be decomposed into its real and imaginary components: \[\mathbf{Q}=Q_{R}+Q_{I}\hat{i}+Q_{J}\hat{j}+Q_{K}\hat{k} \tag{24}\] where \(Q_{R},Q_{I},Q_{J},Q_{K}\in\mathbb{R}^{N\times M}\), and \(\hat{i},\hat{j},\hat{k}\) represent the complex basis of the quaternion algebra. In the same way, a convolution kernel of size \(L\times L\) is represented by a quaternion matrix, as follows: \[\mathbf{W}=[\mathbf{w}(x,y)]\in\mathbb{H}^{L\times L} \tag{25}\] which can be decomposed as: \[\mathbf{W}=W_{R}+W_{I}\hat{i}+W_{J}\hat{j}+W_{K}\hat{k}, \tag{26}\] where \(W_{R},W_{I},W_{J},W_{K}\in\mathbb{R}^{L\times L}\), and \(\hat{i},\hat{j},\hat{k}\) represent the basis of the quaternion algebra. Then, in the _classic approach_, Altamirano [46] and Gaudet and Maida [19] define the convolution layer using left-sided convolution: \[\mathbf{F}=\mathbf{W}*\mathbf{Q}. \tag{27}\] Thus, \(\mathbf{F}\in\mathbb{H}^{(N-L+1)\times(M-L+1)}\) represents the output of the layer, i.e. a quaternion feature map, and each element of the tensor is computed as follows: \[\mathbf{f}(x,y)=(\mathbf{w}*\mathbf{q})(x,y). \tag{28}\] This approach does not make any particular assumption about the quaternions \(\mathbf{w}\) and \(\mathbf{q}\). Thus, the convolution represents the integral transformation of a quaternion function on a quaternion input signal.
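In practice, Equation (27) is usually not evaluated quaternion by quaternion. Because the Hamilton product is bilinear in the components, the quaternion feature map can be assembled from real-valued 2D convolutions of the component matrices of Equations (24) and (26), which is how split implementations such as [19, 22] are commonly described. A sketch under that observation (the exact code is ours; SciPy's `correlate2d` stands in for a framework convolution):

```python
import numpy as np
from scipy.signal import correlate2d

def qconv_classic(Q, W):
    """Classic quaternion convolution (Eq. 27) assembled from real 2D correlations.
    Q = (QR, QI, QJ, QK): real (N, M) arrays; W = (WR, WI, WJ, WK): real (L, L) arrays."""
    QR, QI, QJ, QK = Q
    WR, WI, WJ, WK = W
    c = lambda ker, img: correlate2d(img, ker, mode='valid')  # slide kernel over input
    # The four output components follow the sign pattern of the Hamilton product (Eq. 12):
    FR = c(WR, QR) - c(WI, QI) - c(WJ, QJ) - c(WK, QK)
    FI = c(WR, QI) + c(WI, QR) + c(WJ, QK) - c(WK, QJ)
    FJ = c(WR, QJ) - c(WI, QK) + c(WJ, QR) + c(WK, QI)
    FK = c(WR, QK) + c(WI, QJ) - c(WJ, QI) + c(WK, QR)
    return FR, FI, FJ, FK
```

The appeal of this formulation is that it reuses highly optimized real-valued convolution kernels while still sharing each real weight across the four output components.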
In contrast, in the _geometric approach_, Zhu _et al._ [24] apply the two-sided convolution definition: \[\mathbf{F}=\mathbf{W}*\mathbf{Q}*\bar{\mathbf{W}} \tag{29}\] where \(\mathbf{F}\in\mathbb{H}^{(N-L+1)\times(M-L+1)}\), and each element of the output is computed as follows: \[\mathbf{f}(x,y)=\sum_{r=-\frac{L}{2}}^{\frac{L}{2}}\sum_{s=-\frac{L}{2}}^{\frac {L}{2}}\frac{\mathbf{w}(r,s)\mathbf{q}(x-r,y-s)\bar{\mathbf{w}}(r,s)}{a_{r,s}}. \tag{30}\] In addition, each quaternion \(\mathbf{w}\) is represented in its polar form: \[\mathbf{w}(r,s)=a_{r,s}\left(\cos\frac{\theta_{r,s}}{2}+\sin\frac{\theta_{r,s }}{2}\bar{u}\right) \tag{31}\] where \(\theta_{r,s}\in[-\pi,\pi]\), \(a_{r,s}\in\mathbb{R}\), and \(\bar{u}\) represents the unitary rotation axis. Thus, quaternion convolution applies rotation and scaling transformations on the quaternion \(\mathbf{q}\). Similarly, Hongo _et al._ [20] use the two-sided quaternion convolution definition, but add a bias term for each component of the quaternion: \[\mathbf{F}=\mathbf{W}*\mathbf{Q}*\bar{\mathbf{W}}+\mathbf{B}. \tag{32}\] where \(\mathbf{B}\in\mathbb{H}\). In both cases, quaternion convolution applies geometric transformations over the input data. Finally, in the _equivariant approach_, Shen _et al._ [23] propose a simplified version: they convolve the quaternion input, \(\mathbf{Q}\in\mathbb{H}\), with a kernel of real numbers, \(\mathbf{W}\in\mathbb{R}\): \[\mathbf{F} = \mathbf{W}*\mathbf{Q} \tag{33}\] \[= \mathbf{W}*\mathbf{Q_{R}}+\mathbf{W}*\mathbf{Q_{I}}\hat{i}+ \mathbf{W}*\mathbf{Q_{J}}\hat{j}\] \[+\mathbf{W}*\mathbf{Q_{K}}\hat{k}.\] This version allows extracting equivariant features, i.e. if an input sample, \(\mathbf{Q}\), produces a feature map, \(g(\mathbf{Q})\), then the rotated input sample, \(R(\mathbf{Q})\), will produce the rotated feature map, \(R(g(\mathbf{Q}))\); thus: \[g(R(\mathbf{Q}))=R(g(\mathbf{Q})). \tag{34}\] Independently of the approach we follow, a related problem when implementing QCNNs is how to deal with multidimensional inputs. In real-valued CNNs, the common way of dealing with them is to define 2D convolution kernels with the same number of channels as the input data and to apply 2D convolution separately for each channel; the resulting 2D outputs are then summed over all channels to produce a single-channel feature map. A different approach is to apply N-dimensional convolution; in this case, the multidimensional kernel is convolved over all the channels of the input data. For quaternion-valued CNNs, current implementations use variations of the first approach; i.e. the input data is divided into 4-channel sub-inputs, and quaternion convolution is then computed for each sub-input. Before explaining the details, some notation is introduced. Let \(\mathbf{X}\in\mathbb{R}^{N\times M\times C}\) be an input, where \(N\) is the number of rows, \(M\) the number of columns, and \(C\) the number of channels, with \(C\%4=0\); then \(\mathbf{X}\) is partitioned as follows: \[\mathbf{X}=[\mathbf{Q_{0}},\mathbf{Q_{1}},\dots,\mathbf{Q_{(C/4)-1}}] \tag{35}\] where each \(\mathbf{Q_{s}}\in\mathbb{H}^{N\times M}\), \(0\le s\le(C/4)-1\), is a _quaternion channel_. Let \(\mathbf{V}\in\mathbb{R}^{L\times L\times K}\), with \(K\%4=0\), be the convolution kernel; then: \[\mathbf{V}=[\mathbf{W_{0}},\mathbf{W_{1}},\dots,\mathbf{W_{(K/4)-1}}] \tag{36}\] where each \(\mathbf{W_{s}}\in\mathbb{H}^{L\times L}\), \(0\le s\le(K/4)-1\), is a quaternion channel.
Thus, there are three different ways of dealing with multi-dimensional quaternion inputs, see Figure 4: 1. Autoencoder convolution: Kernel and input have the same number of channels (\(K=C\)). In this case, each quaternion input channel is assigned to one quaternion kernel channel, and the convolution between them is computed using Equation (27), (29), (32) or (33). Thus, if we use left-sided convolution, each individual output, \(\mathbf{F_{s}}\in\mathbb{H}^{(N-L+1)\times(M-L+1)}\), is computed as follows: \[\mathbf{F_{s}}=\mathbf{W_{s}}*\mathbf{Q_{s}},\] (37) where \(0\le s\le(C/4)-1\), and the final quaternion feature map is obtained by concatenating all outputs: \[\mathbf{F}=[\mathbf{F_{0}},\mathbf{F_{1}},\dots,\mathbf{F_{(C/4)-1}}]\] (38) This method produces an output with the same number of channels as the input data, and could be used in Convolutional Auto-Encoders (CAE). 2. Pyramidal convolution: Kernel and input have different numbers of channels (\(K\neq C\)), but both are multiples of \(4\). In [46], it is proposed to compute feature maps using a pyramidal approach: each kernel channel \(\mathbf{W_{t}}\), where \(0\le t\le(K/4)-1\), is convolved with each sub-input \(\mathbf{Q_{s}}\), where \(0\le s\le(C/4)-1\); hence, each kernel channel produces \(C/4\) quaternion outputs. Since each quaternion input channel is convolved with each quaternion kernel channel, see Figure 4, a convolution kernel \(\mathbf{V}\in\mathbb{R}^{L\times L\times K}\) will produce \(CK/16\) quaternion outputs: \[\mathbf{F}=[\mathbf{F_{0}},\mathbf{F_{1}},\dots,\mathbf{F_{(CK/16)-1}}].\] (39) Thus, if left-sided convolution is applied, each quaternion output channel is computed as follows: \[\mathbf{F_{t(C/4)+s}}=\mathbf{W_{t}}*\mathbf{Q_{s}}.\] (40) Even though [46] used left-sided convolution, this approach is valid with any of the other convolution definitions. The intuition behind this approach is to detect a quaternionic pattern in any sub-input; however, its application is impractical because of the exponential growth of the number of channels in deep architectures. Calculating summed outputs can alleviate the computational cost. 3. Summed convolution: Similar to the former method, but in this approach each quaternion input channel is convolved with a different quaternion kernel channel (see the sketch after this list): \[\mathbf{F_{s}}=\mathbf{W_{s}}*\mathbf{Q_{s}},\] (41) where \(0\le s\le(C/4)-1\). Then, the final quaternion feature map, \(\mathbf{F}\in\mathbb{H}^{(N-L+1)\times(M-L+1)}\), is obtained by summing all outputs: \[\mathbf{F}=\sum_{s=0}^{(C/4)-1}\mathbf{F_{s}}.\] (42) _Future directions:_ As was mentioned before, N-dimensional convolution has been applied on real-valued CNNs for processing multichannel inputs. In a similar way, the equations presented can be extended to 8-channel inputs using an octonion or hypercomplex algebra, see for example [65, 66, 67, 68]. For a larger number of inputs, a geometric algebra [69] representation can be applied. To the best of our knowledge, this type of architecture has not been published to date, but a first approach in this direction can be found in [70, 71]. In these types of deep learning architectures (quaternion, hypercomplex, or geometric), a major concern is the selection of the signature of the algebra, which will embed data into different geometric spaces, and the processing will take distinct meanings accordingly. Thus, a sensible selection of the dimension and signature should be made according to the nature of the problem and the meaning of the input data as well as the intermediate layers.
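The summed variant referenced in item 3 above is the one that most closely mirrors real-valued multichannel convolution. A minimal sketch of Equations (41)-(42), assuming a real array layout where consecutive groups of four channels form one quaternion channel (helper names are ours):

```python
import numpy as np

def hprod(p, q):
    """Hamilton product for arrays of shape (..., 4) in [r, i, j, k] order (Eq. 12)."""
    pr, pi, pj, pk = np.moveaxis(p, -1, 0)
    qr, qi, qj, qk = np.moveaxis(q, -1, 0)
    return np.stack([pr * qr - pi * qi - pj * qj - pk * qk,
                     pr * qi + pi * qr + pj * qk - pk * qj,
                     pr * qj - pi * qk + pj * qr + pk * qi,
                     pr * qk + pi * qj - pj * qi + pk * qr], axis=-1)

def summed_qconv(X, V):
    """Summed quaternion convolution (Eqs. 41-42).
    X: (N, M, C) real input with C % 4 == 0, viewed as C/4 quaternion channels.
    V: (L, L, C) real kernel partitioned the same way (Eqs. 35-36)."""
    N, M, C = X.shape
    L = V.shape[0]
    Q = X.reshape(N, M, C // 4, 4)   # quaternion channels Q_s
    W = V.reshape(L, L, C // 4, 4)   # quaternion kernel channels W_s
    F = np.zeros((N - L + 1, M - L + 1, 4))
    for x in range(N - L + 1):
        for y in range(M - L + 1):
            window = Q[x:x + L, y:y + L]                     # (L, L, C/4, 4)
            # Per-channel Hamilton products, summed over window and channels:
            F[x, y] = hprod(W, window).reshape(-1, 4).sum(axis=0)
    return F
```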
### _Quaternion Fully Connected Layers_

Let \(\mathbf{Q}\) be an \(N_{1}\times N_{2}\times N_{3}\) tensor representing the input to a fully connected layer; then each element of \(\mathbf{Q}\) is a quaternion: \[\mathbf{Q}=[\mathbf{q}(x,y,z)]\in\mathbb{H}^{N_{1}\times N_{2}\times N_{3}}, \tag{43}\] where \(N_{1},N_{2},N_{3}\) are the height, width, and number of channels of the input. Now, for the fully connected layer, a quaternion kernel, \(\mathbf{W}\), of size \(N_{1}\times N_{2}\times N_{3}\) is defined, where each element is a quaternion: \[\mathbf{W}=[\mathbf{w}(x,y,z)]\in\mathbb{H}^{N_{1}\times N_{2}\times N_{3}}. \tag{44}\] Note that elements of the input and weight tensors are denoted as \(q(x,y,z)\) and \(w(x,y,z)\), respectively, and the output of the layer will be a quaternion, \(\mathbf{f}\in\mathbb{H}\). Thus, for classic fully connected layers, the output, \(\mathbf{f}\), is computed as follows [46]: \[\mathbf{f}=\sum_{r,s,t}^{N_{1},N_{2},N_{3}}\left[\mathbf{w}(r,s,t)\mathbf{q}( r,s,t)\right]. \tag{45}\] Similarly, for geometric fully connected layers, the output is computed as follows [24]: \[\mathbf{f}=\sum_{r,s,t}^{N_{1},N_{2},N_{3}}\left[\frac{1}{\|\mathbf{w}(r,s,t) \|}\mathbf{w}(r,s,t)\mathbf{q}(r,s,t)\bar{\mathbf{w}}(r,s,t)\right]. \tag{46}\] The difference between quaternion-valued and real-valued fully connected layers lies in the application of the Hamilton product, which captures interchannel relationships; in the former case, the output is a quaternion. Moreover, it should be noticed that for real-valued networks, fully connected layers are equivalent to inner product layers, but for quaternion-valued networks, the outputs of quaternion fully connected layers and quaternion inner product layers differ. _Future directions:_ Current implementations of fully quaternion layers follow a classic or geometric approach; the equivariant approach should be implemented and tested.

### _Quaternion Pooling_

Most of the current methods for applying quaternion pooling rely on channel-wise pooling, in a similar way as in real-valued CNNs. For example, [19, 61] use channel-wise global average pooling layers, and [72, 73, 24] apply channel-wise average as well as channel-wise max pooling layers, while [20, 74, 22] apply just channel-wise max pooling layers, defined as follows: \[splitPool(\mathbf{Q}) = \underset{(x,y)}{\max}(q_{R}(x,y))+\underset{(x,y)}{\max}(q_{I}(x, y))\hat{i} \tag{47}\] \[+\underset{(x,y)}{\max}(q_{J}(x,y))\hat{j}\] \[+\underset{(x,y)}{\max}(q_{K}(x,y))\hat{k},\] where \(\mathbf{Q}\) is a quaternion submatrix. In contrast, [23, 55, 46] use a max-pooling approach, but instead of using the channel-wise maximum value, they select the quaternion with maximum amplitude within a region: \[FullyPool(\mathbf{Q}) = \mathbf{q}(\bar{x},\bar{y})\ s.t.\ (\bar{x},\bar{y}) \tag{48}\] \[= \underset{(x,y)}{\arg\max}(\|\mathbf{q}(x,y)\|).\] In addition, since this method can yield multiple maximum-amplitude quaternions, [55] applies the angle cosine theorem to discriminate between them. _Future directions:_ Future works should introduce novel fully quaternion pooling methods, emphasizing the use of interchannel relationships, for example, using polar representations, introducing quaternion measures [75], or taking inspiration from information theory.
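As a concrete reference for the two pooling styles above, here is a sketch of both Equation (47) and Equation (48) for non-overlapping windows (function names are ours):

```python
import numpy as np

def split_max_pool(Q, k=2):
    """Channel-wise max pooling (Eq. 47): an independent max per component.
    Q: (N, M, 4) quaternion feature map; k: window size."""
    N, M, _ = Q.shape
    Qv = Q[:N - N % k, :M - M % k].reshape(N // k, k, M // k, k, 4)
    return Qv.max(axis=(1, 3))

def fully_max_pool(Q, k=2):
    """Amplitude pooling (Eq. 48): keep the quaternion of largest norm per window,
    so the four components of the winner stay together."""
    N, M, _ = Q.shape
    Qv = Q[:N - N % k, :M - M % k].reshape(N // k, k, M // k, k, 4)
    Qv = Qv.transpose(0, 2, 1, 3, 4).reshape(N // k, M // k, k * k, 4)
    idx = np.linalg.norm(Qv, axis=-1).argmax(axis=-1)   # winner index per window
    rows, cols = np.indices(idx.shape)
    return Qv[rows, cols, idx]                          # (N//k, M//k, 4)
```

Note how the split variant can mix components from different spatial positions, while the amplitude variant preserves each selected quaternion intact; this is precisely the interchannel argument made in the text.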
### _Quaternion batch normalization_

Internal covariate shift [76] is a statistical phenomenon that occurs during training: the change of the network parameters causes changes in the statistical distribution of the inputs to hidden layers. Whitening procedures [77, 78], i.e. applying linear transformations to obtain uncorrelated inputs with zero mean and unit variance, alleviate this phenomenon. Since whitening the input layers is computationally expensive, because it requires computing covariance matrices, their inverse square roots, and the derivatives of these transformations, Ioffe and Szegedy [76] introduced the batch normalization algorithm, which normalizes each dimension independently. It uses mini-batches to estimate the mean and variance of each channel, and transforms the channel to have zero mean and unit variance using the following equation: \[\tilde{x}=\frac{x-E(x)}{\sqrt{\sigma^{2}+\epsilon}} \tag{49}\] where \(\epsilon\) is a constant added for numerical stability. In order to maintain the representational ability of the network, an affine transformation with two learnable parameters, \(\gamma\) and \(\beta\), is applied: \[BN(\tilde{x})=\gamma\tilde{x}+\beta \tag{50}\]

Fig. 4: Visualization of different approaches for computing quaternion convolution on a multichannel input. This example shows a 2D input with 2 quaternion channels; the output of an autoencoder convolution layer is shaded in pink, the output of the pyramidal convolution layer is framed in cyan, and the summed convolution layer is shaded in orange.

Even though this method does not produce uncorrelated inputs, it improves convergence time by enabling higher learning rates, and has allowed the training of deeper neural network models. On the other hand, uncorrelated inputs reduce overfitting and improve generalization [79]. Since channel-wise normalization does not assure equal variance in the real and imaginary components, Gaudet and Maida [19] proposed a quaternion batch-normalization algorithm using the whitening approach [80], treating each component of the quaternion as an element of a four-dimensional vector. We call this approach _Whitening Quaternion Batch-Normalization (WQBN)_. Let \(\mathbf{x}\) be a quaternion input variable, \(\mathbf{x}=[x_{R},x_{I},x_{J},x_{K}]^{T}\), with \(E(\mathbf{x})\) its expected value and \(V(\mathbf{x})\) its quaternion covariance matrix, both computed over a mini-batch; then: \[V(\mathbf{x})=\begin{bmatrix}v_{rr}&v_{ri}&v_{rj}&v_{rk}\\ v_{ir}&v_{ii}&v_{ij}&v_{ik}\\ v_{jr}&v_{ji}&v_{jj}&v_{jk}\\ v_{kr}&v_{ki}&v_{kj}&v_{kk}\end{bmatrix}, \tag{51}\] where subscripts represent the covariance between real or imaginary components of \(\mathbf{x}\), e.g. \(v_{ij}=cov(x_{I},x_{J})\). Thus, the Cholesky decomposition of \(V^{-1}\) is computed, and one of the resulting matrices, \(W\), is selected. Thereafter, the whitened quaternion variable, \(\tilde{\mathbf{x}}\), is calculated with the following matrix multiplication [19]: \[\tilde{\mathbf{x}}=W(\mathbf{x}-E(\mathbf{x})), \tag{52}\] and finally: \[WQBN(\tilde{\mathbf{x}})=\mathbf{\Gamma}\tilde{\mathbf{x}}+\beta \tag{53}\] where \(\beta\in\mathbb{H}\) is a trainable parameter, and: \[\mathbf{\Gamma}=\begin{bmatrix}\Gamma_{rr}&\Gamma_{ri}&\Gamma_{rj}&\Gamma_{rk} \\ \Gamma_{ri}&\Gamma_{ii}&\Gamma_{ij}&\Gamma_{ik}\\ \Gamma_{rj}&\Gamma_{ij}&\Gamma_{jj}&\Gamma_{jk}\\ \Gamma_{rk}&\Gamma_{ik}&\Gamma_{jk}&\Gamma_{kk}\end{bmatrix}, \tag{54}\] is a symmetric matrix with trainable parameters.
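A minimal sketch of the WQBN computation of Equations (51)-(53) follows; it is our own rendering under the assumption that the mini-batch is stored as rows of component vectors, and `gamma`/`beta` stand in for the trainable parameters:

```python
import numpy as np

def wqbn(x, gamma=np.eye(4), beta=np.zeros(4), eps=1e-5):
    """Whitening quaternion batch normalization (Eqs. 51-53), a minimal sketch.
    x: (B, 4) mini-batch of quaternions as rows [r, i, j, k]."""
    xc = x - x.mean(axis=0)                           # center each component
    V = (xc.T @ xc) / x.shape[0] + eps * np.eye(4)    # 4x4 covariance matrix (Eq. 51)
    L = np.linalg.cholesky(np.linalg.inv(V))          # V^{-1} = L L^T, L lower-triangular
    x_tilde = xc @ L                                  # whitening with W = L^T (Eq. 52)
    return x_tilde @ gamma.T + beta                   # affine transform (Eq. 53)
```

One can verify that `np.cov(x_tilde.T)` is close to the identity, i.e. the four components come out decorrelated with unit variance, which channel-wise normalization alone does not guarantee.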
In contrast, Yin _et al._ [55] applied the quaternion variance definition proposed by [81]: \[V(\mathbf{x})=\frac{1}{T}\sum_{i=1}^{T}\mathbf{v}\bar{\mathbf{v}}, \tag{55}\] where \(\mathbf{v}=\mathbf{x}-E(\mathbf{x})\). Note that in this case, the variance is a single real value. We call this approach _Variance Quaternion Batch Normalization (VQBN)_. Thus, the batch normalization is computed as follows: \[VQBN(\mathbf{x})=\gamma\frac{\mathbf{x}-E(\mathbf{x})}{\sqrt{V(\mathbf{x})^{2 }+\epsilon}}+\beta, \tag{56}\] where \(\gamma\in\mathbb{R}\) and \(\beta\in\mathbb{H}\) are trainable parameters, and \(\epsilon\in\mathbb{R}\) is a stability constant. Recently, Grassucci _et al._ [82] noted that for proper quaternion random variables [83, 84], the covariance matrix in Equation (51) becomes the diagonal matrix: \[V(\mathbf{x})=4\sigma^{2}I, \tag{57}\] and the batch-normalization procedure is simplified to applying Equations (55) and (56) with \(V(\mathbf{x})=2\sigma\). A third approach is the one proposed by [23], who define the batch-normalization operation to be rotation-equivariant; thus: \[RQBN(\mathbf{x})=\frac{\mathbf{x}}{\sqrt{E(\|\mathbf{x}\|^{2})+\epsilon}}. \tag{58}\] Generative Adversarial Networks apply another type of normalization, called Spectral Normalization [85]. In these networks, having a Lipschitz-bounded discriminative function is crucial to mitigate the gradient explosion problem [86, 87, 88]; thus, based on their real-valued counterpart, Grassucci _et al._ [82] proposed a Quaternion Spectral Normalization algorithm that constrains the spectral norm of each layer. To explain this procedure, we introduce some definitions. Let \(f\) be a generic function; it is K-Lipschitz continuous if, for any two points, \(x_{1},x_{2}\), it satisfies: \[\frac{\|f(x_{1})-f(x_{2})\|}{|x_{1}-x_{2}|}\leq K \tag{59}\] Let \(\sigma(\cdot)\) be the spectral norm of a matrix, i.e. its largest singular value; the Lipschitz norm of a function \(f\), denoted by \(\|f\|_{Lip}\), is defined as follows: \[\|f\|_{Lip}=\sup_{x}\sigma(\nabla f(x)). \tag{60}\] Thus, for a generic linear layer, \(f(x)=Wx+b\), whose gradient is \(W\), its Lipschitz norm is: \[\|f\|_{Lip}=\sigma(W). \tag{61}\] Now, let \(\mathbf{W}\) be a quaternion matrix; its Lipschitz norm, \(\sigma(\mathbf{W})\), is computed by estimating the largest singular value of \(\mathbf{W}\) via the power iteration method [82]. Then, Quaternion Spectral Normalization is applied, in a split way, using the following equations: \[\bar{W}_{R} = \frac{W_{R}}{\sigma(W)},\bar{W}_{I}=\frac{W_{I}}{\sigma(W)},\bar{ W}_{J}=\frac{W_{J}}{\sigma(W)},\text{ and}\] \[\bar{W}_{K} = \frac{W_{K}}{\sigma(W)}. \tag{62}\] To apply the Lipschitz bound to the whole network, the following relationship is used: \[\|f_{1}\circ f_{2}\|_{Lip}\leq\|f_{1}\|_{Lip}\cdot\|f_{2}\|_{Lip}, \tag{63}\] provided the Lipschitz norm of each layer is constrained to \(1\) [82]. _Future directions:_ The WQBN algorithm produces uncorrelated, zero-mean, and unit-variance inputs, but is computationally expensive. In addition, Kessy _et al._ [80] state that there are infinitely many possible matrices satisfying \(V^{-1}=\bar{W}W\). In contrast, the VQBN algorithm produces zero-mean and unit-variance inputs (according to the Wang _et al._ definition [81], which averages the variance of the four channels), but since a single-value variance is used, scaling is isotropic, and the input channels remain correlated. However, this approach greatly reduces the computational time by avoiding the decomposition of the covariance matrix.
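That cost difference is visible in code: compared with the whitening sketch above, the VQBN update of Equations (55)-(56) needs only a scalar variance and no matrix decomposition. A minimal sketch, with our own variable names and following Equation (56) exactly as written:

```python
import numpy as np

def vqbn(x, gamma=1.0, beta=np.zeros(4), eps=1e-5):
    """Variance quaternion batch normalization (Eqs. 55-56), a minimal sketch.
    x: (B, 4) mini-batch of quaternions. The variance of Eq. (55) is the single
    real value E[v v_bar] = E[||v||^2], so no 4x4 covariance matrix is needed."""
    mu = x.mean(axis=0)
    v = ((x - mu) ** 2).sum(axis=-1).mean()                # scalar variance (Eq. 55)
    return gamma * (x - mu) / np.sqrt(v ** 2 + eps) + beta  # Eq. (56), as written
```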
Further theoretical and experimental analysis is required to grasp its advantages versus independent-channel batch normalization.

### _Activation functions_

Biological neurons produce an output signal if a set of input stimuli surpasses a threshold value within a lapse of time; for artificial neurons, the role of the activation function is to simulate this triggering action. Mathematically, this behavior is modeled by a mapping: in the case of real-valued neurons, the domain and image are the field of real numbers, while complex and quaternion neurons map complex or quaternion inputs to complex or quaternion outputs, respectively. Since back-propagation has become the standard method for training artificial neural networks, the activation function is required to be analytic [89], i.e. its derivative must exist at every point. In the complex domain, by Liouville's theorem, a bounded function that is analytic at every point is a constant function; reciprocally, non-constant complex functions have unbounded images [90]. Accordingly, some common real-valued functions, like the hyperbolic tangent and sigmoid, will diverge to infinity when they are extended to the complex domain [89]; this makes them unsuitable for representing the behavior of a biological neuron. A similar problem arises in the quaternion domain: "the only quaternion function regular with bounded norm in \(E^{4}\) is a constant" [91], where \(E^{4}\) stands for a 4-dimensional Euclidean space. The relationship between regular functions and the existence of their quaternionic derivatives is stated by the Cauchy-Riemann-Fueter equation [92], leading to the result that the only globally analytic quaternion functions are some linear and constant functions. Moreover, non-linear activation functions are required for constructing a neural network architecture that works as a universal interpolator of a continuous quaternion-valued function [18]. In this manner, a typical approach to circumvent this problem has been to relax the constraints and use non-linear, non-analytic quaternion functions satisfying input and output properties, where the learning dynamics is built using partial derivatives on the quaternion domain. An example of this approach is split quaternion functions, defined as follows: \[f(\mathbf{q})=f_{R}(\mathbf{q})+f_{I}(\mathbf{q})\hat{i}+f_{J}(\mathbf{q}) \hat{j}+f_{K}(\mathbf{q})\hat{k}, \tag{64}\] where \(\mathbf{q}\in\mathbb{H}\), and \(f_{R}\), \(f_{I}\), \(f_{J}\), and \(f_{K}\) are mappings over the real numbers: \(f_{*}:\mathbb{R}\rightarrow\mathbb{R}\). Nowadays, split quaternion functions remain the only type of activation function that has been implemented on QCNNs. Even though any type of quaternion split activation function used in QMLPs can be applied, the split quaternion ReLU is currently the most common activation function applied on QCNNs. For the sake of completeness, we present the activation functions found in current works. Let \(\textbf{q}=q_{R}+q_{I}\hat{i}+q_{J}\hat{j}+q_{K}\hat{k}\in\mathbb{H}\); we have the following activation functions: 1. Split Quaternion Sigmoid. It was introduced in [93, 46], and is defined as follows: \[\mathbb{Q}S(\mathbf{q})=S(q_{R})+S(q_{I})\hat{i}+S(q_{J})\hat{j}+S(q_{K})\hat {k}, \tag{65}\] where \(S:\mathbb{R}\rightarrow\mathbb{R}\) is the real-valued sigmoid function: \[S(x)=\frac{1}{1+e^{-x}}. \tag{66}\] 2. Split Quaternion Hyperbolic Tangent.
It was introduced in [52, 24], and is defined as follows: \[\mathbb{Q}tanh(\mathbf{q}) = \tanh(q_{R})+\tanh(q_{I})\hat{i} \tag{67}\] \[+\tanh(q_{J})\hat{j}\] \[+\tanh(q_{K})\hat{k},\] where \(\tanh:\mathbb{R}\rightarrow\mathbb{R}\) is the real-valued hyperbolic tangent function: \[\tanh(x) = \frac{\sinh(x)}{\cosh(x)} \tag{68}\] \[= \frac{\exp(2x)-1}{\exp(2x)+1}.\] 3. Split Quaternion Hard Hyperbolic Tangent. It was introduced in [32], and is defined as follows: \[\mathbb{Q}H^{2}T(\mathbf{q}) = H^{2}T(q_{R})+H^{2}T(q_{I})\hat{i} \tag{69}\] \[+H^{2}T(q_{J})\hat{j}\] \[+H^{2}T(q_{K})\hat{k},\] where \(H^{2}T:\mathbb{R}\rightarrow\mathbb{R}\) [94]: \[H^{2}T(x)=\begin{cases}-1&\text{if }x<-1\\ x&\text{if }-1\leq x\leq 1\\ 1&\text{if }x>1.\end{cases} \tag{70}\] For this type of activation function, Collobert [94] proved that the hidden layers of an MLP work as local SVMs when the real-valued hard hyperbolic tangent is used. 4. Split Quaternion ReLU. It was introduced in [19, 20, 55], and is defined as follows: \[\mathbb{Q}ReLU(\mathbf{q}) = ReLU(q_{R})+ReLU(q_{I})\hat{i} \tag{71}\] \[+ReLU(q_{J})\hat{j}\] \[+ReLU(q_{K})\hat{k},\] where \(ReLU:\mathbb{R}\rightarrow\mathbb{R}\) is the real-valued ReLU function [95, 1]: \[ReLU(x)=max(0,x). \tag{72}\] 5. Split Quaternion Parametric ReLU. It was introduced in [22], and is defined as follows: \[\mathbb{Q}PReLU(\mathbf{q}) = PReLU(q_{R})+PReLU(q_{I})\hat{i} \tag{73}\] \[+PReLU(q_{J})\hat{j}\] \[+PReLU(q_{K})\hat{k},\] where \(PReLU:\mathbb{R}\rightarrow\mathbb{R}\) [96]: \[PReLU(x)=\begin{cases}x&\text{if }x>0\\ \alpha x&\text{if }x\leq 0\end{cases} \tag{74}\] and \(\alpha\) is a parameter learned during the training stage, which controls the slope of the negative side of the ReLU function. Equivalently, we have: \[PReLU(x)=max(0,x)+\alpha\,min(0,x). \tag{75}\] 6. Split Quaternion Leaky ReLU. It was introduced in [97] as a particular case of the \(\mathbb{Q}\)PReLU function; in this case, \(\alpha\) is a small constant value, e.g. \(0.01\) [98]. An advantage of using split activation functions is that by processing each channel separately, we can adopt existing frameworks without additional modifications of the source code; however, this separate processing does not adequately capture the cross-dynamics of the data channels. A different approach in the design of quaternion activation functions comes from quaternion analysis [92]. So far, it is clear that the key problem is designing suitable activation functions and computing their quaternion derivatives; thus, some mathematicians have been working on redefining quaternion calculus. For example, De Leo and Rotelli [99, 100], as well as Schwartz [101], have introduced the concept of a local quaternionic derivative. The trick was to extend "the concept of a derivative operator from those with constant (even quaternionic) coefficients, to one with variable coefficients depending upon the point of application... [thus] the derivative operator passes from a global form to local form" [100]. Using this novel approach, they define _local analyticity_, which, in contrast to global analyticity, does not reduce functions to a trivial class (constant or linear functions). Following this idea, Ujang _et al._ [102] define the concept of fully quaternion functions, and the properties they should fulfill to be locally analytic, non-linear, and suitable for gradient-based learning. These functions displayed better performance than split-quaternion functions when applied to designing adaptive filters. In the same train of thought, T.
Isokawa proposed a QMLP and a backpropagation algorithm [103, 104]. As opposed to split functions, fully quaternion functions capture interchannel relationships, making them suitable for quaternion-based learning; however, experimental comparison over a standard benchmark remains an open issue. To the best of our knowledge, currently, the only fully quaternion activation function that has been applied in QCNNs is the _Rotation-Equivariant ReLU_ function. It was proposed by Shen _et al._ [23] as part of a model that extracts rotation-equivariant features. Let \(\{\mathbf{q_{1}},\mathbf{q_{2}},\ldots,\mathbf{q_{N}}\}\) be a set of quaternions; then, for a quaternion \(\mathbf{q_{k}}\), with \(1\leq k\leq N\), the activation function is defined as follows: \[\mathbb{Q}REReLU(\mathbf{q_{k}})=\frac{\|\mathbf{q_{k}}\|}{max(\|\mathbf{q_{ k}}\|,c)}\mathbf{q_{k}}; \tag{76}\] where \(c\) is a positive constant, computed as follows: \[c=\frac{1}{N}\sum_{j=1}^{N}\|\mathbf{q_{j}}\|. \tag{77}\] In the same trend of developing novel quaternion calculus tools, but from a different perspective, Mandic _et al._ [105] start from the observation that for gradient-based optimization, a common objective is to minimize a positive real function of quaternion variables, e.g. \(J(\mathbf{e},\mathbf{\bar{e}})=\mathbf{e\bar{e}}\), and for that purpose the pseudo-gradient, i.e. the sum of component-wise gradients, is used. Formalization of these ideas is achieved by "establishing the duality between derivatives of quaternion valued functions in \(\mathbb{H}\) and the corresponding quadrivariate real functions in \(\mathbb{R}^{4}\)" [105], leading to what is called Hamiltonian-Real Calculus (HR). In addition, Mandic _et al._ [105] proved that for a real function of a quaternion vector variable, the maximum change is in the direction of the conjugate gradient, establishing a general framework for quaternion gradient-based optimization. Going further, Xu and Mandic [106] proposed the product and chain rules for computing derivatives, as well as the quaternion counterparts of the mean value and Taylor's theorems; this establishes an alternative framework called Generalized HR calculus (GHR). Thereafter, novel quaternion gradient algorithms using GHR calculus were proposed [107, 108]. Although [109] shows that a QMLP trained with a GHR-based algorithm obtains better prediction gains on the 4D Saito chaotic signal task than other quaternion-based learning algorithms [18, 110, 57], further experimental analysis on a standard benchmark and proper comparison with real and complex counterparts is required. To date, there is no published work on the use of GHR calculus on QCNNs. _Future directions:_ So far, we have identified the fundamental trends of thought for activation functions: split-quaternion functions, whose derivatives for training are computed in a channel-wise manner, and fully quaternion functions, whose derivatives are computed locally or using partial derivatives. Theoretical analysis, as well as some preliminary experimental results, has indicated better performance of fully quaternion activation functions over others, as well as better performance of quaternion training methods based on GHR calculus. However, these ideas have not been put into practice on QCNNs.
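The practical difference between the two families is easy to see side by side. A sketch of the split ReLU of Equation (71) and the rotation-equivariant ReLU of Equations (76)-(77), with our own function names:

```python
import numpy as np

def split_qrelu(q):
    """Split quaternion ReLU (Eq. 71): the real ReLU applied to each component.
    q: array of shape (..., 4). Components are treated independently."""
    return np.maximum(q, 0.0)

def re_relu(qs):
    """Rotation-equivariant ReLU (Eqs. 76-77) over a set of N quaternions.
    qs: (N, 4). Quaternions with norm below the mean norm c are shrunk toward
    zero; the rest pass through unchanged, so directions are preserved and the
    operation commutes with rotations of the whole set."""
    norms = np.linalg.norm(qs, axis=-1, keepdims=True)   # ||q_k||
    c = norms.mean()                                     # Eq. (77)
    return norms / np.maximum(norms, c) * qs             # Eq. (76)
```

The split variant can zero out a single component and thereby change the direction of the quaternion; the equivariant variant only rescales magnitudes, which is exactly why it preserves rotation information.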
Future works should focus on introducing novel fully quaternion activation functions that exploit specific properties of the quaternion representation, and on establishing proper benchmarks for comparing existing functions and training methods. With respect to quaternion calculus, the following subsection connects the preceding ideas with the training of QCNNs and its future work.

### _Training_

Currently, all of the methods for training that have been tested on QCNNs rely on adaptations of the QMLP back-propagation algorithm [48], or an extension of the generalized complex chain rule for real-valued loss functions [49]. Both approaches are equivalent and relax the analyticity condition by using partial derivatives with respect to the real and imaginary parts. Alternative approaches, such as GHR calculus or local derivatives, have not been tested in current implementations of QCNNs. Thus, Gaudet and Maida [19] introduced the Generalized Quaternion Chain Rule for a real-valued function: let \(L\) be a real-valued loss function and \(\mathbf{q}=q_{R}+q_{I}\hat{i}+q_{J}\hat{j}+q_{K}\hat{k}\), where \(q_{R},q_{I},q_{J},q_{K}\in\mathbb{R}\); then [19, 92]: \[\frac{\partial L}{\partial\mathbf{q}}=\frac{\partial L}{\partial q_{R}}+\frac {\partial L}{\partial q_{I}}\hat{i}+\frac{\partial L}{\partial q_{J}}\hat{j}+ \frac{\partial L}{\partial q_{K}}\hat{k}. \tag{78}\] Now, assume \(\mathbf{q}\) can be expressed in terms of a second quaternion variable, \(\mathbf{g}=g_{R}+g_{I}\hat{i}+g_{J}\hat{j}+g_{K}\hat{k}\), where \(g_{R},g_{I},g_{J},g_{K}\in\mathbb{R}\). Then, the chain rule is calculated as follows [19]: \[\frac{\partial L}{\partial\mathbf{g}} = \frac{\partial L}{\partial q_{R}}\left(\frac{\partial q_{R}}{\partial g _{R}}+\frac{\partial q_{R}}{\partial g_{I}}\hat{i}+\frac{\partial q_{R}}{ \partial g_{J}}\hat{j}+\frac{\partial q_{R}}{\partial g_{K}}\hat{k}\right)+ \tag{79}\] \[\frac{\partial L}{\partial q_{I}}\left(\frac{\partial q_{I}}{ \partial g_{R}}+\frac{\partial q_{I}}{\partial g_{I}}\hat{i}+\frac{\partial q_ {I}}{\partial g_{J}}\hat{j}+\frac{\partial q_{I}}{\partial g_{K}}\hat{k}\right)+\] \[\frac{\partial L}{\partial q_{J}}\left(\frac{\partial q_{J}}{ \partial g_{R}}+\frac{\partial q_{J}}{\partial g_{I}}\hat{i}+\frac{\partial q _{J}}{\partial g_{J}}\hat{j}+\frac{\partial q_{J}}{\partial g_{K}}\hat{k}\right)+\] \[\frac{\partial L}{\partial q_{K}}\left(\frac{\partial q_{K}}{ \partial g_{R}}+\frac{\partial q_{K}}{\partial g_{I}}\hat{i}+\frac{\partial q _{K}}{\partial g_{J}}\hat{j}+\frac{\partial q_{K}}{\partial g_{K}}\hat{k}\right)\] These equations are applied in the implementation of the backpropagation algorithm. Since this is the most used approach, we call it the _Standard Quaternion Backpropagation Algorithm_ [46, 93, 111], summarized as follows: let \(\mathbf{x}\in\mathbb{H}\) be the input to a layer \(C\), let \(\mathbf{w_{nm}}\in\mathbb{H}\) represent the weight connecting input \(n\) to output \(m\), let \(\mathbf{d}^{top},\mathbf{d}^{bottom}\in\mathbb{H}\) be the errors propagated from the top layer and to the bottom layer, respectively, and let \(\epsilon\in\mathbb{R}\) be the learning rate; then, for the current convolution layer, \(C\): 1. Update its weights using the following equation: \[\mathbf{w_{nm}}=\mathbf{w_{nm}}+\epsilon\mathbf{d_{m}^{top}}\mathbf{\bar{x}_ {n}}\] (80) 2. Update the bias term: \[\mathbf{b_{m}}=\mathbf{b_{m}}+\epsilon\mathbf{d_{m}^{top}}\] (81) 3.
Propagate the error to the bottom layer according to the following equation: \[\mathbf{d_{n}^{bottom}}=\sum_{m}(\mathbf{\bar{w}_{nm}}\mathbf{d_{m}^{top}})\] (82) Note that Equations (80) and (82) use quaternion products. For an activation layer, the error is propagated to the bottom layer according to the following equation: \[\mathbf{d_{n}^{bottom}}=\mathbf{d_{n}}^{top}\odot f^{\prime}(\mathbf{x_{n}}) \tag{83}\] where \(\odot\) is the Hadamard or Schur product (component-wise). Adopting the same definition of the chain rule, but a different definition of convolution, we have the work of Zhu _et al._ [24], who apply the two-sided convolution and a polar representation of the quaternion weights. Since their model applies rotation and scaling on the input features, the quaternion gradient for their model is simplified to a rotation transformation over the same axis but with a reversed angle. A general model of this approach, with learnable arbitrary axes, is presented by [61]. Therefore, the backpropagation algorithm is similar to the one presented before, but Equation (80) changes accordingly [58, 57, 61]: \[\frac{\epsilon}{\|\mathbf{w_{nm}}\|}\left\{\frac{\mathbf{d_{m}^{top}}\cdot( \mathbf{w_{nm}}\mathbf{x_{n}}\mathbf{\bar{w}_{nm}})}{\|\mathbf{w_{nm}}\|^{2}} \mathbf{w_{nm}}-2\mathbf{d_{m}^{top}}\mathbf{w_{nm}}\mathbf{\bar{x}_{n}}\right\} \tag{84}\] and Equation (82) changes to [58, 57, 61]: \[\mathbf{d_{n}^{bottom}}=\sum_{m}\frac{\mathbf{\bar{w}_{nm}}\mathbf{d_{m}^{top}} \mathbf{w_{nm}}}{\|\mathbf{w_{nm}}\|} \tag{85}\] From the point of view of supervised learning, the problem of training a QCNN can be stated as an optimization one: \[\underset{\mathbf{w_{1}},\dots,\mathbf{w_{n}}}{\mathrm{argmin}}\ L(\mathbf{w_{1 }},\dots,\mathbf{w_{n}}) \tag{86}\] where \(L\) is the loss function, and \(\mathbf{w_{1}},\dots,\mathbf{w_{n}}\) are all the quaternion weights of the network. Besides the problem of computing derivatives of non-analytic quaternion activation functions, another problem is that algorithms such as gradient descent and backpropagation can be trapped in local minima. Therefore, QCNNs could be trained with alternative methods that do not rely on computing quaternion derivatives, such as evolutionary algorithms, ant colony optimization, particle swarm optimization, etc. [112]. In this case, general-purpose optimization algorithms will have the same performance on average [113, 114]. _Future directions:_ Novel methods for training should be applied, for example: modern quaternion calculus techniques, such as local or GHR calculus, and meta-heuristic algorithms working in the quaternion domain. In addition, fully quaternion loss functions should be introduced.

### _Quaternion weight initialization_

At the beginning of this century, real-valued neural networks were showing the superiority of deep architectures; however, the standard gradient descent algorithm with random weight initialization performed poorly when used to train these models. To understand the reason for this behavior, Glorot and Bengio [115] established an experimental setup and observed that some activation functions can cause saturation in the top hidden layers. In addition, by theoretically analyzing the forward and backward propagation variances (expressed with respect to the input, output, and weight initialization randomness), they realized that, because of the multiplicative effect through layers, the variance of the back-propagated gradient might vanish or explode in very deep networks.
Consequently, they proposed, and experimentally validated, a weight initialization procedure that makes the variance depend on each layer and maintains the activation and back-propagated gradient variances as we move up or down the network. Their method, called _normalized initialization_, uses a scaled uniform distribution for initialization. However, one of its assumptions is that the network is in a linear regime at the initialization step; thus, this method works better for softsign units than for sigmoid or hyperbolic tangent ones [115]. Since the linearity assumption is invalid for activation functions such as ReLU, He _et al._ [96] developed an initialization method for non-linear ReLU and Parametric ReLU activation functions. Their initialization method uses a zero-mean Gaussian distribution with a specific standard deviation value, and surpasses the performance of the previous method for training extremely deep models, e.g. 30 convolution layers. In the case of classic QCNNs, the starting point is the work of Trabelsi _et al._ [49], who extended these results to deep complex networks. Similar results are presented by Gaudet and Maida [19] for deep quaternion networks, and by Parcollet _et al._ [52] for quaternion recurrent networks. These works treat quaternion weights as 4-dimensional vectors whose components are normally distributed, centered at zero, and independent. Hence, it can be proved that the weight magnitudes follow a 4DOF Rayleigh distribution [19, 52], reducing weight initialization to the selection of a single parameter, \(\sigma\), which indicates the mode of the distribution. If \(\sigma=\frac{1}{\sqrt{2(n_{in}+n_{out})}}\), we have a _quaternion normalized initialization_, which ensures that the variances of the quaternion input, output, and their gradients are the same; while \(\sigma=\frac{1}{\sqrt{2n_{in}}}\) is used for the _quaternion ReLU initialization_, where \(n_{in}\) and \(n_{out}\) are the number of neurons of the input and output layers, respectively. The method is presented in Algorithm 1. On the other hand, for geometric QCNNs, the quaternion weights represent an affine transformation composed of a scale factor, \(s\), and a rotation angle, \(\theta\). Thus, to keep the same variance of the gradients during training, Zhu _et al._ [24] proposed a simple initialization procedure using the uniform distribution, \(U[\cdot]\): \[s_{j} \sim U\left[-\frac{\sqrt{6}}{\sqrt{n_{j}+n_{j+1}}},\frac{\sqrt{6}}{ \sqrt{n_{j}+n_{j+1}}}\right]\] \[\theta \sim U\left[-\frac{\pi}{2},\frac{\pi}{2}\right] \tag{87}\] _Future directions:_ In the case of geometric QCNNs, further theoretical analysis of weight initialization techniques is required, as well as their extension to ReLU units. For equivariant QCNNs, weight initialization procedures have not been introduced or are not described in the literature. In addition, current works only consider the case of split quaternion activation functions. The propagation of variance using fully quaternion activation functions should be investigated.

## V Architectural design for applications

The previous section deals with the individual building blocks of QCNNs; the possible ways we can interconnect them give rise to numerous models. In this manner, there are three leading factors to consider when implementing applications: 1. Domain of application. Current works are primarily focused on 3 areas: vision, language, and forecasting. 2. Mapping the input data from real numbers to the quaternion domain. 3. Topology.
Based on current works, we have the following types: * ConvNets. They use convolution layers without additional tricks. * Residual. They use a shortcut connection from the input to forward blocks, allowing the network to learn with reference to the layer inputs instead of learning unreferenced functions. * Convolutional Auto-Encoders (CAE). Their aim is to reconstruct the input feature at the output. * Point-based. They focus on unordered sets of vectors, such as point-cloud input data. * Recurrent. They exploit connections that can create cycles, allowing the network to learn long-term temporal information. * Generative. They use an adversarial learning paradigm, where one player generates samples resembling the real data, and the other discriminates between real and fake data; the solution of the game is the model achieving Nash equilibrium. This section is devoted to showing how the blocks presented in Section IV can be used to construct different architectures, and we focus our analysis on the three points previously introduced. The information is organized in three subsections corresponding to the domains of application: vision, language, and forecasting. Within each subsection, the models are ordered according to the type of convolution they apply: classic, geometric, or equivariant. In addition, different networks are presented according to their topology: ConvNet, Residual, CAE, Point-based, Recurrent, or Generative. Methods for mapping input data to the quaternion domain are presented in each subsection. A graphic depiction of several models is shown in Figures 5, 6, and 7. Some architectures available in the literature are presented from a high-level abstraction point of view, showing their topologies and the blocks they use. Information about the number of quaternion kernels, size, and stride was included when available. Further implementation details, such as weight initialization, optimization methods, etc., were omitted to avoid a cumbersome presentation. Readers interested in specific models are encouraged to consult the original sources. In addition, Tables II to V provide a comparison between different QCNNs. They provide extra information, such as the datasets on which the models were tested, performance metrics, comparison to real-valued models, etc. For comparison between quaternion and real-valued models, we only report real-valued architectures that have a similar topology. The difference in the number of parameters between real-valued and quaternion-valued networks is due to the following reasons: some authors prefer to compare networks with the same number of real-valued and quaternion-valued units, in which case the quaternion-valued units contribute \(4\) times more parameters. Others compare networks with the same number of parameters: they reduce the number of quaternion units to \(25\%\) of that of the real-valued networks, or quadruple the number of real-valued units. In addition, some quaternion-valued networks use real-valued components in some parts of the network. To conclude, Subsection V-D summarizes some insights obtained from these works.

### _Vision_

For mapping data to the quaternion domain, several methods are available in the literature; in the case of color images, the most common approach is to encode the red, green, and blue channels into the imaginary parts of the quaternion: \[\mathbf{q}=0+R\hat{i}+G\hat{j}+B\hat{k}, \tag{88}\] see for example [127, 129, 97, 82, 132, 20, 122, 74, 32, 116, 55, 24].
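This encoding is a one-line operation in practice; a minimal sketch of Equation (88), with a function name of our own choosing:

```python
import numpy as np

def rgb_to_quaternion(img):
    """Encode an (H, W, 3) RGB image as in Eq. (88): q = 0 + R i + G j + B k."""
    h, w, _ = img.shape
    q = np.zeros((h, w, 4), dtype=img.dtype)
    q[..., 1:] = img   # the real part stays zero; R, G, B fill i, j, k
    return q
```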
Another method, proposed by Gaudet and Maida [19], is to use a residual block, \(BN\to ReLU\to Conv\to BN\to ReLU\to Conv\), where a shortcut connection sums the input to the output of the block. In contrast, for grayscale images, Beijing _et al._ [129] propose to map the grayscale values to the real part of the quaternion as follows: \[\mathbf{q}=grayscale+0\hat{i}+0\hat{j}+0\hat{k} \tag{89}\] In the case of PolSAR images containing scattering matrices: \[\begin{bmatrix}S_{HH}&S_{HV}\\ S_{VH}&S_{VV}\end{bmatrix} \tag{90}\] Matsumoto _et al._ [61] propose to compute a Pauli decomposition from the complex scattering matrix: \[a=\frac{S_{HH}+S_{VV}}{\sqrt{2}},b=\frac{S_{HH}-S_{VV}}{\sqrt{2}},c=\frac{S_{ HV}+S_{VH}}{\sqrt{2}}, \tag{91}\] and assign the squared magnitudes of the components \(b\), \(c\), and \(a\) to the red, green, and blue channels, respectively, of an RGB image. Thereafter, the image is mapped to the quaternion domain as usual. Alternatively, Matsumoto _et al._ [61] propose to transform the scattering matrix into 3D Stokes vectors normalized by their total power. Thereafter, each component is mapped to the imaginary parts of the quaternion. Finally, for point clouds, Shen _et al._ [23] propose to map the \((x,y,z)\) coordinates of each 3D point to a quaternion as follows: \[\mathbf{q}=0+x\hat{i}+y\hat{j}+z\hat{k}. \tag{92}\]

#### V-A1 Vision under the classic convolution paradigm

Residual networks were among the first models proposed in the QCNN literature. Gaudet and Maida [19] tested deep and shallow quaternion-valued residual nets. In both cases, they use three stages; the shallow network contains 2, 1, and 1 residual blocks in each stage, while the deep network uses 10, 9, and 9 residual blocks in each stage, see Figure 5a). These models were applied to image classification tasks using the CIFAR-10 and CIFAR-100 [50] datasets, and the KITTI Road Estimation benchmark [51]. Recently, Sfikas _et al._ [119] proposed a standard residual model for keyword spotting in handwritten manuscripts, see Figure 5c). Their model is applied to the PIOP-DAS [120] and the GRPOLY-DB [121] datasets, containing digitized documents written in modern Greek. A different type of model is ConvNets. The first proposal came from Yin _et al._ [55], who implemented a basic QCNN model, see Figure 6f). Since quaternion models extract \(4\) times more features, they propose to use an attention module, whose purpose is to filter out redundant features. These models were applied to the image classification problem on the CIFAR-10 dataset [50]. In addition, they propose a similar model for double JPEG compression detection on the UCID dataset [56]. Thereafter, Jin _et al._ [122] propose to include a deformable layer, leading to a Deformable Quaternion Gabor CNN (DQG-CNN), see Figure 6h). They apply it to a facial expression recognition task using the Oulu-CASIA [123], MMI [124], and SFEW [125] datasets. Beijing _et al._ [127] propose an architecture based on quaternion versions of Fully Convolutional Networks (FCN) [139]. Their model is applied to the color image splicing localization problem, and tested on the CASIA v1.0, CASIA v2.0 [128], and Columbia color DVMM [130] datasets. The basis of their architecture is the Quaternion-valued Fully Convolutional Network (QFCN), see Figure 6d).
Then, the final model is composed of three QFCNs working in parallel, each with different up-sampling layers: the first network does not have extra connections, the second network has a shortcut connection fusing the results of the fifth pooling layer with the output of the last layer, while the third network combines the results of the third and fourth pooling layers with the output of the last layer. In addition, each network uses a super-pixel-enhanced pairwise Conditional Random Field module to improve the results from the QCNN. This fusion of the outputs allows working with different scales of image contents [140]. Thereafter, Beijing _et al._ [129] propose the quaternion-valued two-stream Region-CNN. They extend the real-valued RGB-N [141] to the quaternion domain, improve it for pixel-level processing, and implement two extra modules: an Attention Region Proposal Network, based on CBAM [142], for enhancing the features of important regions; and a Feature Pyramid Network, based on a quaternion-valued ResNet, to extract multi-scale features. Their results improved the ones previously published in [127]. Because of the complexity of these models, it is recommended to consult them directly in [127, 129]. In the case of CAEs, whose aim is to reconstruct the input feature at the output, Parcollet _et al._ [32] propose a quaternion convolutional encoder-decoder (QCAE), see Figure 7a), and test it on the KODAK PhotoCD dataset [54]. Another type of CAE is the variational autoencoder, which estimates the probabilistic relationship between the input and latent spaces. The quaternion-valued version was proposed by Grassucci _et al._ [97], see Figure 7b), who applied it to reconstruct and generate faces from the CelebFaces Attributes dataset (CelebA) [126]. A more sophisticated type of architecture is generative models. Sfikas _et al._ [116] propose a Quaternion Generative Adversarial Network for text detection of inscriptions found on Byzantine monuments [118, 117]. The generator is a U-Net-like model [143], and the discriminator is a cascade of convolution layers, see Figures 7e) and 7f); the activation function of the last layer sums the real and imaginary parts of the quaternion output to produce a real-valued output. Grassucci _et al._ [82] adapted a Spectrally Normalized GAN (SNGAN) [144, 145] to the quaternion domain, and applied it to an image-to-image translation task using the CelebA-HQ [131] and 102 Oxford Flowers [59] datasets. The model is presented in Figures 7c) and 7d). Thereafter, Grassucci _et al._ [132] proposed the quaternion-valued version of the StarGANv2 model [146]. It is composed of the generator, mapping, encoding, and discriminator networks; this model was evaluated on an image-to-image translation task using the CelebA-HQ dataset [131]. Because of the complexity of the model, it is recommended to consult it directly in [132].

#### V-A2 Vision under the geometric paradigm

The only residual model lying in this paradigm is the one of Hongo _et al._ [20], who proposed a residual model based on ResNet34 [147], see Figure 5a). This is applied to the image classification task using the CIFAR-10 dataset [50]. Regarding ConvNet models, we have the work of Zhu _et al._ [24], who proposed shallow QCNN and quaternion VGG-S [148] models for image classification problems. These models are shown in Figures 6b) and 6c); the former was tested on the CIFAR-10 dataset [50], and the latter on the 102 Oxford flower dataset [59].
Hongo _et al._[20] proposed a different QCNN model, see Figure 6g), and tested it on the image classification task using the CIFAR-10 dataset [50]. Moreover, Matsumoto _et al._[61] propose QCNN models for classifying pixels of PolSAR images. This type of image contains additional experimental features given in complex scattering matrices. For their experiments, they labeled each pixel of two images in one of 4 classes: water, grass, forest, or town. Then, they converted the complex scattering matrices into PolSAR pseudocolor features, or into normalized Stokes vectors. By testing similar models under these two different representations, they found that the classification results largely depend on the input features. The proposed model is shown in Figure 6j). In the context of CAEs, Zhu _et al._[24] propose a U-Net-like encoder-decoder network [143] for the color image denoising problem. The model was tested on a denoising task on images of the 102 Oxford flower dataset [59], and on a subset of the COCO dataset [60]; see details of the model in [149, 143, 150, 24].

#### V-A3 Vision under the equivariant paradigm

Inspired by the PointNet model [151] for processing point clouds, Shen _et al._[23] modified it by exchanging all its layers for Rotation Equivariant Quaternion Modules, and removed its Spatial Transformer module since it discards rotation information. The rotation equivariant properties of the modules are evaluated on the ShapeNet dataset [152]. They experimentally proved that point clouds reconstructed using the synthesized quaternion features had the same orientations as point clouds generated by directly rotating the original point cloud. In addition, they modified other models, i.e., PointNet++ [153], DGCNN [154], and PointConv [155], by replacing their components with equivariant counterparts, and tested them on a 3D shape classification task on the ModelNet40 [62] and 3D MNIST [63] datasets. Because of the variety of components and interconnections, the reader is directed to [151, 153, 23, 154, 155] for the details of these models.

### _Language_

For language applications, all works that have been proposed are ConvNets. Under the classic paradigm, Parcollet _et al._[22] propose a Connectionist Temporal Classification CTC-QCNN model, see Figure 6a), tested on a phoneme recognition task with the TIMIT dataset [53]. The mapping of input signals to the quaternion domain is achieved by transforming the raw audio into 40-dimensional log mel-filter-bank coefficients with deltas, delta-deltas, and energy terms, and arranging the resulting vector into the components of an acoustic quaternion signal: \[\mathbf{q}(\mathbf{f},\mathbf{t})=0+e(f,t)\hat{i}+\frac{\partial e(f,t)}{ \partial t}\hat{j}+\frac{\partial^{2}e(f,t)}{\partial t^{2}}\hat{k}. \tag{93}\] Another problem related to language is joint 3D Sound Event Localization and Detection (SELD); solving this task can be helpful for activity recognition, assisting hearing-impaired people, and other applications. The SELD problem consists of simultaneously solving the Sound Event Detection (SED) problem, i.e., detecting the temporal activity of each sound event in a set of overlapping sound events and associating a label with it, and the Sound Localization problem, which consists of estimating the spatial localization trajectory of each sound. The latter task can be simplified to determining the orientation of a sound source with respect to the microphone, which is called Direction-of-Arrival (DOA) estimation.
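As an aside, the acoustic quaternion signal of Eq. (93) can be assembled from standard audio features. Below is a minimal sketch assuming the librosa library; it omits the energy terms of the full pipeline, and the function name and defaults are our own choices.

```python
import numpy as np
import librosa

def acoustic_quaternion(audio, sr, n_mels=40):
    """Eq. (93): real part 0, imaginary parts carry the log
    mel-filter-bank energies e(f, t) and their first and second
    temporal derivatives (deltas and delta-deltas)."""
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    e = librosa.power_to_db(mel)            # log mel energies e(f, t)
    d1 = librosa.feature.delta(e)           # d e / d t
    d2 = librosa.feature.delta(e, order=2)  # d^2 e / d t^2
    # Shape (4, n_mels, n_frames): one slice per quaternion component.
    return np.stack([np.zeros_like(e), e, d1, d2])
```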
For this problem, also following a classical approach, Comminiello _et al._[133] propose a quaternion-valued recurrent network (QSELD-net), based on the real-valued model of Adavanne _et al._[156]. The model has a first processing stage, whose output is processed in parallel by two branches: the first one performs a multi-label classification task (SED), while the second one performs a multi-output regression task (DOA estimation). The model is shown in Figure 6e), where \(N\) is the number of sound event classes to be detected; for DOA estimation, we have three times more outputs, since they represent the \((x,y,z)\) coordinates for each sound event class. This model was tested on the Ambisonic, Anechoic and Synthetic Impulse Response (ANSYN) and the Ambisonic, Reverberant and Synthetic Impulse Response (RESYN) datasets [134], consisting of spatially located sound events in an anechoic/reverberant environment synthesized using artificial impulse responses. Each dataset comprises three subsets: no temporally overlapping sources (O1), maximum two temporally overlapping sources (O2), and maximum three temporally overlapping sources (O3). In this application, the input data is a multichannel audio signal; the real-valued SELD-net [156] as well as its quaternion-valued counterpart [133] apply the same feature extraction method: the spectrogram is computed using the \(M\)-point discrete Fourier transform to obtain a feature sequence of \(T\) frames containing the magnitude and phase components for each channel. Under the geometric paradigm, Muppidi and Radfar [74] proposed a QCNN for the emotion classification from speech task and evaluated it on the RAVDESS [135], IEMOCAP [136], and EMO-DB [137] datasets. They convert speech waveform inputs from the time domain to the frequency domain using the Fourier transform, thereafter compute the Mel-Spectrogram, and convert it to an RGB image. Finally, the RGB images are processed with the model shown in Figure 6i), where the Reset-ReLU activation function resets invalid values to the nearest point in color space.

Fig. 5: Residual QNN models. Some notation details: for convolution, parameters are presented in the format (number of kernels, size of kernel, stride); if the size of the kernel or stride is the same for each dimension, only a single number is shown, e.g. \(3\) instead of \(3\times 3\). The symbol \(+\) inside a circle means summing. Dropout and flatten procedures were omitted. All blocks represent operations in the quaternion domain.

### _Forecasting_

Neshat _et al._[138] proposed a hybrid forecasting model, composed of QCNNs and Bi-directional LSTM recurrent networks, for the prediction of short-term and long-term wind speeds. Historical meteorological wind data was collected from the Lesvos and Samothraki Greek islands located in the North Aegean Sea, and obtained from the Institute of Environmental Research and Sustainable Development (IERSD), and the National Observatory of Athens (NOA).

Fig. 6: Quaternion ConvNet models. Some notation details: for convolution, parameters are presented in the format (number of kernels, size of kernel, stride); if the size of the kernel or stride is the same for each dimension, only a single number is shown, e.g. \(3\) instead of \(3\times 3\). LRN stands for Local Response Normalization. Dropout and flatten procedures were omitted. All blocks represent operations in the quaternion domain, except those preceded by the word "real".
This work integrates classic standard QCNNs within a complete system, and shows that better results are obtained when using QCNNs than with other AI models or handcrafted techniques. Because of the lack of data, the QCNNs are not able to provide a better understanding of the data, and the results are not available. Because of the complexity of the model, it is recommended to consult the details directly from [138].

Fig. 7: Quaternion CAE and Generative models. Some notation details: for convolution, parameters are presented in the format (number of kernels, size of kernel, stride); if the size of the kernel or stride is the same for each dimension, only a single number is shown, e.g. \(3\) instead of \(3\times 3\). The symbol \(+\) inside a circle means summing, while \(\cup\) inside a circle means concatenation. TConv stands for Decoding Transposed Convolution. Dropout and flatten procedures were omitted. All blocks represent operations in the quaternion domain, except those preceded by the word "real".

### _Insights_

A fair comparison between the performance of the models and their individual blocks is difficult because most of them are applied to different problems or use different datasets; moreover, most of the quaternion-valued models are based on their real-valued counterparts, and there are very few works based on incremental improvement over previous quaternion models. However, from the works reviewed, the following insights can be established:

1. Quaternion-valued models achieve, at least, comparable results to their real-valued counterparts, and with a lower number of parameters. As was stated by [119]: "\(\ldots\)an operation that is written as \(y=Wx\), where \(y\in\mathbb{R}^{4K}\), \(x\in\mathbb{R}^{4L}\), and \(W\in\mathbb{R}^{4K\times 4L}\) is thus mapped to a quaternionic operation \(\mathbf{y}=\mathbf{W}\mathbf{x}\), where \(\mathbf{y}\in\mathbb{H}^{K}\), \(\mathbf{x}\in\mathbb{H}^{L}\) and \(\mathbf{W}\in\mathbb{H}^{K\times L}\). Parameter vector \(\mathbf{W}\) only contains \(4\times K\times L\) parameters compared to \(4\times 4\times K\times L\) of \(W\), hence we have a \(4\)x saving." In addition, some works report faster convergence in the training stage (see the code sketch after this list for the parameter count).
2. For the image classification task, the CIFAR-10 dataset has become the standard benchmark. Although the classification errors reported in different works are not standardized (for example, some authors use data-augmentation techniques), the best reported result is from Gaudet and Maida [19], who use a classic approach and a deep residual model. In addition, residual models present the best results for image classification tasks.
3. Attention modules filter out redundant features, and their use can improve the performance of the network [55].
4. Variance Quaternion Batch Normalization (VQBN) experimentally outperforms Split Batch Normalization [55].
5. The integration of quaternion Gabor filters and convolution layers enhances the abilities of the model to capture features in the aspects of spatial localization, orientation selectivity, and spatial frequency selectivity [122, 157, 158].
6. Quaternion generative models demonstrate better abilities to learn the properties of the color space, and generate RGB images with more defined objects than real-valued models [82, 32].

Finally, Table VI presents information about the available source code. It only includes source code published by the authors of the papers; other implementations can be found on the popular website _paperswithcode_[159].
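The parameter saving of insight 1 is easy to verify in code. Below is a hypothetical PyTorch sketch of a quaternion fully connected layer built from the Hamilton product, under one common sign convention; it stores four real \(K\times L\) matrices, i.e. \(4KL\) parameters, where the equivalent real layer would store \(16KL\).

```python
import torch
import torch.nn as nn

class QuaternionLinear(nn.Module):
    """Quaternion linear layer y = W (x) x via the Hamilton product.
    Stores 4 real matrices of size (out_q, in_q): 4*K*L parameters,
    versus 4K x 4L = 16*K*L for a real layer of the same width."""
    def __init__(self, in_q, out_q):
        super().__init__()
        self.r = nn.Parameter(torch.randn(out_q, in_q) * 0.1)
        self.i = nn.Parameter(torch.randn(out_q, in_q) * 0.1)
        self.j = nn.Parameter(torch.randn(out_q, in_q) * 0.1)
        self.k = nn.Parameter(torch.randn(out_q, in_q) * 0.1)

    def forward(self, x):
        # x: (batch, 4, in_q); split into quaternion components.
        xr, xi, xj, xk = x.unbind(dim=1)
        # Hamilton product of the weight quaternion with the input.
        yr = xr @ self.r.T - xi @ self.i.T - xj @ self.j.T - xk @ self.k.T
        yi = xi @ self.r.T + xr @ self.i.T + xk @ self.j.T - xj @ self.k.T
        yj = xj @ self.r.T - xk @ self.i.T + xr @ self.j.T + xi @ self.k.T
        yk = xk @ self.r.T + xj @ self.i.T - xi @ self.j.T + xr @ self.k.T
        return torch.stack([yr, yi, yj, yk], dim=1)
```

As a quick check, `QuaternionLinear(32, 16)` holds \(4\times 16\times 32=2048\) weights, whereas a real `nn.Linear(128, 64, bias=False)` of the same input/output width holds \(8192\).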
## VI Further discussion

Section IV presented each individual block, along with a discussion of current issues, knowledge gaps, and future work for improvement. Here, we discuss future directions of research and open questions for any model of QCNNs, independently of its components. Moreover, some topics involve any type of quaternion-valued deep learning model.

### _On proper comparisons_

When looking at the different models that have been proposed, one of the questions that arises is: is it fair to compare a real-valued architecture with a quaternion-valued one using the same topology and number of parameters? Note that in the first case, the optimization of the parameters occurs in a Euclidean \(4\)-dimensional space, while in the second case, the optimization occurs in the quaternion space, which can be connected to geometric spaces different from the Euclidean one. From this observation, we could argue that a more reasonable comparison is: an optimal real-valued network vs. the optimal quaternion-valued network, even though they have different topologies, connections, and numbers of weights. Now, it is clear that a single real-valued convolution layer cannot capture interchannel relationships, but could a real-valued multilayer or recurrent architecture capture interchannel relationships without the use of Hamilton products? When are Hamilton products _really_ needed? Something that could shed light on these questions is the reflection of Sfikas _et al._[119], who believe that the effectiveness of quaternion networks is because of "navigation on a much compact parameter space during learning" [119], as well as an extra parameter-sharing trait.

### _Mapping from data to quaternions_

Another question that arises is: how do we find an optimal mapping from the input data to the quaternion domain? Common sense suggests that finding a suitable mapping and connecting with the right geometry could substantially improve the performance of quaternion networks; however, theoretical and experimental work is required in this direction. For example, for images, we can adopt different color models with a direct geometric relation to quaternion space, and point clouds as well as deformable models can avoid algebraic singularities when the quaternion representation is used. Moreover, models for language tasks are in their infancy, and novel methods that take advantage of the quaternion space should be proposed for signal processing, speech recognition, and other tasks.

### _Extension to hypercomplex and geometric algebras_

Current QCNNs rely on processing 4-dimensional input data, which is mapped to the quaternion domain. If the input data have fewer than \(4\) channels, a common solution is to apply zero-padding on some channels, or to define a mapping to a \(4\)-dimensional space. In contrast, for more than \(4\) channels, there are several alternatives. The first one is to map the input data to an \(n\)-dimensional space, where \(n\) is a multiple of four. Then, we can process the input data by defining a quaternion kernel for each 4-channel input, see Figure 4. Alternatively, one can apply an extension of the quaternion convolution to the octonions, sedenions, or generalized hypercomplex numbers [67]. Moreover, Geometric algebras [69, 160] can be used to generalize hypercomplex convolution to general \(n\)-dimensional spaces, not restricted to multiples of \(4\), and to connect with different geometries. A partial approach in this direction is to parametrize the hypercomplex convolution [70, 71].
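A minimal sketch of this parametrization, formalized in Eqs. (94)-(95) just below, is given here for a fully connected layer rather than a convolution, for brevity; the class and parameter names and the initialization scale are our own choices.

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """Parametrized hypercomplex layer: W = sum_i A_i (kron) F_i.
    n = 2 mimics the complex case, n = 4 the quaternion case; both
    the algebra matrices A_i and the filters F_i are learned."""
    def __init__(self, n, in_features, out_features):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.n = n
        self.A = nn.Parameter(torch.randn(n, n, n))
        self.F = nn.Parameter(
            torch.randn(n, out_features // n, in_features // n) * 0.1)

    def forward(self, x):
        # Assemble the full (out_features, in_features) weight matrix
        # from the sum of Kronecker products, then apply it.
        W = sum(torch.kron(self.A[i], self.F[i]) for i in range(self.n))
        return x @ W.T
```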
Let \(W\) be a convolution kernel of size \(k\times k\), \(x\) an \(s\)-dimensional input, and \(y\) a \(d\)-dimensional output; then: \[y=W*x. \tag{94}\] Thus, the kernel is decomposed into the sum of Kronecker products: \[y=\sum_{i=1}^{n}A_{i}\otimes F_{i}*x, \tag{95}\] where \(A_{i}\in\mathbb{R}^{n\times n}\), with \(i=1,\ldots,n\), are the matrices containing the algebra multiplication rules, and \(F_{i}\in\mathbb{R}^{\frac{d}{n}\times\frac{s}{n}\times k\times k}\) are the filters that compose the final weight matrix. The parameter \(n\) defines the dimension of the algebra; then, for \(n=2\) we are working in the complex domain, while a value \(n=4\) leads us to the quaternion space. Matrices \(A_{i}\) and \(F_{i}\) are learned during training.

### _Quaternions on the frequency domain_

A natural implementation of QCNNs in the frequency domain would be to compute the Quaternion Fourier transform [161, 25, 162] of the input and kernels, and multiply them in the frequency domain. Moreover, the computation of the convolution could be accelerated using Fast QFT algorithms [163]. To date, no works following this approach have been published. However, using a bio-inspired approach, Moya-Sanchez _et al._[158] proposed a monogenic convolution layer. A monogenic signal [164], \(I_{M}\), is a mapping of the input signal which simultaneously encodes local space and frequency characteristics: \[I_{M}=I^{\prime}+I_{1}\hat{i}+I_{2}\hat{j}, \tag{96}\] where \(I_{1}\) and \(I_{2}\) are the Riesz transforms of the input in the \(x\) and \(y\) directions. In [158], the monogenic signal is computed in the Fourier domain, \(I^{\prime}\) is computed as a convolution of the input with quaternion log-Gabor filters with learnable parameters, and local phase and orientation are computed to achieve contrast invariance. In addition, their model provides crucial sensitivity to orientations, resembling properties of the V1 cortex layer.

### _Might the classic, geometric, and equivariant models be special cases of a unified general model?_

To answer this question, let us recall that a quaternion is an element of a 4-dimensional space, and unitary versors together with the Hamilton product are isomorphic to the group SO(4). Thus, it is our point of view that the classic model, using the four components, is the most general one. By dividing the product as a sandwiching product and using an equivalent polar representation, a generalized version of the geometric model could be obtained (not a Euclidean or affine, but a 3D projective model); in addition, for the equivariant model, the kernel is reduced merely to its real components. However, the connection to invariant theory should be investigated for each of these models, so that a stratified organization in the light of geometry might be achieved. For example, if we link a unified general model to a particular geometry and its invariant properties, we could go down the hierarchy of geometries by setting restrictions on the general model, until we obtain the rotation equivariant model or the others.

## VII Conclusions

We have collected the substantial majority of the ideas that have been published on QCNNs in recent years, and we presented a comprehensive guide for their application. In particular, since the convolution layer is the core component of this type of model, we presented a sound organization of QCNNs based on the definition of convolution; accordingly, we proposed three classifications: classic, geometric, and rotation equivariant.
For other components, we presented the purpose of each block, the key problem to solve in its design, knowledge gaps, if any, and ideas for future improvement. In addition, a review of the models by application domain and topology type was presented, including available source code. Further ideas for model design were also discussed. Finally, most of the ideas that have been implemented are extensions of the work on real-valued CNNs or complex CNNs to the quaternion domain [165, 49], or adaptations from quaternion neural network models [166]. Further work is required in developing novel ideas and exploiting the particularities of the quaternion representation and its connection to geometry, topology, functional analysis, or invariant theory. This paper presents, in an organized manner, the current advances in the development of QCNNs, in the hope that it serves as a starting point for subsequent research, as well as for those interested in implementing applications.

## Acknowledgments

G.A.G. received a Postdoctoral Fellowship _Estancias Posdoctorales por Mexico_ from CONACYT. C.G. acknowledges support from UNAM-PAPIIT (IN107919, IV100120, IN105122) and from the PASPA program from UNAM-DGAPA.
2307.11454
Structure-Aware Code Vulnerability Analysis With Graph Neural Networks
This study explores the effectiveness of graph neural networks (GNNs) for vulnerability detection in software code, utilizing a real-world dataset of Java vulnerability-fixing commits. The dataset's structure, based on the number of modified methods in each commit, offers a natural partition that facilitates diverse investigative scenarios. The primary focus is to evaluate the general applicability of GNNs in identifying vulnerable code segments and distinguishing these from their fixed versions, as well as from random non-vulnerable code. Through a series of experiments, the research addresses key questions about the suitability of different configurations and subsets of data in enhancing the prediction accuracy of GNN models. Experiments indicate that certain model configurations, such as the pruning of specific graph elements and the exclusion of certain types of code representation, significantly improve performance. Additionally, the study highlights the importance of including random data in training to optimize the detection capabilities of GNNs.
Ravil Mussabayev
2023-07-21T09:35:29Z
http://arxiv.org/abs/2307.11454v2
# Dissecting Code Vulnerabilities: Insights from C++ and Java Vulnerability Analysis with ReVeal Model

###### Abstract.

This study presents an analysis conducted on a real-world dataset of Java vulnerability-fixing commits. The dataset consists of commits with varying numbers of modified methods, leading to a natural partitioning based on the number of changed functions. The research aims to address several key questions. Firstly, the study investigates the optimal parameter selection for ReVeal, a state-of-the-art model, in order to achieve its best performance. Secondly, it explores the contributions of different parts of the Java dataset towards vulnerability detection. Lastly, the study evaluates the model's performance in separating close-to-vulnerable methods (vulnerable methods and their fixed versions) from randomly selected safe code, as well as the finer separation of vulnerable methods from their fixed versions within the set of close-to-vulnerable methods. The research employs a series of experiments to answer these questions and derive meaningful insights.

vulnerability detection, cybersecurity, graph neural networks

Ravil Mussabayev. 2023. Dissecting Code Vulnerabilities: Insights from C++ and Java Vulnerability Analysis with ReVeal Model. In _Proceedings of ACM Conference (Conference'17)_. ACM, New York, NY, USA, 6 pages. [https://doi.org/10.1145/nmnmn.nmnmn](https://doi.org/10.1145/nmnmn.nmnmn)

## 1. Introduction

Code vulnerability detection is a critical challenge in software security that has significant implications for both individuals and organizations (Bartos et al., 2013). As software systems grow increasingly complex and interconnected, the presence of vulnerabilities poses serious threats, including potential breaches, data leaks, and compromised user privacy. Detecting code vulnerabilities is essential to proactively identify and remediate security flaws before they can be exploited by malicious actors. However, manual inspection of code for vulnerabilities is time-consuming, error-prone, and impractical for large-scale codebases. Therefore, developing automated methods, such as machine learning models, for accurately and efficiently identifying code vulnerabilities is of paramount importance to enhance software security and protect against potential risks. One of the state-of-the-art models for vulnerability detection is ReVeal (Bartos et al., 2013). It consists of three main modules: a gated graph neural network (GGNN), a SMOTE resampling module, and a representation learning block. For a pictorial representation of the architecture, see Figure 1. The authors collected a new dataset of C++ vulnerabilities from the Linux Debian Kernel and the Chromium projects. In each of the security patches, they annotated the previous versions of all changed functions (i.e., the versions prior to the patch) as "vulnerable" and the fixed versions of all changed functions (i.e., the versions after the patch) as "clean". Additionally, other functions that were not involved in the patch (i.e., those that remained unchanged) are all annotated as "clean". From now on, this dataset will be referred to as the "ReVeal" dataset. In (Bartos et al., 2013), extensive experiments with other vulnerability detection models present in the literature showed their acute inadequacy when tested on the ReVeal dataset. The ReVeal dataset stands out from other vulnerability detection benchmarks available in the literature.
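As an aside, the role of the SMOTE module in this pipeline is plain class rebalancing of the learned graph embeddings. A minimal sketch, assuming the imbalanced-learn library and synthetic placeholder data (the 200-dimensional embedding size matches the configuration reported later):

```python
import numpy as np
from imblearn.over_sampling import SMOTE

# Placeholder GGNN graph embeddings and labels (1 = vulnerable).
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 200)).astype(np.float32)
labels = (rng.random(1000) < 0.08).astype(int)  # ~8% positives

# SMOTE synthesizes minority-class points by interpolating between
# nearest neighbors, so the downstream representation-learning
# classifier sees balanced classes.
emb_bal, labels_bal = SMOTE(random_state=0).fit_resample(emb, labels)
print(emb.shape, "->", emb_bal.shape)
```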
The ReVeal dataset consists of real imbalanced data annotated by human developers. Other datasets were mainly synthetic or semi-synthetic, annotated by static analysers or unsupervised techniques. The misleadingly high scores achieved by other models can be explained by the low quality of their test datasets. More concretely, the authors of (Bartos et al., 2013) point at the following limitations of other existing approaches: (a) data duplication, (b) not handling data imbalance, (c) not learning semantic information, (d) lacking class separability. In this study, we would like to reproduce the results of (Bartos et al., 2013) and answer the following research question:

**Research Question 1**. How would one optimize the parameters in the following dimensions to achieve the best possible performance of the ReVeal model on the ReVeal dataset?

1. Include the SMOTE and representation learning modules or only use a single GGNN block;
2. Include information about AST edges in the input graphs or only use DDG and CFG edges;
3. Include the full graph, which can be too large and detailed, or use its pruned version at operator nodes instead;
4. Balance the training data by downsampling the majority class or keep the original class ratio.

We also investigate the performance of the ReVeal model on Java vulnerability detection data. We collected a large dataset of 865 Java vulnerability-fixing commits across a wide variety of CWE vulnerability types. In this work, we restrict ourselves to the problem of method-level vulnerability detection. Each commit can involve a different number of changed methods. This induces a natural partition of the dataset with respect to the number of changed functions in a commit. Statistically, there are many more commits where more than one function has changed. Thus, a crucial problem arises in this setting. In a vulnerability-fixing commit where more than one function has changed, we cannot ensure that all the changes are related to fixing the underlying vulnerability. Thus, not all matched pairs of functions involved in such a commit can be labelled as fixing the vulnerability. To address the above-mentioned issue, we compile and answer a list of research questions. To formulate these questions, we introduce new notation. We create a sequence of sets with the following structure:

\[D_{k}=P_{1}\cup P_{2}\cup P_{3}=\{(f,f^{\prime})\in C\mid C\text{ is 1-strict}\}\cup\{(f,f^{\prime})\in C\mid C\text{ is }k\text{-strict}\}\cup\{f\mid f\text{ is random \& safe}\} \tag{1}\]

where \(C\) is a vulnerability-fixing commit, and \((f,f^{\prime})\) denotes the pair of an original function \(f\) with its changed version \(f^{\prime}\). We say that a vulnerability-fixing commit \(C\) is \(k\)-strict if it contains exactly \(k\) changed pairs of functions. We also decompose the original problem into two independent tasks:

* Task \(T_{1}\): separating, within the set of potentially vulnerable methods, the vulnerable methods (active vulnerable) from their fixed versions (passive vulnerable);
* Task \(T_{2}\): separating, within the set of all methods, the potentially vulnerable methods from random safe code.

Then, the zero-day vulnerability detection task is a composition of these two tasks:

\[T_{0}=T_{1}\circ T_{2}\]

Our hypothesis is that task \(T_{1}\) is much more difficult than task \(T_{2}\) for state-of-the-art models.
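To make the notation concrete, here is a minimal sketch of how the parts of \(D_{k}\) in Eq. (1) could be assembled; the data structures and names are illustrative, not taken from our actual pipeline:

```python
def build_Dk(commits, k, random_safe):
    """Assemble the parts of D_k from Eq. (1).
    `commits`: list of commits, each a list of (f, f_fixed)
    changed-method pairs; `random_safe`: list of unchanged methods."""
    P1 = [p for c in commits if len(c) == 1 for p in c]  # 1-strict
    P2 = [p for c in commits if len(c) == k for p in c]  # k-strict
    P3 = list(random_safe)                               # random & safe
    return P1, P2, P3
```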
If we answer this question in the positive, then task \(T_{1}\) is a bottleneck and there is a need to develop better models that are specifically tailored to tackle \(T_{1}\). Thus, we have the following list of research questions:

**Research Question 2**. Is \(P_{i}\) useful? (\(i=1,2,3\))

**Research Question 3**. How difficult is task \(T_{1}\)?

**Research Question 4**. How difficult is task \(T_{2}\)?

**Research Question 5**. Does random code appear in \(P_{2}\) as \(k\) increases?

**Research Question 6**. How does the size of \(P_{3}\) affect the overall performance?

## 2. Experimental setup

We adapted the source code provided by the authors of (Bartner et al., 2017). The final source code of our project can be found here: [link to GitHub]. The experiments were conducted on a computer with the following configuration: Intel(R) Xeon(R) Gold 6151 CPU @ 3.00GHz with 32 cores, NVIDIA Tesla V100 PCIE GPU with 16 GB, and 126 GB RAM. The computing platform had the following specifications: Python 3.10.7, NumPy 1.23.3, DGL 1.0.1+cu117. Joern version 1.1.1495 was used to parse source code into a graph representation. Throughout the experiments, we used the default choice of hyperparameters: learning rate 0.0001, weight decay 0.001, graph embedding size 200, batch size 128, maximum number of batches 10000, number of gradient accumulation steps 8, maximum patience of 50 for C++ data and 20 for Java data.

Figure 1. Architecture of the ReVeal model

## 3. Experiments with C++ data

To answer the first research question, we used the original C++ method-level vulnerability dataset from (Bauer et al., 2017). After parsing, we obtained the following statistics of the input graphs: 11788 train graphs (956 vulnerable), 1667 validation graphs (133 vulnerable), and 3385 test graphs (286 vulnerable). To test each dimension of RQ 1, we performed 10 trials of training the model. In each trial, the dataset was split into train, validation, and test parts anew. The results can be found in Table 1.

### Excluding SMOTE and RL

The model without SMOTE and RL achieves the worst performance with respect to the F1 score and the best performance with respect to the ROC AUC measure.

### AST edges

The model performs slightly better without including AST edges. This is likely due to including too much fine-grained information or too many nodes. The model becomes more likely to overfit to irrelevant features in the input and fail to generalize.

### Pruning

The experiments also showed that the model performs better with pruning at operator nodes. Pruning makes a graph simpler and less entangled for the model to understand.

### Downsampling

Table 1 shows that the model performs worse when balancing the train set by downsampling non-vulnerable methods. We think that a rough balancing of the train part impacts the score negatively since it turns off SMOTE.

## 4. Experiments with Java data

To answer the rest of the research questions, we trained and tested the model on different parts of the Java dataset (1): \(P_{1}\), \(P_{2}\), and \(P_{3}\). In particular, we varied \(k\) in the range from 1 to 14. Then, we plotted the resulting ROC AUC scores against \(k\), and drew conclusions based on the observed dynamics. To make set \(P_{3}\) independent of \(k\), we fixed it to be the complement of \(P_{1}\). That is, \(P_{3}\) consisted of functions that remained unchanged in the commits where only one function was changed.
Also, in order to balance the different parts involved in training and testing, we restricted the size of \(P_{3}\):

\[|P_{3}|=|P_{1}|+|P_{2}|\]

During the data cleaning phase, we ensured that in each experiment, \(P_{3}\) did not contain functions that are contained in \(P_{1}\cup P_{2}\). Also, we removed any duplicate functions from each of the parts \(P_{1}\), \(P_{2}\), and \(P_{3}\), and removed methods contained in the training data from the test data. Table 2 shows the distribution of the collected Java methods after stratification by \(k\) and cleaning the data.

### Research question 2

In this research question, we investigate training on different combinations of sets \(P_{1}\), \(P_{2}\), and \(P_{3}\), and testing on \(P_{1}\cup P_{2}\cup P_{3}\) or \(P_{1}\cup P_{3}\), which is a stricter test. The results can be found in Figures 2 and 3. Figures 2 and 3 allow us to conclude that if the test set includes part \(P_{3}\), then the inclusion of part \(P_{3}\) into training is critical to achieving high performance. Overall, parts \(P_{2}\) and \(P_{3}\) contribute the most to the prediction, as seen from the red and blue lines in Figures 2 and 3. Also, in Figure 3, we see a slight degradation of performance corresponding to training on \(P_{2}\cup P_{3}\) (red line) as \(k\) increases. This might indicate an increasing amount of random noise in \(P_{2}\) as \(k\) increases, partially answering RQ 5 in the positive.

\begin{table} \begin{tabular}{c|c c} Configuration & Median F1 & Median ROC AUC \\ \hline Baseline & 27.29 & 0.696 \\ Without SMOTE \& RL & 21.45 & 0.730 \\ Without AST edges & 27.65 & 0.706 \\ With pruning & 30.83 & 0.724 \\ Majority downsampling & 26.61 & 0.678 \\ \end{tabular} \end{table} Table 1. Results of experiments for research question 1

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{P1} & \multicolumn{2}{c|}{P2} \\ \hline k & train & test & train & test \\ \hline 1 & 410 (205) & 135 (68) & 0 (0) & 0 (0) \\ 2 & 399 (200) & 145 (73) & 343 (171) & 122 (61) \\ 3 & 416 (208) & 132 (66) & 696 (347) & 228 (113) \\ 4 & 414 (207) & 128 (65) & 960 (479) & 346 (172) \\ 5 & 415 (210) & 129 (64) & 1159 (575) & 433 (217) \\ 6 & 414 (208) & 131 (65) & 1393 (692) & 506 (254) \\ 7 & 421 (212) & 120 (60) & 1583 (789) & 596 (296) \\ 8 & 394 (197) & 151 (75) & 1870 (938) & 572 (284) \\ 9 & 410 (207) & 135 (67) & 2027 (1012) & 664 (330) \\ 10 & 411 (206) & 131 (66) & 2195 (1089) & 632 (314) \\ 11 & 399 (199) & 150 (75) & 2439 (1215) & 708 (353) \\ 12 & 400 (202) & 144 (72) & 2545 (1270) & 769 (383) \\ 13 & 397 (200) & 143 (72) & 2619 (1303) & 872 (434) \\ 14 & 409 (204) & 136 (68) & 2853 (1421) & 845 (419) \\ \hline \end{tabular} \end{table} Table 2. Statistics of collected Java methods after stratification by \(k\) and cleaning. Each cell has the format \(N_{1}(N_{2})\), where \(N_{1}\) is the total number of methods and \(N_{2}\) is the number of vulnerable ones.

### Research question 3

To assess the quality of the model on task \(T_{1}\), we change the test set to \(P_{1}\). This is the strictest possible test set, reflecting the ability of the model to distinguish small differences between very similar code. The resulting plot can be found in Figure 4. As can be inferred from this figure, the ReVeal model is unable to perform much better than random guessing on this test data, irrespective of the training configuration. However, a slightly better result is achieved by the training data consisting of the set \(P_{1}\cup P_{2}\).
Also, training on sets \(P_{1}\) and \(P_{2}\) separately shows performance better than the random strategy for most values of \(k\). Including data from \(P_{3}\) into training misleads the model since the test data does not have instances from the distribution of \(P_{3}\). This experiment shows that the ReVeal model heavily underperforms on task \(T_{1}\).

### Research question 4

In this experiment, the training data consisted of instances from set \(P_{1}\cup P_{2}\) marked as the positive class, and instances from part \(P_{3}\) marked as the negative class. Likewise, the test data comprised instances from part \(P_{1}\) marked as the positive class, and instances from part \(P_{3}\) marked as the negative class. The results of this setting reflect the ability of the model to differentiate close-to-vulnerable code from random safe code. The resulting ROC AUC values for this experiment can be found in Figure 5. From Figure 5, we see that the model performs fairly well on task \(T_{2}\), achieving the best results in the training regime involving \(P_{1}\cup P_{3}\).

### Research question 5

To answer this research question, we train on all possible combinations of parts involving \(P_{2}\). Then, we test the trained models on \(P_{1}\cup P_{2}\). The results can be seen in Figure 6. This plot does not allow us to definitively conclude anything about RQ 5. The model does not perform adequately for any training regime, and does not exhibit any trends that can be detected across different values of \(k\).

Figure 2. ReVeal model trained on parts, tested on \(P_{1}\cup P_{2}\cup P_{3}\).

Figure 3. ReVeal model trained on parts, tested on \(P_{1}\cup P_{3}\). This is a stricter test for task \(T\) than the one in Figure 2.

Figure 4. ReVeal model trained on parts, tested on \(P_{1}\). This is a strict test for task \(T_{1}\).
RQ 1 showed that the model performs better with pruning and without including AST edges, while the absence of SMOTE and RL, and downsampling techniques led to a decrease in performance. The analysis of RQ 2 and RQ 3 indicated that the inclusion of part \(P_{3}\) in the training set is critical for achieving high performance, especially when the test set also includes part \(P_{3}\). However, the model underperformed on task \(T_{1}\), suggesting Figure 5. ReVeal model trained on parts, tested on \(P_{1}\) (all marked positive) \(\cup\)\(P_{3}\) (all marked negative). This is a strict test for task \(T_{2}\). Figure 6. ReVeal model trained on parts, tested on \(P_{1}\cup P_{2}\). Figure 7. ReVeal model trained on \(P_{1}\cup P_{2}\cup P_{3}\), tested on \(P_{1}\cup P_{2}\cup P_{3}\) with varying size of \(P_{3}\) in the train data. that distinguishing small differences between very similar code remains challenging. Our findings from RQ 4 and RQ 5 provided insights on the model's performance in differentiating close-to-vulnerable code from random safe code and the possible effect of random noise in \(P_{2}\) as \(k\) increases. In RQ 6, we found that the size of \(P_{3}\) in the training data did not significantly affect the model's performance. While our research offers substantial insights, there are a few threats to its validity. These include the limited number of trials for each choice of \(k\) and \(|P_{3}|\), and the possibility that the available training data may be insufficient to make solid conclusions. In conclusion, our research provides valuable insights into the performance of the ReVeal model in identifying vulnerable code. However, more research is required to further refine these models and address the identified challenges, particularly in distinguishing small differences in code and handling imbalanced datasets. Future work could involve investigating other machine learning models, as well as improving data augmentation techniques to enhance the model's performance.
2306.02729
Gibbs Sampling the Posterior of Neural Networks
In this paper, we study sampling from a posterior derived from a neural network. We propose a new probabilistic model consisting of adding noise at every pre- and post-activation in the network, arguing that the resulting posterior can be sampled using an efficient Gibbs sampler. For small models, the Gibbs sampler attains similar performances as the state-of-the-art Markov chain Monte Carlo (MCMC) methods, such as the Hamiltonian Monte Carlo (HMC) or the Metropolis adjusted Langevin algorithm (MALA), both on real and synthetic data. By framing our analysis in the teacher-student setting, we introduce a thermalization criterion that allows us to detect when an algorithm, when run on data with synthetic labels, fails to sample from the posterior. The criterion is based on the fact that in the teacher-student setting we can initialize an algorithm directly at equilibrium.
Giovanni Piccioli, Emanuele Troiani, Lenka Zdeborová
2023-06-05T09:26:38Z
http://arxiv.org/abs/2306.02729v2
# Gibbs Sampling the Posterior of Neural Networks

###### Abstract

In this paper, we study sampling from a posterior derived from a neural network. We propose a new probabilistic model consisting of adding noise at every pre- and post-activation in the network, arguing that the resulting posterior can be sampled using an efficient Gibbs sampler. The Gibbs sampler attains similar performances as the state-of-the-art Markov chain Monte Carlo (MCMC) methods, such as the Hamiltonian Monte Carlo or the Metropolis adjusted Langevin algorithm, both on real and synthetic data. By framing our analysis in the teacher-student setting, we introduce a thermalization criterion that allows us to detect when an algorithm, when run on data with synthetic labels, fails to sample from the posterior. The criterion is based on the fact that in the teacher-student setting we can initialize an algorithm directly at equilibrium.

## I Introduction

Neural networks are functions parametrized by the so-called weights, mapping inputs to outputs. Neural networks are commonly trained by seeking values of weights that minimize a prescribed loss function. In some contexts, however, we want to sample from an associated probability distribution of the weights. Such sampling is at the basis of Bayesian deep learning [50; 54]. It is used in Bayesian uncertainty estimation [25; 31; 46] or to evaluate Bayes-optimal performance in toy models where the data-generative process is postulated [3]. In this paper, we focus on studying the algorithms and properties of such sampling. In Bayesian learning, given training inputs \(X\), one implicitly assumes the labels to be generated according to the stochastic process \(y\sim P(y|X,W)\), where \(W\) are the weights of the network, on which a prior \(P(W|X)\) is placed. At its heart, Bayesian deep learning consists of sampling from the posterior probability of the parameters: \[P(W|X,y)=\frac{P(y|W,X)P(W|X)}{P(y|X)}, \tag{1}\] where we simply used Bayes' theorem. This sampling problem is, in general, NP-hard [10], with many techniques being developed to sample from (1). In this paper, we look at iterative algorithms that, in the large time limit, return samples from the posterior distribution (1). Most available algorithms for this task are based on MCMC methods. We focus on the following two questions: * **Q1:** Do we have a method to evaluate whether the algorithms have thermalized, i.e., if the samples returned by the MCMC plausibly come from the posterior (1)? * **Q2:** Which combinations of sampling algorithm and form of the posterior distribution achieve the best performance in terms of ability to thermalize while reaching a low test error? The first question addresses the long-standing problem of estimating an MCMC's thermalization time, that is, the time at which the MCMC starts sampling well from the posterior. We propose a criterion for thermalization based on the teacher-student setting. The criterion can only be reliably applied to synthetic labels generated by a teacher network. After a comparison with other thermalization heuristics, we argue that the teacher-student criterion is more discriminative, in that it provides a higher lower bound to the thermalization time.
The second question explores the interplay between the form of the posterior and the sampling algorithm: since there is more than one way of translating a network architecture into a probabilistic process, we exploit this freedom to introduce a generative process in which noise is added at every pre- and post-activation of the network. We then design a Gibbs sampler tailored to this posterior and compare it to other commonly used MCMCs.

### Related literature

When running an MCMC one has to wait a certain number of iterations for the algorithm to start sampling from the desired probability measure. We will refer to this burn-in period as the thermalization time or \(T_{\text{therm}}\)[41]. Samples before \(T_{\text{therm}}\) should therefore be discarded. Estimating \(T_{\text{therm}}\) is thus of great practical importance, as it is crucial to know how long the MCMC should be run. More formally, we initialize the MCMC with a \(\delta\)-peaked distribution as its initial condition \(P_{t=0}(x)=\delta_{x,x_{0}}\) where \(x_{0}\in\mathcal{X}\) is a starting state of our liking. Then, the probability from which the MCMC samples evolves according to \(P_{t+1}(x)=\sum_{x^{\prime}\in\mathcal{X}}P_{t}(x^{\prime})P(x_{t+1}=x|x_{t}=x^{\prime})\)[15], where the kernel \(P(x_{t+1}=x^{\prime}|x_{t}=x)\) contains the transition probabilities of the MCMC. If this kernel satisfies the detailed balance condition with respect to a probability \(\pi(\cdot)\), and the chain is ergodic, then for \(t\to\infty\), \(P_{t}(x)\to\pi(x)\), and thus the MCMC will sample from \(\pi\). In this setting, the thermalization time represents the time after which \(P_{t}\) is sufficiently close to \(\pi\). The actual thermalization threshold will change depending on the metric we use to measure the distance between \(P_{t}\) and \(\pi\). Most commonly used distances between distributions \(P\) and \(Q\) can be written in the form \(d(P,Q)=\sup_{\varphi\in\mathcal{F}}\big{\{}\sum_{x\in\mathcal{X}}\varphi(x)(P(x)-Q(x))\big{\}}\), with \(\mathcal{F}\) a function space that determines how strict the convergence is. One of the strictest metrics is the total variation distance \(d_{TV}\), obtained by picking \(\mathcal{F}_{TV}=\{\varphi:\varphi\) measurable, \(||\varphi||_{\infty}\leq 1\}\). This distance gives rise to the definition of mixing time as the smallest \(t\) for which \(d_{TV}(P_{t},\pi)\leq 1/4\). Analytical bounds on the mixing time can be obtained using the transition kernel's spectral properties [27]. These results, however, are difficult to apply in practice, and the convergence metric might be too strict for practical purposes. A looser definition is that of weak convergence, which corresponds to a distance \(d_{w}\) given by \(\mathcal{F}_{w}=\{\varphi\text{ s.t. }||\varphi||_{\infty}+||\varphi||_{Lip}\leq 1\}\)[48]. For \(d_{w}(P_{t},\pi)\) to be small, all Lipschitz bounded functions must have a similar expectation under \(P_{t}\) and \(\pi\). This requirement can still be too strict: for example, in the statistical physics literature [41][57][34] one only includes in \(\mathcal{F}\) functions \(\varphi\) whose output concentrates in the thermodynamic limit, giving rise to \(d_{\text{statphys}}\). We prefer to leave the definition of \(T_{\text{therm}}\) a bit vague since, from a practical point of view, none of the above convergence metrics can so far be computed efficiently. Practitioners have instead resorted to a number of heuristics, which provide lower bounds to the thermalization time.
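As a toy illustration of these definitions, the following sketch evolves a \(\delta\)-peaked \(P_{t}\) under an arbitrary ergodic kernel and tracks its distance to the stationary distribution; with \(\mathcal{F}_{TV}\) the supremum is attained at \(\varphi=\mathrm{sign}(P-Q)\), giving \(\sum_{x}|P(x)-Q(x)|\). The kernel here is an arbitrary example, not one from the paper.

```python
import numpy as np

def d_tv(P, Q):
    """Variational form with ||phi||_inf <= 1: the maximizer is
    phi = sign(P - Q), so the distance is sum_x |P(x) - Q(x)|."""
    return np.abs(P - Q).sum()

# Toy 3-state ergodic transition kernel (rows sum to 1).
K = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
vals, vecs = np.linalg.eig(K.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()                    # stationary distribution of K

P = np.array([1.0, 0.0, 0.0])     # delta-peaked initial condition
for t in range(20):               # P_{t+1}(x) = sum_x' P_t(x') K(x', x)
    P = P @ K
print(d_tv(P, pi))                # small once the chain has mixed
```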
These heuristics usually revolve around two ideas. We first have methods involving multiple chains [5; 9; 16; 41]. In different flavours, all these criteria rely on comparing multiple chains with different initializations. Once all the chains have thermalized, samples from different chains should be indistinguishable. Another approach consists of finding functions with known mean under the posterior and verifying whether the empirical mean is also close to its predicted value [9; 13; 19; 20; 58]. The proposed method for detecting thermalization relies instead on the teacher-student framework [57]. Another field we connect with is that of Bayesian learning of neural networks. For an introduction see [18; 24; 32; 50] and references therein. We shall first examine the probabilistic models for Bayesian learning of neural networks and then review the algorithms that are commonly used to sample. In order to specify the posterior (1), one needs to pick the likelihood (or data generating process) \(P(y|X,W)\). The most common model, employed in the great majority of works [23; 39; 47; 52; 54] is \(P(y|X,W)=\frac{1}{Z}\exp\left(-\frac{1}{2\Delta}\sum_{\mu}\ell(y^{\mu},f(X^{\mu},W))\right)\), where \(f(\cdot,W)\) is the neural network function, \(\ell\) is the loss function, \(\mu\) is the sample index, and \(\Delta\) is a temperature parameter. As an alternative, other works have introduced the "stochastic feedforward networks" (SFNN), where noise is added at every layer's pre-activation [37; 42; 56; 45]. Outside of the Bayesian learning of neural networks literature, models where intermediate pre- or post-activations are added as dynamical variables have also been considered in the predictive coding literature [2; 35; 36; 53]. Once a probabilistic model has been chosen, the goal is to obtain samples from the corresponding posterior. A first solution consists of approximating the posterior with a simpler distribution, which is easier to sample. This is the strategy followed by variational inference methods [26; 30; 31; 45; 49; 55]. Although variational inference yields fast algorithms, it is often based on uncontrolled approximations. Another category of approximate methods is that of "altered MCMCs", i.e., Monte Carlo algorithms which have been modified to be faster at the price of no longer sampling from the posterior [7; 28; 29; 38; 40; 59]. An example of these algorithms is the discretized Langevin dynamics [51]. Restricting the sampling to a subset of the parameters has also been considered in [44] as an alternative training technique. Finally, we have exact sampling methods: these are iterative algorithms that, in the large time limit, are guaranteed to return samples from the posterior distribution. Algorithms for exact sampling mostly rely on MCMC methods. The most popular ones are Hamiltonian Monte Carlo (HMC) [12; 39], the Metropolis adjusted Langevin algorithm (MALA) [4; 33; 43], and the No U-turn sampler (NUTS) [22]. Within the field of Bayesian learning in neural networks, HMC is the most commonly used algorithm [23; 52; 54]. The proposed Gibbs sampler is inspired by the work of [1], and later [14; 21], that introduced the idea of augmenting the variable space in the context of logistic and multinomial regression.

## II Teacher-Student Thermalization Criterion

In this section we explain how to use the teacher-student setting to build a thermalization test for sampling algorithms. The test gives a lower bound on the thermalization time.
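In schematic form, the test described in detail in the paragraphs that follow can be sketched as below; `prior_sample`, `generate_labels`, `mcmc_step`, and the test function `g` are placeholders for the model at hand, not code from our released implementation.

```python
import numpy as np

def teacher_student_check(X, prior_sample, generate_labels, mcmc_step,
                          g, W_start, T=10_000):
    """Run the teacher-student thermalization test and return the two
    time series of the test function g, to be inspected visually."""
    W_star = prior_sample()           # teacher weights ~ P(W)
    y = generate_labels(X, W_star)    # labels ~ P(y | X, W_star)
    # Chain 1: informed initialization, thermalized already at t = 0.
    # Chain 2: the initialization whose burn-in we want to bound.
    W1, W2 = W_star.copy(), W_start.copy()
    g1, g2 = [], []
    for t in range(T):
        W1 = mcmc_step(W1, X, y)
        W2 = mcmc_step(W2, X, y)
        g1.append(g(W1))
        g2.append(g(W2))
    # Lower bound on T_therm: the time where g2 starts oscillating
    # around the stationary mean of g1.
    return np.array(g1), np.array(g2)
```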
We start by stating the main limitation of this approach: the criterion can only be applied to synthetic datasets. In other words, the training labels \(y\) must be generated by a teacher network, using the following procedure. We first pick the training inputs arbitrarily and organize them into an \(n\times d\) matrix \(X\). Each row \(X^{\mu}\) of the matrix is a different training sample, for a total of \(n\) samples. We then sample the teacher weights \(W_{\star}\) from the prior \(P(W)\). Finally, we generate the noisy training labels as \(y^{\mu}\sim P(y|X^{\mu},W_{\star})\). Our goal is to draw samples from the posterior \(P(W|D)\), where \(D=\{(X^{\mu},y^{\mu})\}_{\mu\in[n]}\) indicates the training set. Suppose we want to have a lower bound on the thermalization time of an MCMC initialized at a particular configuration \(W_{\text{start}}\). The method consists of running two parallel chains \(W_{1}(t)\) and \(W_{2}(t)\). For the first chain, we use an informed initialization, meaning we initialize the chain on the teacher weights, thus setting \(W_{1}(t=0)=W_{\star}\). For the second chain we set \(W_{2}(t=0)=W_{\text{start}}\). To determine convergence we consider a test function \(g(W)\). We first run the informed initialization: after some time \(T_{1}\), \(g(W_{1}(t))\) will become stationary. Using samples collected after \(T_{1}\) we compute the expected value of \(g\) (let us call it \(\overline{g}\)). Next, we run the second chain. The lower bound to the thermalization time of \(W_{2}(t)\) is the time where \(g(W_{2}(t))\) becomes stationary and starts oscillating around \(\overline{g}\). In practice, this time is determined by visually inspecting the time series of \(g\) under the two initializations, and observing when the two merge. At first glance this method does not seem too different from [16] or [5], whose method (described in Appendix A) relies on multiple chains with different initializations. There is, however, a crucial difference: the informed initialization is already thermalized at \(t=0\). To see this, recall that the pair \(W_{\star},D\) was obtained by first sampling \(W_{\star}\) from \(P(W)\) and then sampling \(D\) from \(P(D|W_{\star})\). This implies that \(W_{\star},D\) is a sample from the joint distribution \(P(W,D)\). Rewriting \(P(W,D)=P(W|D)P(D)\), we see that \(W_{\star}\) is also typical under the posterior distribution. In conclusion, the power of the teacher-student setting lies in the fact that it gives us access to one sample from the posterior, namely \(W_{\star}\). It then becomes easier to check whether a second chain is sampling from the posterior by comparing the value of a test function. In contrast, other methods comparing chains with different initializations have no guarantee that if the two chains "merge" then the MCMC is sampling from the posterior, since it is possible that both chains are trapped together far from equilibrium.

## III The Intermediate Noise Model

In this section, we introduce a new probabilistic model for Bayesian learning of neural networks. We start by reviewing the classical formulation of Bayesian learning of neural networks. Let \(f(x,W)\) be the neural network function, with \(W\) its parameters, and \(x\in\mathbb{R}^{d}\) the input vector.
Given a training set \(X\in\mathbb{R}^{n\times d},y\in\mathbb{R}^{n}\) we aim to sample from \[P(W|X,y)=\frac{1}{P(y|X)}P(W)\exp\left[-\frac{1}{2\Delta}\sum_{\mu=1}^{n}\ell\left(y^{\mu},f(X^{\mu},W)\right)\right], \tag{2}\] where \(\ell(\cdot,\cdot)\) is the single-sample loss function, and \(\Delta\) a temperature parameter. Notice that to derive (2) from (1), we supposed that \(P(W|X)=P(W)\), i.e., \(W\) is independent of \(X\). This is a common and widely adopted assumption in the Bayesian learning literature, and we shall make it in what follows. Most works in the field of Bayesian learning of neural networks attempt to sample from (2). This form of the posterior corresponds to the implicit assumption that the labels were generated by \[y^{\mu}\sim P_{\text{out}}(y|f(X^{\mu},W)),\text{ with }P_{\text{out}}(y|z)\propto e^{-\frac{1}{2\Delta}\ell(y,z)} \tag{3}\] where \(W\) are some weights sampled from the prior. We propose an alternative generative model based on the idea of introducing small Gaussian noise at every pre- and post-activation in the network. The motivation behind this process lies in the fact that we are able to sample the resulting posterior efficiently using a Gibbs sampling scheme. Consider the case where \(f(\cdot,W)\) is a multilayer perceptron with \(L\) layers, without biases and with activation function \(\sigma\left(\cdot\right)\). Hence we have \(f(x,W)=W^{(L)}\sigma\left(W^{(L-1)}\sigma\left(\ldots\sigma(W^{(1)}x)\ldots\right)\right)\). Here \(W^{(\ell)}\in\mathbb{R}^{d_{\ell+1}\times d_{\ell}}\) indicates the weights of layer \(\ell\in[L]\), with \(d_{\ell}\) the width of the layer. We define the pre-activations \(Z^{(\ell)}\in\mathbb{R}^{n\times d_{\ell}}\) and post-activations \(X^{(\ell)}\in\mathbb{R}^{n\times d_{\ell}}\) of layer \(\ell\). Using Bayes' theorem and applying the chain rule to the likelihood we obtain \[P(\{X^{(\ell)}\}_{\ell=2}^{L},\{Z^{(\ell)}\}_{\ell=2}^{L},\{W^{(\ell)}\}_{\ell=1}^{L}|X,y)=\frac{1}{P(y|X)}P(\{W^{(\ell)}\}_{\ell=1}^{L})P(y,\{X^{(\ell)}\}_{\ell=2}^{L},\{Z^{(\ell)}\}_{\ell=2}^{L}|\{W^{(\ell)}\}_{\ell=1}^{L},X)=\frac{1}{P(y|X)}P(\{W^{(\ell)}\}_{\ell=1}^{L})\left[\prod_{\ell=2}^{L}P(Z^{(\ell+1)}|X^{(\ell)},W^{(\ell)})P(X^{(\ell)}|Z^{(\ell)})\right]P(Z^{(2)}|X,W^{(1)}) \tag{4}\] with the constraint \(Z^{(L+1)}=y\). The conditional probabilities are assumed to be \[P(Z^{(\ell+1)}|X^{(\ell)},W^{(\ell)})=\prod_{\mu=1}^{n}\prod_{\alpha=1}^{d_{\ell+1}}\mathcal{N}\left(Z_{\alpha}^{(\ell+1)\mu}\Big{|}W_{\alpha}^{(\ell)T}X^{(\ell)\mu},\Delta_{Z}^{(\ell+1)}\right) \tag{5}\] \[P(X^{(\ell)}|Z^{(\ell)})=\prod_{\mu=1}^{n}\prod_{i=1}^{d_{\ell}}\mathcal{N}\left(X_{i}^{(\ell)\mu}\Big{|}\sigma(Z_{i}^{(\ell)\mu}),\Delta_{X}^{(\ell)}\right), \tag{6}\] where \(\{\Delta_{Z}^{(\ell)}\}_{\ell=2}^{L+1}\), \(\{\Delta_{X}^{(\ell)}\}_{\ell=2}^{L}\) control the amount of noise added at each pre- and post-activation. This structure of the posterior implicitly assumes that the pre- and post-activations are iteratively generated as \[Z^{(\ell+1)}=X^{(\ell)}W^{(\ell)T}+\epsilon_{Z}^{(\ell+1)},\quad X^{(\ell+1)}=\sigma(Z^{(\ell+1)})+\epsilon_{X}^{(\ell+1)},\quad\ell\in[L]. \tag{7}\] \(X^{(1)}=X\in\mathbb{R}^{n\times d}\) are the inputs, \(Z^{(L+1)}=y\in\mathbb{R}^{n}\) represent the labels and \(\epsilon_{Z}^{(\ell)},\epsilon_{X}^{(\ell)}\) are \(n\times d_{\ell}\) matrices of i.i.d. respectively \(\mathcal{N}(0,\Delta_{Z}^{(\ell)})\) and \(\mathcal{N}(0,\Delta_{X}^{(\ell)})\) elements.
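A minimal NumPy sketch of the generative process (7) is given below; the absolute value activation is one of the activations treated later (see Appendix E), the function name is ours, and the variance lists are indexed from the first layer.

```python
import numpy as np

def intermediate_noise_labels(X, Ws, dZ, dX, sigma=np.abs, rng=None):
    """Run the intermediate noise process of Eq. (7).
    Ws: weight matrices W^(l) of shape (d_{l+1}, d_l), l = 1..L;
    dZ: L pre-activation noise variances (Delta_Z^(2)..Delta_Z^(L+1));
    dX: L-1 post-activation variances (Delta_X^(2)..Delta_X^(L)).
    Returns the noisy labels Z^(L+1), which play the role of y."""
    rng = rng or np.random.default_rng()
    Xl = X                                           # X^(1) = inputs
    for l, W in enumerate(Ws):
        Z = Xl @ W.T + np.sqrt(dZ[l]) * rng.standard_normal(
            (Xl.shape[0], W.shape[0]))               # Z^(l+1)
        if l == len(Ws) - 1:
            return Z                                  # Z^(L+1) = y
        Xl = sigma(Z) + np.sqrt(dX[l]) * rng.standard_normal(Z.shape)
```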
We will refer to (7) as the intermediate noise generative process. If we manage to sample from the posterior (4), which has been augmented with the variables \(\{X^{(\ell)}\}_{\ell=2}^{L},\{Z^{(\ell)}\}_{\ell=2}^{L}\), then we can draw samples from \(P(W|X,y)\), just by discarding the additional variables. A drawback of this posterior is that one has to keep in memory all the pre- and post-activations in addition to the weights. We remark that the intermediate noise generative process admits the classical generative process (3) and the SFNN generative model as special cases. Setting all \(\Delta\)s (and hence all \(\epsilon\)) to zero in (7) except for \(\Delta_{Z}^{(L+1)}\) indeed gives back the classical generative process (3), with \(\ell(y,z)=(y-z)^{2}\) and \(\Delta=\Delta_{Z}^{(L+1)}\). Instead, setting \(\Delta_{X}^{(\ell)}=0\) for all \(\ell\), but keeping the noise in the pre-activations, gives the SFNN model.

## IV Gibbs sampler for neural networks

Gibbs sampling [17] is an MCMC algorithm that updates each variable in sequence by sampling it from its conditional distribution. For a probability measure with three variables \(P(\theta_{1},\theta_{2},\theta_{3})\), one step of Gibbs sampling can be described as follows. Starting from the configuration \(\theta_{1}(t),\theta_{2}(t),\theta_{3}(t)\), we first draw \(\theta_{1}(t+1)\sim P(\theta_{1}|\theta_{2}(t),\theta_{3}(t))\), then we draw \(\theta_{2}(t+1)\sim P(\theta_{2}|\theta_{1}(t+1),\theta_{3}(t))\) and finally \(\theta_{3}(t+1)\sim P(\theta_{3}|\theta_{1}(t+1),\theta_{2}(t+1))\). Repeating this procedure, one can prove [6] that, in the limit of many iterations (\(t\gg 1\)) and provided that the chain is ergodic, the samples \((\theta_{1}(t),\theta_{2}(t),\theta_{3}(t))\) will come from \(P(\theta_{1},\theta_{2},\theta_{3})\). We now present a Gibbs sampler for the intermediate noise posterior (4), with Gaussian prior. More specifically, the prior on \(W^{(\ell)}\) is i.i.d. \(\mathcal{N}(0,1/\lambda_{W}^{(\ell)})\) over the weights' coordinates. The full derivation of the algorithm is reported in Appendix C; here we sketch the main steps. To define the sampler we need to compute the distributions of each of \(X^{(\ell)},Z^{(\ell)},W^{(\ell)}\) conditioned on all other variables (here indicated by "All"). For \(X^{(\ell)}\) the conditional distribution factorizes over samples \(\mu\in[n]\), leading to \[P(X^{(\ell)\mu}|\text{All})=P(X^{(\ell)\mu}|Z^{(\ell)\mu},W^{(\ell)},Z^{(\ell+1)\mu})=\mathcal{N}(X^{(\ell)\mu}|(m^{(X_{\ell})})^{\mu},\Sigma^{(X_{\ell})}). \tag{8}\] This is a multivariate Gaussian with covariance \(\Sigma^{(X_{\ell})}=\left(\frac{1}{\Delta_{Z}^{(\ell+1)}}W^{(\ell)T}W^{(\ell)}+\frac{1}{\Delta_{X}^{(\ell)}}\mathbb{I}_{d_{\ell}}\right)^{-1}\), and mean \((m^{(X_{\ell})})^{\mu}=\Sigma^{(X_{\ell})}\left(\frac{1}{\Delta_{X}^{(\ell)}}\sigma(Z^{(\ell)\mu})+\frac{1}{\Delta_{Z}^{(\ell+1)}}W^{(\ell)T}Z^{(\ell+1)\mu}\right).\) Considering \(W^{(\ell)}\), we exploit that the conditional factorizes over the rows \(\alpha\in[d_{\ell+1}]\): \[P(W_{\alpha}^{(\ell)}|\text{All})=P(W_{\alpha}^{(\ell)}|X^{(\ell)},Z_{\alpha}^{(\ell+1)})=\mathcal{N}(W_{\alpha}^{(\ell)}|(m_{W}^{(\ell)})_{\alpha},\Sigma_{W}^{(\ell)}), \tag{9}\] with \(\Sigma_{W}^{(\ell)}=\left(\frac{1}{\Delta_{Z}^{(\ell+1)}}X^{(\ell)T}X^{(\ell)}+\lambda_{W}^{(\ell)}\mathbb{I}_{d_{\ell}}\right)^{-1}\), and \((m_{W}^{(\ell)})_{\alpha}=\frac{1}{\Delta_{Z}^{(\ell+1)}}\Sigma_{W}^{(\ell)}X^{(\ell)T}Z_{\alpha}^{(\ell+1)}\).
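As an illustration of the conditional updates (8)-(9), the sketch below draws one row \(W_{\alpha}^{(\ell)}\) from its Gaussian conditional. The variable names are ours, and sampling via a Cholesky factor is one standard choice rather than necessarily the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_W_row(X_l, Z_next_col, delta_Z, lam_W):
    """Draw W_alpha^{(l)} ~ N(m, Sigma) as in eq. (9).

    X_l        : (n, d_l) post-activations of layer l.
    Z_next_col : (n,) column alpha of the pre-activations Z^{(l+1)}.
    delta_Z    : pre-activation noise variance Delta_Z^{(l+1)}.
    lam_W      : prior inverse variance lambda_W^{(l)}.
    """
    d_l = X_l.shape[1]
    precision = X_l.T @ X_l / delta_Z + lam_W * np.eye(d_l)  # Sigma_W^{-1}
    Sigma = np.linalg.inv(precision)
    m = Sigma @ (X_l.T @ Z_next_col) / delta_Z
    # Sample from N(m, Sigma) using the Cholesky factor of Sigma.
    Lchol = np.linalg.cholesky(Sigma)
    return m + Lchol @ rng.standard_normal(d_l)
```

Sampling \(X^{(\ell)\mu}\) from (8) proceeds analogously with the corresponding mean and covariance; only the pre-activations \(Z\) require the non-Gaussian one-dimensional sampler discussed next.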
For \(Z^{(\ell+1)}\) the conditional factorizes both over samples and over coordinates. We have \[P(Z_{\alpha}^{(\ell+1)\mu}|\text{All})=P(Z_{\alpha}^{(\ell+1)\mu}|X_{\alpha}^{(\ell+1)\mu},W_{\alpha}^{(\ell)},X^{(\ell)\mu})\propto\exp\left[-\frac{1}{2\Delta_{Z}^{(\ell+1)}}\left(Z_{\alpha}^{(\ell+1)\mu}-W_{\alpha}^{(\ell)T}X^{(\ell)\mu}\right)^{2}-\frac{1}{2\Delta_{X}^{(\ell+1)}}\left(\sigma(Z_{\alpha}^{(\ell+1)\mu})-X_{\alpha}^{(\ell+1)\mu}\right)^{2}\right]. \tag{10}\] Notice that the conditional distributions of \(W_{\alpha}^{(\ell)}\) and \(X^{(\ell)\mu}\) are multivariate Gaussians and can be easily sampled. Instead, \(Z_{\alpha}^{(\ell)\mu}\) is a one-dimensional random variable with a non-Gaussian distribution. Appendix E provides recipes for sampling it for sign, ReLU and absolute value activations.

```
Input: training inputs X, training labels y, noise variances {Δ_Z^(ℓ)}_{ℓ=2}^{L+1}, {Δ_X^(ℓ)}_{ℓ=2}^{L},
       prior inverse variances {λ_W^(ℓ)}_{ℓ=1}^{L}, initial condition {X^(ℓ)}_{ℓ=2}^{L}, {W^(ℓ)}_{ℓ=1}^{L},
       {Z^(ℓ)}_{ℓ=2}^{L}, length of the simulation t_max
Output: a sequence S of samples
X^(1) ← X
Z^(L+1) ← y
S ← [({W^(ℓ)}_{ℓ=1}^{L}, {X^(ℓ)}_{ℓ=2}^{L}, {Z^(ℓ)}_{ℓ=2}^{L})]
for t = 1, …, t_max do
    W^(1) ∼ P(W^(1) | X, Z^(2))                      ▷ See eq. (9)
    for ℓ = 2, …, L do
        X^(ℓ) ∼ P(X^(ℓ) | Z^(ℓ), W^(ℓ), Z^(ℓ+1))     ▷ See eq. (8)
        W^(ℓ) ∼ P(W^(ℓ) | X^(ℓ), Z^(ℓ+1))            ▷ See eq. (9)
        Z^(ℓ) ∼ P(Z^(ℓ) | X^(ℓ-1), W^(ℓ-1), X^(ℓ))   ▷ See eq. (10)
    end for
    S.append({W^(ℓ)}_{ℓ=1}^{L}, {X^(ℓ)}_{ℓ=2}^{L}, {Z^(ℓ)}_{ℓ=2}^{L})
end for
```
**Algorithm 1** Gibbs sampler for MLP

Putting all ingredients together, we obtain the Gibbs sampling algorithm, whose pseudocode is reported in Algorithm 1. The main advantages of Gibbs sampling lie in the fact that it has no hyperparameters to tune and, moreover, it is a rejection-free sampling method. In the case of MCMCs, hyperparameters are defined to be all parameters that can be changed without affecting the probability measure that the MCMC asymptotically samples. The Gibbs sampler can also be parallelized across layers: a parallelized version of Algorithm 1 is presented in Appendix D. Finally, one can also extend this algorithm to more complex architectures: Appendices F and G contain respectively the update equations for biases and convolutional networks. We release an implementation of the Gibbs sampler at [https://github.com/SPOC-group/gibbs-sampler-neural-networks](https://github.com/SPOC-group/gibbs-sampler-neural-networks).

## V Numerical results

In this section we present numerical experiments to support our claims. We publish the code to reproduce these experiments at [https://github.com/SPOC-group/numerics-gibbs-sampling-neural-nets](https://github.com/SPOC-group/numerics-gibbs-sampling-neural-nets).

### Teacher student convergence method

In section II we proposed a thermalization criterion based on having access to an already thermalized initialization. Here we show that it is more discriminative than other commonly used heuristics. We first briefly describe these heuristics.

* **Stationarity**.
Thermalization implies stationarity since, once the MCMC has thermalized, it samples from a fixed probability measure. Therefore any observable, plotted as a function of time, should oscillate around a constant value. The converse (stationarity implies thermalization) is not true. Nevertheless, observing when a function becomes stationary gives a lower bound on \(T_{\text{therm}}\).
* **Score method** [13]. Given a probability measure \(P(x)\), we exploit the fact that \(\mathbb{E}_{x\sim P}\left[\frac{\partial\log P(x)}{\partial x}\right]=\int\frac{\partial P(x)}{\partial x}dx=0\). We then monitor the function \(\frac{\partial\log P(x)}{\partial x}\) along the dynamics. The time at which it starts fluctuating around zero is another lower bound to \(T_{\text{therm}}\).
* \(\hat{R}\) **statistic** [16]. Two (or more) MCMCs are run in parallel starting from different initializations. The within-chain variance is compared to the total variance, obtained by merging samples from both chains. Call the ratio of these variances \(\hat{R}\) (a precise definition of which is given in Appendix A). If the MCMC has thermalized, the samples from the two chains should be indistinguishable, thus \(\hat{R}\) will be close to 1. The time at which \(\hat{R}\) gets close to 1 provides yet another lower bound to the thermalization time.

We compare these methods in the case of a one-hidden-layer neural network, identical for the teacher and the student, with input dimension \(d_{1}=50\), \(d_{2}=10\) hidden units and a scalar output. This corresponds to the function \[f(x,W)=b^{(2)}+W^{(2)}\sigma\left(W^{(1)}x+b^{(1)}\right), \tag{11}\] where \(\sigma(x)=\max(0,x)\) and \(W\) indicates the collection of all parameters: \(W^{(1)}\in\mathbb{R}^{d_{2}\times d_{1}}\) and \(W^{(2)}\in\mathbb{R}^{1\times d_{2}}\). We specify the prior by setting \(\lambda_{W}^{(1)}=\lambda_{b}^{(1)}=d_{1},\lambda_{W}^{(2)}=\lambda_{b}^{(2)}=d_{2}\); the prior on the bias \(b^{(\ell)}\) is \(\mathcal{N}(0,1/\lambda_{b}^{(\ell)})\), i.i.d. over the coordinates of the bias vector. Let \(n=2084\) be the size of the training set. We pick \(n\) to be four times the number of parameters in the network, anticipating that the training set contains enough information to learn the teacher. We start by generating the matrix of training inputs \(X\in\mathbb{R}^{n\times d_{1}}\) with i.i.d. standard Gaussian entries, then we sample the teacher's weights \(W_{\star}\) from the Gaussian prior. For concreteness we set \(\Delta_{Z}^{(2)},\Delta_{X}^{(2)},\Delta_{Z}^{(3)}\) to the same value \(\Delta\) and set \(\Delta=10^{-4}\). To generate the training labels \(y\), we feed \(X\), the teacher's weights \(W_{\star}\) and \(\Delta\) into the generative process (7), adapted to also add the biases. For the test set, we first sample \(X_{\text{test}}\), with i.i.d. standard Gaussian entries. Both the test labels and the test predictions are generated in a noiseless way (i.e., just passing the inputs through the network). In this way, the test error takes the following form: test \(\text{MSE}=\frac{1}{n_{\text{test}}}\sum_{\mu=1}^{n_{\text{test}}}\left(f(X_{\text{test}}^{\mu},W_{\star})-f(X_{\text{test}}^{\mu},W)\right)^{2}.\) The full details about this set of experiments are in Appendix B. We run the Gibbs sampler on the intermediate noise posterior starting from three different initializations: informed, zero and random. Respectively, the student's variables are initialized to the teacher's counterparts, to zero, or are sampled from the prior.
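As a concrete reference for the multiple-chain heuristic, here is a minimal sketch of the standard Gelman-Rubin construction of \(\hat{R}\). The precise definition used in the paper is in its Appendix A, which is not reproduced here, so this variant is an assumption on our part.

```python
import numpy as np

def r_hat(chains):
    """Gelman-Rubin R-hat for a scalar observable.

    chains : (m, T) array, m chains of T samples each.
    """
    m, T = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    W = chain_vars.mean()                # within-chain variance
    B = T * chain_means.var(ddof=1)      # between-chain variance
    var_plus = (T - 1) / T * W + B / T   # pooled variance estimate
    return np.sqrt(var_plus / W)

# R-hat close to 1 signals that the chains are indistinguishable, which is
# necessary -- but, as argued below, not sufficient -- for thermalization.
```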
In this particular setting, the Gibbs sampler initialized at zero manages to thermalize, while the random initializations fail to do so. Two independent random initializations are shown, in order to be able to use the multiple-chains method. Figure 1 illustrates a representative result of these experiments.

Figure 1: Comparison of different thermalization measures. In the legend, next to each method we write between parentheses the initialization (or pair of initializations) the method is applied to. The circles on the \(x\) axis represent the thermalization times estimated by each method. **Left:** We compare the predictions for the thermalization time of the zero-initialized MCMC. The red \(y\) scale on the right refers uniquely to the lines in red. All the other quantities should be read on the black \(y\) scale. **Right:** We compare the predictions for the thermalization time of two chains initialized independently at random. The pink \(y\) scale refers uniquely to the pink line. All other quantities should be read on the black logarithmic scale. The randomly initialized runs fail to thermalize and their test MSEs get stuck on a plateau. However, \(\hat{R}\), whose time series on the plateau is stationary and close to 1, fails to detect this lack of thermalization.

In the left panel, we aim to find the highest lower bound to the thermalization time of the zero-initialized chain. Looking at the score method, we plot \(U=\Delta\frac{1}{d_{1}d_{2}}\sum_{i=1}^{d_{1}}\sum_{\alpha=1}^{d_{2}}\frac{\partial\log P}{\partial W_{\alpha i}^{(1)}}\), where \(P\) indicates the posterior distribution; this is the score rescaled by \(\Delta\) and averaged over the first-layer weights. In the zero-initialized chain, \(U\) starts oscillating around zero already at \(t=20\). Then we consider the \(\hat{R}\) statistic computed on the outputs of the two chains with zero and informed initializations. The criterion estimates that the zero-initialized chain has thermalized after \(t=6\times 10^{4}\), when \(\hat{R}\) approaches 1 and becomes stationary. Next, we consider the teacher-student method, with the test MSE as the test function (\(g\) in our previous discussion). According to this method, the MCMC thermalizes after the test MSE time series of the informed and zero-initialized chains merge, which happens around \(t=10^{5}\). Finally, the stationarity criterion, when applied to the test MSE or to \(\hat{R}\), gives a similar estimate for the thermalization time. The \(x\)-axis of the left plot provides a summary of this phenomenology, by placing a circle at the thermalization time estimated by each method. In summary, the teacher-student method is the most conservative, but the \(\hat{R}\)-based method is also reasonable here. The right panel of Fig. 1 then shows a representative situation where thermalization is not reached, yet the \(\hat{R}\)-based method would indicate it is. In the right panel, two randomly initialized chains, denoted by _random 1_ and _random 2_, are considered. Neither of these chains actually thermalizes; in fact, looking at the test MSE time series we see that both chains get stuck on the same plateau around MSE \(=10^{-3}\) and are unable to reach the MSE of the informed initialization. However, as soon as both chains reach the plateau, \(\hat{R}\) quickly drops to a value close to 1 and thereafter becomes stationary, mistakenly signalling thermalization.
This example exposes the problem at the heart of the multiple-chain method: the method can be fooled if the chains find themselves close to each other but far from equilibrium. Similarly, since the chains become stationary after they hit the plateau, the stationarity criterion would incorrectly predict that they have thermalized. To conclude, we have shown an example where common thermalization heuristics fail to recognize that the MCMC has not thermalized; instead, the teacher-student method detects the lack of thermalization.

### Gibbs sampler

In this section, we show that the combination of intermediate noise posterior and Gibbs sampler is effective in sampling from the posterior by comparing it to HMC, run both on the classical and intermediate noise posteriors, and to MALA, run on the classical posterior. We provide the pseudocode for these algorithms in Appendix H. For the first set of experiments, we use the same network architecture as in the previous section. The teacher weights \(W_{\star}\), as well as \(X,X_{\text{test}}\), are also sampled in the same way. The intermediate noise and the classical generative process prescribe different ways of generating the labels. However, to perform a fair comparison, we use the same dataset for all MCMCs and posteriors; thus we generate the training set in a noiseless way, i.e., setting \(y^{\mu}=f(X^{\mu},W_{\star})\). We generate 72 datasets according to this procedure, each time using independently sampled inputs and teacher's weights. The consequence of generating datasets in a noiseless way is that the noise level used to generate the data is different from the one in the MCMC, implying that the informed initialization will not exactly be a sample from the posterior. However, the noise is small enough that we did not observe any noticeable difference in the functioning of the teacher-student criterion. First, we aim to characterize how often each algorithm thermalizes when started from an uninformed initialization. Uninformed means that the network's initialization is agnostic to the teacher's weights. For several values of \(\Delta\), and for all the 72 datasets, we run the four algorithms (Gibbs, classical HMC, intermediate HMC, classical MALA) starting from informed and uninformed initializations. More information about the initializations and hyperparameters of these experiments is contained in Appendix I. The left panel of Figure 2 depicts the proportion of the 72 datasets in which the uninformed initialization thermalizes within 5 h 30 min of simulation. The \(x\)-axis is the equilibrium test MSE, i.e., the average test MSE reached by the informed initialization once it becomes stationary. When \(\Delta\), and thus the test MSE, decreases, the proportion of thermalized runs drops for all algorithms, with the Gibbs sampler attaining the highest proportion over most of the range. In the right panel, we plot the dynamics of the test error under each algorithm for a run where they all thermalize. For the same \(\Delta\)s as in this plot (respectively \(\Delta=10^{-3},4.64\times 10^{-4}\) for the classical and intermediate noise posterior), we compute the average thermalization time among the runs that thermalize. Classical HMC, MALA, Gibbs, and intermediate HMC take on average around \(130, 2700, 3200, 12500\) seconds to thermalize, respectively. This shows that classical HMC, when it thermalizes, is the fastest method, while MALA and Gibbs occupy the second and third positions, with similar times.
However, classical HMC thermalizes about 20% less often than the Gibbs sampler. Therefore, in cases where it is essential to reach equilibrium, the Gibbs sampler represents the best choice. We now move from the abstract setting of Gaussian data to more realistic inputs and architectures. As architectures we use a one-hidden-layer MLP with 12 hidden units and ReLU activations, and a simple convolutional network (CNN) with a convolutional layer, followed by average pooling, ReLU activations, and a fully connected layer. See Appendix J for a description of both models and of the experimental details. In this setting, we resort to the stationarity criterion to check for thermalization, since the teacher-student method is inapplicable. We compare the Gibbs sampler with HMC and MALA, both run on the classical posterior, picking MNIST as the dataset. Figure 3 shows the test error as a function of time for the two architectures. We choose each algorithm's \(\Delta\) such that they all reach a comparable test error at stationarity. We then compare the time it takes each algorithm to reach this error. For the MLP, all algorithms take approximately the same time to become stationary, around \(500\) s. In the CNN case, HMC and MALA reach stationarity in \(100\) s, compared to \(800\) s for Gibbs. We note, however, that for HMC and MALA to achieve these performances we had to carry out an extensive optimization over hyperparameters, thus the speed is overall comparable.

Figure 3: Gibbs on the intermediate noise posterior and HMC, MALA both on the classical posterior, compared on MNIST. **Left:** MLP with one hidden layer with 12 hidden units. **Right:** CNN network.

Figure 2: Thermalization experiments on synthetic data. **Left:** Proportion of the 72 runs that thermalize plotted against the equilibrium test MSE. **Right:** Example of the dynamics of the test MSE in a particular run where all four algorithms thermalize. In order to get a similar equilibrium test MSE in the classical and intermediate noise posteriors, we pick respectively \(\Delta=10^{-3}\) and \(\Delta=4.64\times 10^{-4}\). The transparent lines represent the informed initializations.

## VI Conclusion

In this work, we introduced the intermediate noise posterior, a probabilistic model for Bayesian learning of neural networks, along with a novel Gibbs sampler to sample from this posterior. We compared the Gibbs sampler to MALA and HMC, varying also the form of the posterior. We found that HMC and MALA on the classical posterior, and Gibbs on the intermediate noise posterior, each have their own merits and can be considered effective in sampling the high-dimensional posteriors arising from Bayesian learning of neural networks. Gibbs compares favourably to the other algorithms in terms of the ability to thermalize; moreover, no hyperparameter tuning is required, it can be applied to non-differentiable posteriors, and it can be parallelized across layers. We further proposed the teacher-student thermalization criterion: a method to obtain stringent lower bounds on the thermalization time of an MCMC within a synthetic data setting. We first provided a simple theoretical argument to justify the method and subsequently compared it to other thermalization heuristics, finding that the teacher-student criterion consistently gives the highest lower bound to \(T_{\mathrm{therm}}\).
## VII Acknowledgment We thank Lucas Clarte for introducing us to the blocked Gibbs sampler, and Christian Keup for the useful discussions on predictive coding and stochastic neural networks.
2306.05046
A Gradient-based Approach for Online Robust Deep Neural Network Training with Noisy Labels
Learning with noisy labels is an important topic for scalable training in many real-world scenarios. However, little previous research considers this problem in the online setting, where the arrival of data is streaming. In this paper, we propose a novel gradient-based approach to enable the detection of noisy labels for the online learning of model parameters, named Online Gradient-based Robust Selection (OGRS). In contrast to previous sample selection approaches for offline training, which require an estimate of the clean ratio of the dataset before each epoch of training, OGRS can automatically select clean samples by steps of gradient update from datasets with varying clean ratios without changing the parameter setting. During the training process, the OGRS method selects clean samples at each iteration and feeds the selected samples to incrementally update the model parameters. We provide a detailed theoretical analysis to demonstrate that the data selection process converges to the low-loss region of the sample space, by introducing and proving a sub-linear local Lagrangian regret for the non-convex constrained optimization problem. Experimental results show that it outperforms state-of-the-art methods in different settings.
Yifan Yang, Alec Koppel, Zheng Zhang
2023-06-08T08:57:06Z
http://arxiv.org/abs/2306.05046v1
# A Gradient-based Approach for Online Robust Deep Neural Network Training with Noisy Labels

###### Abstract

Learning with noisy labels is an important topic for scalable training in many real-world scenarios. However, little previous research considers this problem in the online setting, where the arrival of data is streaming. In this paper, we propose a novel gradient-based approach to enable the detection of noisy labels for the online learning of model parameters, named Online Gradient-based Robust Selection (OGRS). In contrast to previous sample selection approaches for offline training, which require an estimate of the clean ratio of the dataset before each epoch of training, OGRS can automatically select clean samples by steps of gradient update from datasets with varying clean ratios without changing the parameter setting. During the training process, the OGRS method selects clean samples at each iteration and feeds the selected samples to incrementally update the model parameters. We provide a detailed theoretical analysis to demonstrate that the data selection process converges to the low-loss region of the sample space, by introducing and proving a sub-linear local Lagrangian regret for the non-convex constrained optimization problem. Experimental results show that it outperforms state-of-the-art methods in different settings.

## 1 Introduction

Online learning is a widely used learning framework for streaming data in many real-world scenarios. In recent years, online training of deep neural networks (DNNs) has garnered increased attention to enable large-scale training [1, 2, 3], in the face of increasingly large datasets. Such large-scale training of DNNs, especially online DNN training, is highly sensitive to noisy labels in the datasets [4], which is more pronounced with streaming and dynamically changing online data. The noisy label problem refers to the presence of incorrect or mislabeled annotations in a training dataset. Usually, the data samples with incorrect labels are called noisy data, and the correctly labeled ones are called clean data. This issue has been identified as a common challenge in many datasets. For instance, researchers in [5] found 6% label errors in the ImageNet validation set and 10% label errors in the QuickDraw dataset. Similarly, up to 30% label errors were found in the Google Emotions dataset [6] and 37% errors in the MS COCO dataset [7]. Label errors vary across different datasets and appear with varying probabilities of occurrence in data streams at different time slots. In recent years, the robustness of training with noisy labels has been widely studied in different research areas [4, 8, 9]. Among various approaches, sample selection methods enjoy the flexibility to support any type of deep learning architecture and do not need to maintain additional neural networks. The concept of multi-round sample selection for scalable models was first introduced in [10], where the authors proposed an iterative training loss minimization (ITLM) method that leverages samples selected at the beginning of each training epoch. Building on this idea, INCV [11] employs cross-validation to detect noisy training data and remove large-loss samples. O2U-Net [12] first repeats the entire training process to collect loss statistics, then retrains the neural network from scratch only with the clean samples detected.
These works proposed different methods to estimate the clean ratio of a dataset and used sorting to filter out noisy data based on that ratio, but they all follow the same idea of detecting clean samples with a fixed pre-estimated scale parameter, which is difficult to set for streaming online datasets with changing clean ratios. In this paper, we introduce **On**line **G**radient-based **R**obust **S**election (**OGRS**), a novel gradient-based multi-iteration sample selection approach that enables the online training of DNNs under dynamically changing proportions of noisy labels. Since clean data normally produce much lower training loss than noisy data, based on the observations in [10], our proposed method capitalizes on the significant disparity between the gradients of the training loss at clean and noisy data points, which drives the data selection towards the clean region. To prevent the risk of overfitting, which may arise from the repeated selection of the same samples, we additionally propose a constraint function that mitigates the overlap of selected data. As a result, we formulate the problem as a non-convex constrained optimization problem. This structure enables our approach to dynamically adapt to varying noise proportions, thereby boosting the robustness of online DNN training against noisy labels. In the realm of non-convex constrained optimization, a critical unresolved issue is providing a theoretical guarantee for the convergence analysis. Over the past decade, gradient descent optimization methods have been widely used to solve a wide variety of problems, like the control of robotic systems [13], Bayesian inference [14], recommendation systems [15, 16] and the training of DNNs [17, 18, 19, 20, 45]. While [21] studied the constrained non-convex optimization problem using quadratic approximations, a straightforward analysis for this problem remains elusive due to the computational intractability of minimizing the standard regret in non-convex cases. To address this challenge, we introduce a new metric for non-convex constrained optimization, termed local Lagrangian regret. We conduct a detailed theoretical analysis to validate our approach and show that a constant number of update steps ensures that our method finds a balance between sample selection performance and computational expense. In the subsequent experimental evaluation, we incrementally feed data selected by the OGRS method into various online training models. These results are then benchmarked against state-of-the-art methods to demonstrate the effectiveness of our approach. In general, our main contributions are summarized below:
* We introduce a novel gradient-based sample selection approach designed to facilitate effective online DNN training under dynamically varying clean ratios.
* We define a new local Lagrangian regret for the non-convex optimization problem and propose an efficient algorithm that is specifically tailored to the sample selection problem.
* We provide a theoretical proof of the effectiveness and efficiency of our sample selection method under the newly defined regret metric.
* We conduct experiments simulating real-world online training cases and compare different sample selection methods.

## 2 Related Work

In this section, we review related work in the areas of learning with noisy labels and online DNN training.
Over the past decades, numerous deep-learning techniques have been developed to tackle the noisy label problem. These techniques are primarily grouped into five categories [4]. The first group encompasses sample selection, which includes multi-network learning and multi-round learning. Multi-network learning involves a mentor network in the case of collaborative learning and peer networks in the case of co-training. For instance, [22] trained multiple DNNs simultaneously, with updates based solely on disagreements between the different DNNs. On the other hand, MentorNet [23] employs a mentor network to guide the training of the student network. Multi-round learning, another sample selection method, refines a selected set of clean samples at the start of each epoch [10, 12]. Alongside methods using the small-loss trick [10, 12] introduced in the previous section, others improve efficiency using a single-round refinement, like [24] and [25]. These techniques do not require maintaining additional DNNs, hence providing flexibility across various model architectures. In this paper, we mainly compare our results with the ITLM method of [10], since it refines the selection set in each epoch with the small-loss trick, which makes it the most suitable related method for online robust training. Other methods, like [24], only perform a one-time selection, which is naturally unsuitable for the online setting. Additionally, certain studies have sought to design robust architectures that incorporate a noise adaptation layer at the top of the training model. Recently, such methods have been adapted to handle noisy labels [26, 27]. Webly learning, for instance, uses the confusion matrix to initialize the weights of the noise adaptation layer [27]. In [28, 29], researchers designed robust policy gradient descent methods to deal with model uncertainties. Other methodologies include robust regularization [30, 31], robust loss functions [32, 33], and closed-loop control [34]. Furthermore, it has been proven that designing a robust loss function for noisy data can approach the Bayesian optimal classifier [35]. Loss adjustment, instead of designing a new robust loss function, modifies the loss of all examples prior to the training process. While some studies have considered online DNN training [1, 36, 37] and robust optimization [38, 39], none has explored the problem of online training with noisy labels. The inherent difficulty of online robust training lies in the dynamic nature of data streams. Existing techniques are unfit for the online scenario, as it is computationally intractable to constantly adjust the parameters of traditional noisy-label training techniques, such as sample selection and robust architectures. Hence, a new gradient-based sample selection method without the need for such pre-defined parameters is introduced in this paper.

## 3 Framework and Preliminaries

In this section, we introduce the general framework for the multi-round robust training of DNNs in an online setting with time slots \(t\in[1,T]\). During the training process, the online streaming data arrives at the training set \(\mathcal{D}_{t}\) at each time slot \(t\), where \(\mathcal{D}_{t}=\{(d_{1},y_{1}),\cdots,(d_{t},y_{t})\}\) consists of a series of data pairs \((d_{t},y_{t})\). For a single data pair \((d_{t},y_{t})\), \(d_{t}\in\mathbb{R}^{d}\) denotes the \(d\)-dimensional input features, and \(y_{t}\in\mathbb{R}^{C}\) represents the corresponding label for \(d_{t}\).
Since we consider the training problem with noisy data, a proportion \(1-\phi_{t}\) of the labels in \(\mathcal{D}_{t}\) are mistaken at time slot \(t\), where \(\phi_{t}\) denotes the clean ratio.

### Problems in Directly Transferring Previous Method into Online Setting

The multi-round sample selection problem considered in this paper is a traditional problem that has been widely studied since [40], and it contains two parts: (1) how to select a clean sample set \(S_{t}\); (2) how to use \(S_{t}\) to train a neural network with parameters \(\theta_{t}\). In Fig. 1, we summarize the workflow of the multi-round sample selection problem for a single time slot \(t\), where the data pair \((d_{t},y_{t})\) arrives at \(\mathcal{D}_{t}\) and the sample selection algorithm selects the sample set \(S_{t}\) using the loss information provided by the training model, which is then used to update the model parameter \(\theta_{t}\).

Figure 1: Workflow for the OGRS framework.

In this subsection, we show that directly transferring traditional sample selection methods like ITLM [10] to an online training setting does not work well. To fit ITLM into an online training setting, the clean set \(S_{t}\) must be iteratively refined at the beginning of each time slot \(t\). Since data with mistaken labels usually incur high training loss, ITLM sorts all data samples \(d\in\mathcal{D}_{t}\) by their training loss \(\ell_{\theta_{t}}(d)\). Then, the clean set \(S_{t}\) is selected by trimming the top \(1-\hat{\phi}_{t}\) proportion of the sorted data list, where the clean ratio estimate \(\hat{\phi}_{t}\) can be obtained by techniques like cross-validation [41, 42] before starting the sample selection at each time \(t\). Since there are a total of \(t\) samples in \(\mathcal{D}_{t}\) at time slot \(t\), this process can be formulated as: \[S_{t}\leftarrow\operatorname*{arg\,min}_{S:|S|=\lfloor\hat{\phi}_{t}t\rfloor}\sum_{d_{i}\in S}\ell_{\theta_{t}}(d_{i}), \tag{1}\] To update \(\theta_{t}\) with the clean set \(S_{t}\), the following optimization problem is solved in the training process: \[\theta_{t+1}:=\operatorname*{arg\,min}_{\theta}\sum_{i=1}^{K}\ell_{\theta}(s_{i}), \tag{2}\] where \(\{s_{i}\}_{i=1}^{K}\subset S_{t}\) is a batch of \(K\) data samples stochastically sampled from the clean set \(S_{t}\). Even though this online ITLM idea may work, directly transferring the previous method to the online setting is computationally intractable, since the clean ratio must be re-estimated at the beginning of each iteration.

### Problems for Using Traditional Local Regret Metric in Our Method

For the sample selection part of our method, instead of using the sorting method of ITLM, we introduce a novel sample selection algorithm based on non-convex constrained gradient descent. To select the \(k\)-th data sample \(d_{t,k}\) of the clean sample set \(S_{t}\) at time slot \(t\), we run a constant number of gradient update steps over iterations \(i\in[1,M]\). Thus, by repeating this selection process \(K\) times, we directly obtain a set \(S_{t}=\{d_{t,1},\cdots,d_{t,K}\}\). To illustrate the idea of the local regret metric of [43], we focus on the updating process for selecting a single sample \(d_{t,k}\), which passes through a series of decisions \(d^{1},\cdots,d^{M-1},d^{M}_{t,k}\).
To derive the local regret, we gauge the average loss \(L_{t,w}(d^{i}_{t})\) over a sliding window of the most recent \(w\) time slots, which keeps track of the algorithm's performance at the current decision \(d^{i}_{t}\) at iteration \(i\). The definition of the \(w\)-local regret is established by summing up the average gradient of the local loss over a total of \(M\) rounds, as described in the following equation: \[R_{w}(M):=\sum_{i=1}^{M}\|\nabla L_{t,w}(d^{i}_{t})\|^{2}, \tag{3}\] where the averaged local loss is calculated as \(L_{t,w}(d^{i}_{t}):=\frac{1}{w}\sum_{j=0}^{w-1}\ell_{t-j}(d^{i}_{t})\). Besides the new definition of local regret, Hazan et al. [43] also proposed efficient algorithms based on the local loss and proved a sublinear local regret bound in their theoretical analysis. Nevertheless, the original local regret metric was designed for the unconstrained non-convex optimization problem. We cannot directly apply this setting to our sample selection problem, as we need to incorporate a constraint function to prevent the oversampling of certain samples.

## 4 Algorithms

As previously outlined, we have introduced the overall structure of our online robust training system designed for handling noisy labels. In this section, we delve into more detailed discussions of the two main parts of the OGRS method: the sample selection process and the model training.

### Gradient-based Sample Selection

The key component of OGRS is selecting the set of clean samples \(S_{t}\) at the beginning of each time slot \(t\). In order to obtain \(S_{t}\), we repeat the update steps for selecting a single low-loss sample \(d_{t,k}\) a total of \(K\) times, where the update steps \(d^{1},d^{2},\cdots,d^{M-1}\) towards the final choice \(d_{t,k}\) employ a modified constrained gradient descent method. Details of the gradient-based sample selection algorithm can be found in Alg. 1. To deal with the non-convex loss function, we use the local loss \(L_{t,w}(d)\) introduced in Sec. 3.2. In order to avoid repeatedly selecting the same samples, a global constraint function \(g^{i}_{t}\) is set as the difference between the total number of times a sample \(d^{i}_{t}\) has been selected and a threshold \(\zeta\): \[g^{i}_{t}(d^{i}_{t})=p^{i}_{t}(d^{i}_{t})-\zeta, \tag{4}\] where \(p^{i}_{t}(d^{i}_{t})\) indicates the total number of times that \(d^{i}_{t}\) has been selected up to time slot \(t\) and iteration \(i\), and \(\zeta\) is the maximum allowance for repeated selections. As a result, we can model the sample selection part as a constrained optimization problem: \[d_{t,k}\in\operatorname*{arg\,min}_{d\in\mathcal{D}_{t}}L_{t,w}(d)\quad s.t.\quad g^{i}_{t}(d^{i}_{t})\leq 0 \tag{5}\] To simplify the proof, we omit the time slot index \(t\) and focus on a single optimization problem with iterations \(i\in[1,M]\) in the remainder of this section. A widely used approach to solving the constrained optimization problem is to build a Lagrangian function that couples the loss with the constraints through a dynamically updated Lagrangian multiplier \(\mu^{i}\in\mathbb{R}^{+}\) [44]. To handle the non-convex constrained optimization problem, we introduce a Lagrangian function built on the local loss instead, called the local Lagrangian: \[\mathcal{L}^{i}_{w}(d^{i},\mu^{i})=L_{w}(d^{i})+(\mu^{i})^{\top}g^{i}(d^{i}) \tag{6}\] To optimize the local Lagrangian, we introduce a modified saddle-point approach, which updates the decision \(d^{i}\) in the primal update and \(\mu^{i}\) in the dual update.
Thus, the sample \(d^{i+1}\) is obtained as the minimizer of the following optimization problem: \[\min_{d\in\mathcal{D}_{t}}\nabla L_{w}(d^{i})^{\top}(d-d^{i})+(\mu^{i+1})^{\top}g^{i}(d)+\frac{\|d-d^{i}\|^{2}}{2\alpha}, \tag{7}\] where \(\frac{\|d-d^{i}\|^{2}}{2\alpha}\) is an added regularizer and \(\alpha\) is a positive stepsize. As the current decision \(d^{i}\) is revealed, the Lagrangian multiplier is updated based on the observation of \(g^{i}(d^{i})\) as: \[\mu^{i+1}=\left[\mu^{i}+\gamma g^{i}\left(d^{i}\right)\right]^{+} \tag{8}\] To provide a theoretical guarantee for the non-convex constrained optimization algorithm, we introduce the novel local Lagrangian regret, which combines the gradient of the local loss and the constraints. The idea of the local Lagrangian regret comes from the Karush-Kuhn-Tucker (KKT) stationarity conditions: \[RL=\|\sum_{i=1}^{M}\nabla L_{w}(d^{i})+(\mu^{i})^{\top}\nabla g^{i}(d^{i})\| \tag{9}\] We will show later that the proposed local Lagrangian regret helps us better understand the theory of our sample selection method. One point needs to be noted: different from the traditional setting of an optimization problem, which iterates gradient descent until the current decision is close enough to the optimal decision, we only run our algorithm for a constant number of steps. The constant number of update steps helps us reduce the overfitting problem during training and is enough to guarantee a sample loss below a certain threshold.

### Online Model Training

After we select \(S_{t}\) using the proposed method, we update the model parameters \(\theta_{t}\) by solving the following optimization problem: \[\theta=\operatorname*{arg\,min}_{\theta}\min_{\mathcal{S}_{t}}\sum_{t=1}^{T}\sum_{s\in\mathcal{S}_{t}}\ell_{\theta}(s) \tag{10}\] This can be carried out with widely used optimization methods, such as stochastic gradient descent (SGD) or the alternating direction method of multipliers (ADMM). The training process is summarized in Alg. 2.

```
Input: loss functions ℓ_{θ_t}(⋅) of the most recent w time slots, dataset D_t, repeat threshold ζ
Output: selected set S_t
for k ∈ [1, K] do
    for i ∈ [1, M] do
        Update the selected sample d_t^i by minimizing:
            min_{d ∈ D_t} ∇L_{t,w}(d_t^i)^T (d − d_t^i) + (μ_t^{i+1})^T g_t^i(d) + ‖d − d_t^i‖² / (2α)    (11)
        Observe the constraint violation g_t^i(d_t^i) = p_t^i(d_t^i) − ζ                                 (12)
        Update the dual variable: μ_t^{i+1} = [μ_t^i + γ g_t^i(d_t^i)]^+                                 (13)
    end for
    Append the sample d_t^M to S_t
end for
Return the selected dataset S_t
```
**Algorithm 1** Gradient-based Sample Selection

```
Input: dataset D_t
Output: model parameter θ_t
Initialize the model parameters θ_0
for t = 1, ⋯, T do
    S_t = Algorithm 1(D_t, θ_t)
    Update the model parameter θ_t according to S_t
end for
```
**Algorithm 2** Online Gradient-based Robust Training

## 5 Theoretical Analysis

In this section, we present a theoretical analysis of the sample selection component of OGRS, demonstrating the reliability of our method via the local Lagrangian regret. We focus specifically on the process of selecting a single sample \(d_{t,k}\) at time \(t\).
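For reference, here is a minimal sketch of the iteration being analyzed, i.e., the primal-dual update (11)-(13). Treating \(\mathcal{D}_{t}\) as a finite candidate pool and solving the linearized subproblem by exhaustive scoring are our simplifying assumptions; the gradient is taken with respect to the sample, as in the local loss.

```python
import numpy as np

def ogrs_select_one(D, grad_local_loss, counts, zeta, alpha, gamma, M, rng):
    """One run of the primal-dual iteration (11)-(13); returns the index of d_t^M.

    D               : (N, dim) array of candidate samples in the pool.
    grad_local_loss : function idx -> gradient of L_{t,w} at D[idx] (w.r.t. the sample).
    counts          : (N,) array, counts[j] = times sample j was already selected.
    """
    mu = 0.0
    i_cur = rng.integers(len(D))  # arbitrary starting sample
    for _ in range(M):
        g = counts - zeta  # constraint g(d) = p(d) - zeta, evaluated on the pool
        # Dual update (13): mu^{i+1} = [mu^i + gamma * g(d^i)]^+
        mu = max(0.0, mu + gamma * g[i_cur])
        # Primal update (11): linearized loss + constraint + proximal term
        grad = grad_local_loss(i_cur)
        scores = (D - D[i_cur]) @ grad + mu * g \
                 + np.sum((D - D[i_cur]) ** 2, axis=1) / (2 * alpha)
        i_cur = int(np.argmin(scores))
    counts[i_cur] += 1
    return i_cur
```

A full selection pass calls this routine \(K\) times per time slot, as in Algorithm 1.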
Note that the optimization strategy outlined in eq. (5), aimed at the exact minimizer, may be overly ambitious for sample selection and could potentially lead to severe overfitting issues. Consequently, we limit ourselves to a constant number of total iterations \(M\) and show that a constant \(M\) can also guarantee a maximum Lagrangian residual. Next, we first establish the regret bound in a general scenario tied to \(M\). Subsequently, we delve into further exploration with a fixed setting of \(M\), illustrating our method's performance. This approach is akin to setting a threshold for the maximum sample loss, a concept that aligns with previous sample selection methods that leverage the clean ratio to manage this threshold. However, we can directly use OGRS on different tasks without needing to configure task-specific parameters. Before presenting the local regret bound for the Lagrangian residual, we first list some frequently employed assumptions.

**Assumption 1**.: (Bounded gradient) For every iteration \(i\), both \(L_{t,w}(d)\) and \(g^{i}(d)\) are bounded and have bounded gradients, i.e., \(|L_{t,w}(d)|\leq F\), \(\|\nabla L_{t,w}(d)\|\leq F_{1}\), \(|g^{i}(d)|\leq G\), \(\left\|\nabla g^{i}(d)\right\|\leq G_{1}\), \(\left\|\nabla^{2}g^{i}(d)\right\|\leq G_{2}\).

**Assumption 2**.: (Lipschitz continuity) The averaged loss function \(L_{w}\) and the constraints \(g^{i}\) are Lipschitz continuous with constants \(L_{1}\) and \(L_{2}\), i.e., for two real vectors \(d^{i},d^{j}\in\mathcal{D}_{t}\), we have: \[\left|L_{w}(d^{i})-L_{w}(d^{j})\right|\leq L_{1}\|d^{i}-d^{j}\| \tag{14}\] \[\left|g^{i}(d^{i})-g^{i}(d^{j})\right|\leq L_{2}\|d^{i}-d^{j}\| \tag{15}\]

**Assumption 3**.: (Bounded decision set) The sample set \(\mathcal{D}_{t}\) is bounded, which means that for some constant \(D\) and any \(d^{i},d^{j}\), we have \(\|d^{i}-d^{j}\|<D\).

**Assumption 4**.: (Slater condition) There exist some positive constant \(\epsilon\) and an interior point \(d\in\mathcal{D}\) such that \(g^{i}(d)\leq-\epsilon\).

Assumption 1 is broadly employed in the non-convex optimization community. Assumption 2 is crucial to ensure the validity and reasonableness of our analyses. Moreover, the Slater condition in Assumption 4 is instrumental in establishing the bound on the Lagrangian multiplier. We begin by establishing a bound for the norm of the Lagrangian multiplier \(\|\mu^{i}\|\), as outlined in the following two lemmas:

**Lemma 5**.: _Let \(i_{0}\) be some arbitrary integer. For \(i\in[1,M]\), the following bounds hold:_ \[\left|\,\|\mu^{i+1}\|-\|\mu^{i}\|\,\right|\leq\gamma G \tag{16}\] \[\|\mu^{i+i_{0}}\|-\|\mu^{i}\|\leq-\frac{\epsilon i_{0}}{2}, \tag{17}\] _where (17) holds when \(\|\mu^{i}\|\geq\frac{\gamma^{2}G^{2}}{\epsilon}+\frac{4\gamma F_{1}D}{\epsilon}+\frac{D^{2}}{2\alpha\epsilon}\)._

Proof.: The details of the proof can be found in Appendix A.

Subsequently, we establish the bound on the norm of the Lagrangian multiplier using Lemma 5.

**Lemma 6**.: _Let Assumption 4 be satisfied. For the Lagrangian multiplier \(\mu^{i}\), we can bound its norm as:_ \[\|\mu^{i}\|\leq\frac{M^{-\frac{1}{2}}G^{2}}{\epsilon}+\frac{4M^{-\frac{1}{2}}F_{1}D}{\epsilon}+\frac{D^{2}}{2\alpha\epsilon}+i_{0}GM^{-\frac{1}{4}}+i_{0}\frac{8G^{2}M^{-\frac{1}{2}}}{\epsilon}\log\left[\frac{32G^{2}M^{-\frac{1}{2}}}{\epsilon^{2}}\right] \tag{18}\]

Proof.: The details of the proof can be found in Appendix B.

Finally, we proceed to present the proof of the local Lagrangian residual regret.
Commencing from the stationarity condition in the KKT conditions, we individually bound the components associated with the gradient of the loss function and the constraints. This procedure brings us to the following theorem:

**Theorem 7**.: _Let \(L_{t,w}\) be the local loss function and \(g^{1},\cdots,g^{M}\) be the constraint functions in Alg. 1, and let all assumptions be satisfied. Set \(H^{i}=\nabla L_{w}(d^{i})+\mu^{i}\nabla g^{i}(d^{i})\); invoking the results of Lemma 6, we have:_ \[RL(M)=\|\sum_{i=1}^{M}\nabla L_{w}(d^{i})+(\mu^{i})^{\top}\nabla g^{i}(d^{i})\|\leq\mathcal{O}(M^{\frac{1}{2}}) \tag{19}\] _where \(M\) is the total number of iterations, with one constraint function per iteration._

Proof.: The details of the proof can be found in Appendix C.

**Remark:** From Theorem 7, it is evident that the sample selection algorithm attains an \(\mathcal{O}(M^{1/2})\) local regret bound. This implies that the averaged Lagrangian gradient may converge to zero when \(M\) is sufficiently large. However, ensuring full convergence of the Lagrangian isn't suitable for our sample selection objectives. Instead, we restrict the maximum number of iterations in our algorithm to \(M_{\max}\), thereby limiting the local Lagrangian regret by a threshold of \(\mathcal{O}(\sqrt{M_{\max}})\). This aligns partially with previous sample selection methods that use task-specific estimated clean ratios as thresholds to differentiate "good" and "bad" samples. Notably, our OGRS method eliminates the need for such specifications. With a fixed maximum iteration setting, OGRS can handle tasks with diverse noisy training data and, in particular, online training tasks with dynamically changing clean ratios.

## 6 Experimental Results

In this section, we evaluate the performance of our proposed OGRS method. As other multi-round sample selection algorithms utilize a similar concept of estimating the clean ratio of the training dataset, our focus is primarily on the representative ITLM method (Section 3.1). Initially, we present intuitive synthetic results to demonstrate the performance of the OGRS method and compare it with the other methods under different parameter settings. Subsequently, we compare these methods on several real datasets, under both static and dynamically changing clean ratios. Our experiments primarily examine random label error scenarios, wherein a proportion \(1-\phi\) of data samples are randomly, independently, and equally likely mislabeled, where \(\phi\) represents the real clean ratio of the currently arrived data. Despite deep learning models' ability to automatically fit these erroneous data, our experiments reveal that our methods outperform both naive training and state-of-the-art methods when dealing with noisy datasets. Code is available at [https://github.com/AnonymousSubmission100/OGRS_NeurIPS/tree/main](https://github.com/AnonymousSubmission100/OGRS_NeurIPS/tree/main).

### Synthetic Experiments

In this section, we evaluate performance on a synthetic dataset of 300 samples, delineated into two non-sensitive features \((x_{1},x_{2})\) and one label class \(y\). The dataset, visualized in a 3D scatter plot, is partitioned into a training set of 200 samples and a test set. Samples follow a Gaussian mixture distribution with \((x_{1},x_{2})|y=1\sim\mathcal{N}([1,1],[5,1;1,5])\) and \((x_{1},x_{2})|y=0\sim\mathcal{N}([-1,-1],[10,1;1,3])\). We artificially flip \(40\%\) of the training labels and apply a logistic regression (LR) model.
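A minimal sketch of this synthetic setup (our own code; the means and covariances are read off the text, and the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic(n=300, n_train=200, flip_frac=0.4):
    """Two-class Gaussian mixture; 40% of the *training* labels are flipped."""
    y = rng.integers(0, 2, size=n)
    X = np.empty((n, 2))
    X[y == 1] = rng.multivariate_normal([1, 1], [[5, 1], [1, 5]], size=int((y == 1).sum()))
    X[y == 0] = rng.multivariate_normal([-1, -1], [[10, 1], [1, 3]], size=int((y == 0).sum()))
    y_noisy = y.copy()
    flip = rng.choice(n_train, size=int(flip_frac * n_train), replace=False)
    y_noisy[flip] = 1 - y_noisy[flip]  # the first n_train samples form the training set
    return (X[:n_train], y_noisy[:n_train]), (X[n_train:], y[n_train:])

(train_X, train_y), (test_X, test_y) = make_synthetic()
```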
Our Online Gradient-based Robust Selection (OGRS) method is compared against naive LR and ITLM, varying the pre-estimated clean ratio \(\hat{\phi}\). Unlike the other methods, OGRS's parameters remain constant across tasks. Differing \(\hat{\phi}\) values simulate misestimation of the clean ratio under dynamically changing data. Results are detailed in Fig. 2.

Figure 2: The left four figures show the loss \(\ell_{\theta_{T}}(d)\) after the training process, where a well-trained model should correctly distinguish clean and bad samples by assigning them different losses. The right figure shows the averaged test accuracy along the time slots \(t\).

Training data loss is depicted in the four left-hand figures, where the axes \((x_{1},x_{2})\) represent the features and the third axis the training loss \(\ell_{\theta_{T}}((x_{1},x_{2}))\) after training. ITLM with \(\hat{\phi}=0.1\) outperforms the other methods, as underestimation of \(\hat{\phi}\) enhances ITLM's selection accuracy. This, however, only happens in simple tests like the one in this section. When the task becomes more complex, an underestimation of \(\hat{\phi}\) may seriously degrade performance, as it misses many valuable data points. Naive LR and ITLM with higher \(\hat{\phi}\), however, struggle to differentiate good and bad data, whereas OGRS matches ITLM's performance without requiring a pre-estimated clean ratio. Test accuracy for the different methods is shown on the right, revealing that after an initial drop following the first 50 warm-start rounds, OGRS quickly recovers and matches the best ITLM case. In contrast, ITLM with incorrect parameters underperforms naive LR due to sample misselection and the omission of valuable data.

### Experiment on Real Datasets

In this subsection, we evaluate the robustness of the OGRS method against randomized label errors on multi-dimensional datasets with multiple error ratios, using different models. We employ the MNIST and CIFAR-10 datasets, testing clean ratios varying from 30% to 70%. A 2-layer multi-layer perceptron (MLP) is utilized for MNIST, while CIFAR-10 is tested with the widely used ResNet-18 model. Four training methods are compared:
* OGRS: Our method with **fixed parameters** across all experiments.
* ITLM: A representative algorithm for multi-round sample selection, tested under different \(\hat{\phi}\). The way we transfer ITLM to an online training setting is introduced in Section 3.1.
* Naive: Directly training using all samples.
* Oracle: Training using only clean samples. Note that the result can only reach around 0.75 for ResNet-18, since we randomly select samples in each time slot \(t\), which differs from the offline training case.

In this section, we compare the various training algorithms while keeping the real clean ratio \(\phi\) constant, since holding \(\phi\) fixed allows for a clear and plausible experimental analysis. Experiments are conducted under varying parameter configurations to highlight the advantage of OGRS, which does not necessitate a pre-estimated clean ratio. The models are trained for 10,000 rounds on both MNIST and CIFAR-10, each starting with a 500-round warm-up period using naive training. Table 1 presents the results on both the MNIST and CIFAR-10 datasets. In the majority of the tests, our OGRS method outperforms ITLM, except for the test where \(\phi=0.7\) and the pre-estimated ratio for ITLM is set to \(\hat{\phi}=0.9\).
This discrepancy arises from the setting \(\hat{\phi}=0.9\), which ensures that ITLM samples the top 10% of low-loss data, thereby increasing the likelihood of selecting clean data. However, this setting also causes ITLM to overlook many observations, which in turn leads to underwhelming performance in the other settings. A vital aspect of this experiment involves testing the ITLM method under various \(\hat{\phi}\) settings. When \(\hat{\phi}\) approximates the actual clean ratio \(\phi\), ITLM demonstrates robustness against label errors. However, when the discrepancy between these values becomes significant, the method experiences a substantial drop in accuracy. Specifically, ITLM fails to converge in certain tasks when applied to the CIFAR-10 dataset with \(\phi=0.3\) and \(\phi=0.7\), especially when the estimate \(\hat{\phi}\) deviates significantly from the true value \(\phi\). Such a mismatch is a common occurrence in online training settings with fluctuating clean ratios, rendering the ITLM method less suited for online training with noisy labels. Additional experiments can be found in the Appendix.

## 7 Conclusion

In this paper, we introduce a novel gradient-based sample selection method that, for the first time, enables large-scale online robust training with varying proportions of noisy labels; it is a flexible method for training with noisy labels that can be invoked at the beginning of each iteration. We formulate the sample selection challenge as a non-convex constrained optimization problem and propose an efficient algorithm to address it. To provide a theoretical analysis of our OGRS method, we introduce a novel metric called the local Lagrangian regret. We are the first to directly establish a sublinear local regret bound without resorting to an approximation of the objective function. Experimental results demonstrate that our proposed method outperforms alternatives, particularly when the pre-estimated clean ratio is hard to ascertain. Since fairness constraints can feasibly be incorporated into the constrained optimization component of OGRS, future work on fairness for online training with noisy labels is a natural direction.
2303.11835
Lipschitz-bounded 1D convolutional neural networks using the Cayley transform and the controllability Gramian
We establish a layer-wise parameterization for 1D convolutional neural networks (CNNs) with built-in end-to-end robustness guarantees. In doing so, we use the Lipschitz constant of the input-output mapping characterized by a CNN as a robustness measure. We base our parameterization on the Cayley transform that parameterizes orthogonal matrices and the controllability Gramian of the state space representation of the convolutional layers. The proposed parameterization by design fulfills linear matrix inequalities that are sufficient for Lipschitz continuity of the CNN, which further enables unconstrained training of Lipschitz-bounded 1D CNNs. Finally, we train Lipschitz-bounded 1D CNNs for the classification of heart arrhythmia data and show their improved robustness.
Patricia Pauli, Ruigang Wang, Ian R. Manchester, Frank Allgöwer
2023-03-20T12:25:43Z
http://arxiv.org/abs/2303.11835v2
# Lipschitz-bounded 1D convolutional neural networks ###### Abstract We establish a layer-wise parameterization for 1D convolutional neural networks (CNNs) with built-in end-to-end robustness guarantees. Herein, we use the Lipschitz constant of the input-output mapping characterized by a CNN as a robustness measure. We base our parameterization on the Cayley transform that parameterizes orthogonal matrices and the controllability Gramian for the state space representation of the convolutional layers. The proposed parameterization by design fulfils linear matrix inequalities that are sufficient for Lipschitz continuity of the CNN, which further enables unconstrained training of Lipschitz-bounded 1D CNNs. Finally, we train Lipschitz-bounded 1D CNNs for the classification of heart arrhythmia data and show their improved robustness. ## I Introduction Robustness of neural networks (NNs) has lately been a topic of increasing importance, for which the Lipschitz constant of the NN's input-output mapping has become a common metric [1]. Finding an accurate upper bound on an NN's Lipschitz constant has broadly been tackled, e.g., using relaxations by quadratic constraints [2, 3], average operators [4] and polynomial optimization [5]. In addition, the training of provably Lipschitz-bounded NNs was proposed by including constraints [6, 7] and regularization techniques [8]. While effective, one drawback of these techniques is the computational overhead coming from constraints and projections in the optimization problem [7]. To overcome this, [9, 10, 11] suggest direct parameterizations for equilibrium networks, recurrent equilibrium networks, and feedforward neural networks, respectively, with guaranteed Lipschitz bounds. From a set of unconstrained variables, [9, 10, 11] formulate the NNs in such a way that they by design satisfy linear matrix inequalities (LMIs). These LMIs in turn are sufficient conditions for Lipschitz continuity such that, this way, one can parameterize the class of Lipschitz-bounded NNs with a Lipschitz upper bound predefined by the user. The underlying training problem boils down to an unconstrained optimization problem that can be solved using gradient methods. In this work, we take the same approach as in [9, 10, 11] to parameterize Lipschitz-bounded 1D convolutional neural networks (CNNs). CNNs have been tremendously successful in image and audio processing tasks and they are the state of the art in these applications [12, 13, 14]. We focus on 1D CNNs as a stepping stone to the more relevant, yet more involved, case of 2D CNNs, which is beyond the scope of this paper. However, an extension based on a 2D systems representation [15] is possible and left as future work. CNNs typically consist of convolutions, nonlinear activation functions, pooling layers, and linear layers that are concatenated in a feedforward structure. While numerous methods exist for enforcing Lipschitz continuity and orthogonality in fully connected layers [16], the design of Lipschitz-bounded convolutional layers and CNNs is less studied and often restricted to special convolutions [17]. Recently, this has been approached via parameterization of convolutional layers in the Fourier domain; however, this requires a computationally expensive inverse that depends on the input size [11, 17]. We instead formulate convolutions in state space independent of the input dimension [3], providing us with a compact and nonrepetitive description thereof.
This leads to a simple and structurally very similar parameterization to the one for fully connected layers. Another feature of our approach is that we impose Lipschitz continuity onto the input-output mapping only rather than on the individual layers, as is done in many other works [18], using the fact that the product of the Lipschitz bounds of the layers yields a Lipschitz bound for the overall NN. On the contrary, our approach imposes more general dissipativity properties onto the individual layers [3], yielding a _layer-wise_ parameterization with _end-to-end_ robustness guarantees. This leads to reduced conservatism in the compliance of the Lipschitz bound, i.e., higher expressivity for the same Lipschitz bound. In addition, our approach accounts for standard pooling layers, which were not addressed in other recent Lipschitz-bounded parameterizations of CNNs [11]. Our main contribution is a direct, scalable, and expressive layer-wise parameterization for Lipschitz-bounded 1D CNNs that makes use of the Cayley transform to parameterize orthogonal matrices. Besides the Cayley transform, a tool that was used for NN parameterization before, we newly propose to utilize the controllability Gramian in the context of parameterizing convolutional layers of Lipschitz-bounded CNNs. In particular, we reformulate parts of the underlying LMI that enforces dissipativity onto convolutional layers as a Lyapunov equation whose unique analytical solution is the controllability Gramian. Using our parameterization, we then train Lipschitz-bounded 1D CNNs solving an unconstrained optimization problem. The remainder of the paper is structured as follows: Section II first introduces 1D CNNs and formally states the training problem. In Section III, we discuss preliminaries, including the state space representation for 1D convolutions and the Lipschitz constant estimation for 1D CNNs. In Section IV, we present the direct parameterization for Lipschitz-bounded 1D CNNs and in Section V, we train Lipschitz-bounded 1D CNNs on the MIT-BIH arrhythmia database [19], a well-known benchmark dataset for 1D CNNs. Finally, in Section VI, we conclude the paper. **Notation:** By \(\mathbb{D}^{n}\) (\(\mathbb{D}_{+}^{n}\)) and \(\mathbb{S}^{n}\) (\(\mathbb{S}_{+}^{n}\)), we denote the set of \(n\)-dimensional (positive definite) diagonal and symmetric matrices, respectively, and by \(\mathbb{N}_{+}\) the natural numbers without zero. \(\mathcal{I}\) is a set of indices with elements \(i\in\mathbb{N}_{+}\), and \(|\mathcal{I}|\) gives the number of elements in the index set \(\mathcal{I}\). ## II Problem statement We consider 1D CNNs that are a concatenation of convolutional layers \(\mathcal{C}_{i}:\mathbb{R}^{c_{i-1}\times N_{i-1}}\rightarrow\mathbb{R}^{c_{i}\times N_{i}}\) with indices \(i\in\mathcal{I}_{C}\), and fully connected layers \(\mathcal{L}_{i}:\mathbb{R}^{n_{i-1}}\rightarrow\mathbb{R}^{n_{i}}\) with indices \(i\in\mathcal{I}_{F}\), \[\text{CNN}_{\theta}=\mathcal{L}_{l}\circ\ldots\circ\mathcal{L}_{p+1}\circ F \circ\mathcal{C}_{p}\circ\ldots\circ\mathcal{C}_{1}, \tag{1}\] adding up to a total number of \(l=|\mathcal{I}_{C}|+|\mathcal{I}_{F}|\) layers. Herein, \(N_{i}\) denotes the signal length, \(c_{i}\) the channel size, and \(n_{i}\) the layer dimension of the respective \(i\)-th layer.
At the intersection of the fully connected part and the fully convolutional part of the CNN, there is a flattening operation \(F:\mathbb{R}^{c_{p}\times N_{p}}\rightarrow\mathbb{R}^{n_{p}}\) of the output of the \(p\)-th (last) convolutional layer with \(n_{p}=c_{p}N_{p}\). A _convolutional_ layer consists of two to three stages: a convolution operation, a nonlinear activation, and possibly a pooling operation. The first two stages are \[\widetilde{\mathcal{C}}_{i}:w_{k}^{i}=\phi_{i}\left(b_{i}+\sum_{j=0}^{\ell_{i }-1}K_{j}^{i}w_{k-j}^{i-1}\right),\quad k=0,\ldots,N_{i}-1\ \forall i\in\mathcal{I}_{C}, \tag{2}\] with convolution kernel \(K_{j}^{i}\in\mathbb{R}^{c_{i}\times c_{i-1}}\), \(j=0,\ldots,\ell_{i}-1\), kernel size \(\ell_{i}\), and bias \(b_{i}\in\mathbb{R}^{c_{i}}\). First, a convolution on the signal \(w^{i-1}\in\mathbb{R}^{c_{i-1}\times N_{i-1}}\) is applied and subsequently, the nonlinear activation function \(\phi_{i}:\mathbb{R}^{c_{i}}\rightarrow\mathbb{R}^{c_{i}}\) is evaluated elementwise to obtain the output \(w^{i}\in\mathbb{R}^{c_{i}\times N_{i-1}}\). Oftentimes, a convolutional layer additionally contains pooling layers \(\mathcal{P}_{i}:\mathbb{R}^{c_{i}\times N_{i-1}}\rightarrow\mathbb{R}^{c_{i} \times N_{i}}\) to downsample the signal \(w^{i}\). We consider maximum pooling \[\mathcal{P}_{i}^{\text{max}}:\tilde{w}_{k}^{i}=\max_{j=1,\ldots,\ell_{i}}w_{ \ell_{i}(k-1)+j}^{i},\ k=0,\ldots,N_{i}-1,\forall i\in\mathcal{I}_{P}^{\text {max}},\] and average pooling \[\mathcal{P}_{i}^{\text{av}}:\tilde{w}_{k}^{i}=\frac{1}{\ell_{i}}\sum_{j=1}^{ \ell_{i}}w_{\ell_{i}(k-1)+j}^{i},\ k=0,\ldots,N_{i}-1,\forall i\in\mathcal{I} _{P}^{\text{av}},\] where \(\mathcal{I}_{P}^{\text{av}}\cup\mathcal{I}_{P}^{\text{max}}\subseteq\mathcal{I }_{C}\). As a result, the convolutional layer becomes \(\mathcal{C}_{i}=\mathcal{P}_{i}\circ\widetilde{\mathcal{C}}_{i}\) in case a pooling layer is added or \(\mathcal{C}_{i}=\widetilde{\mathcal{C}}_{i}\) otherwise. Finally, a CNN typically holds _fully connected_ layers, which we define as mappings \[\begin{split}&\mathcal{L}_{i}:\ w^{i}=\phi_{i}(W_{i}w^{i-1}+b_{i}) \quad\forall i\in\mathcal{I}_{F}\backslash\{l\},\\ &\mathcal{L}_{l}:\ w^{l}=W_{l}w^{l-1}+b_{l}\end{split} \tag{3}\] with weights \(W_{i}\in\mathbb{R}^{n_{i}\times n_{i-1}}\), biases \(b_{i}\in\mathbb{R}^{n_{i}}\) and activation functions \(\phi_{i}:\mathbb{R}^{n_{i}}\rightarrow\mathbb{R}^{n_{i}}\) that are applied elementwise. The 1D CNN \(f_{\theta}(w^{0})=w^{l}\) is hence characterized by \(\theta=\{(K^{i},b_{i})_{i=1}^{p},(W_{i},b_{i})_{i=p+1}^{l}\}\) and the chosen activation and pooling operations. In this work, we present a direct parameterization for Lipschitz-bounded 1D CNNs (1). **Problem 1:** Find a parameterization \(\kappa\mapsto\theta\) of \(f_{\theta}\) for a predefined Lipschitz bound \(\rho>0\) such that all 1D CNNs parameterized by \(\kappa\) are \(\rho\)-Lipschitz continuous with respect to the \(\ell_{2}\) norm, i.e., they satisfy \[\|f_{\theta}(x)-f_{\theta}(y)\|_{2}\leq\rho\|x-y\|_{2}\quad\forall x,y\in \mathbb{R}^{n}. \tag{4}\] In the case of multiple channels \(c\), \(n=cN\) denotes the stacked up version of the input. Note that \(\|\cdot\|_{2}\) in (4) can either be interpreted as the Euclidean norm of a vector-valued input \(x\) or as the \(\ell_{2}\) norm of a signal \(x\). To train a Lipschitz-bounded CNN, we minimize a learning objective \(\mathcal{L}(\theta)\), e.g.,
the mean squared error, the cross-entropy loss, or, to encourage robustness through the learning objective, a tailored loss such as the hinge loss [20], while at the same time enforcing Lipschitz-boundedness onto the CNN. Rather than solving a training problem subject to a Lipschitz constraint, i.e., \[\min_{\theta}\ \mathcal{L}(\theta)\quad\text{s.\ t.}\quad f_{\theta}\text{ is Lipschitz-bounded},\] the suggested parameterization \(\kappa\mapsto\theta\) allows us to solve an unconstrained training problem over \(\kappa\) \[\min_{\kappa}\ \mathcal{L}(\theta(\kappa)).\] ## III Preliminaries Before we state the parameterization of Lipschitz-bounded 1D CNNs in Section IV, we introduce a compact formulation of convolutions in state space and state LMI conditions that certify Lipschitz boundedness and that can be used to estimate the Lipschitz constant for 1D CNNs [3]. In addition, we introduce the Cayley transform used to parameterize orthogonal matrices. ### _State space representation for convolutions_ To formulate LMI conditions for convolutional layers, we can either reformulate the convolutional operation as a fully connected layer characterized by a sparse and redundant Toeplitz matrix [7] that scales with the input dimension or, as suggested in [3], we can compactly state the convolution, i.e., a finite impulse response (FIR) filter, in state space, completely independent of the input signal length. A possible discrete-time state space representation of the \(i\)-th convolutional layer (2) with state \(x_{k}^{i}\in\mathbb{R}^{n_{x_{i}}}\) and state dimension \(n_{x_{i}}=(\ell_{i}-1)c_{i-1}\) is \[\begin{split} x_{k+1}^{i}&=A_{i}x_{k}^{i}+B_{i}w_{k}^{i -1},\\ y_{k}^{i}&=C_{i}x_{k}^{i}+D_{i}w_{k}^{i-1}+b_{i}, \\ w_{k}^{i}&=\phi(y_{k}^{i}),\end{split} \tag{5}\] where \[A_{i} =\begin{bmatrix}0&I&&0\\ 0&0&\ddots\\ \vdots&&\ddots&I\\ 0&\ldots&&0\end{bmatrix}, B_{i} =\begin{bmatrix}0\\ \vdots\\ 0\\ I\end{bmatrix}, \tag{6a}\] \[C_{i} =\begin{bmatrix}K_{\ell_{i}-1}^{i}&\ldots&K_{1}^{i}\end{bmatrix}, D_{i} =K_{0}^{i}. \tag{6b}\] Note that 2D convolutions also admit a state space realization, namely as a 2D system [15], based on which our parameterization can potentially be extended to end-to-end Lipschitz-bounded 2D CNNs. **Remark 2**: _The evaluation of a convolution via a fast Fourier transform necessitates the entire signal, whereas our causal representation of convolutional layers allows their use in real time._ ### _Lipschitz constant estimation_ The Lipschitz constant is a sensitivity measure to changes in the input, which is commonly used to verify robustness for NNs [1]. Since, however, the calculation of the true Lipschitz constant is an NP-hard problem, an accurate upper bound is sought instead. For this purpose, we over-approximate the nonlinear activation functions by their slope-restriction cone [2, 6]. Commonly used nonlinear activation functions \(\varphi:\mathbb{R}\to\mathbb{R}\), such as ReLU and tanh, are slope-restricted in \([0,1]\), i.e., \[0\leq\frac{\varphi(x)-\varphi(y)}{x-y}\leq 1\quad\forall x,y\in\mathbb{R},\ x\neq y.\] Based on this property, we formulate an incremental quadratic constraint \[\begin{bmatrix}\phi(x)-\phi(y)\\ x-y\end{bmatrix}^{\top}\begin{bmatrix}-2\Lambda&\Lambda\\ \Lambda&0\end{bmatrix}\begin{bmatrix}\phi(x)-\phi(y)\\ x-y\end{bmatrix}\geq 0\ \forall x,y\in\mathbb{R}^{n}. \tag{7}\] with multipliers \(\Lambda\in\mathbb{D}_{+}^{n}\), yielding a suitable over-approximation of the NN.
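The state space representation (5)-(6) is easy to validate numerically. The following minimal Python sketch (NumPy; the helper name and toy dimensions are illustrative assumptions) builds \((A_{i},B_{i},C_{i},D_{i})\) from the kernel taps and checks that iterating (5) from a zero initial state reproduces the causal convolution in (2), omitting the bias and the activation:

```python
import numpy as np

def conv_state_space(K):
    # K has shape (ell, c_out, c_in): the taps K_0, ..., K_{ell-1} of Eq. (2)
    ell, c_out, c_in = K.shape
    n_x = (ell - 1) * c_in                       # state dimension (ell-1)*c_{i-1}
    A = np.zeros((n_x, n_x))
    for b in range(ell - 2):                     # block shift matrix, cf. (6a)
        A[b * c_in:(b + 1) * c_in, (b + 1) * c_in:(b + 2) * c_in] = np.eye(c_in)
    B = np.zeros((n_x, c_in)); B[-c_in:, :] = np.eye(c_in)
    C = np.concatenate([K[j] for j in range(ell - 1, 0, -1)], axis=1)  # [K_{ell-1} ... K_1]
    D = K[0]
    return A, B, C, D

rng = np.random.default_rng(0)
ell, c_in, c_out, N = 3, 2, 4, 10
K = rng.standard_normal((ell, c_out, c_in))
w = rng.standard_normal((N, c_in))
A, B, C, D = conv_state_space(K)
x = np.zeros((ell - 1) * c_in)                   # zero initial state = zero padding
y_ss = []
for k in range(N):                               # iterate (5) without bias/activation
    y_ss.append(C @ x + D @ w[k])
    x = A @ x + B @ w[k]
y_direct = [sum(K[j] @ w[k - j] for j in range(ell) if k - j >= 0) for k in range(N)]
assert np.allclose(y_ss, y_direct)
```

The zero initial state corresponds to zero-padding the input signal on the left, matching the causal FIR interpretation used above.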
The following theorem states a set of \(l\) LMI conditions that serve as a sufficient condition for Lipschitz continuity for 1D CNNs based on the relaxation (7) [3]. **Theorem 3** ([3]): _Let \(\mathrm{CNN}_{\theta}\) and \(\rho>0\) be given and let all activation functions be slope-restricted in \([0,1]\). If there exist_ 1. \(Q_{i}\in\mathbb{S}^{c_{i}}\) _(_\(Q_{i}\in\mathbb{D}^{c_{i}}\)_ if a convolutional layer contains a maximum pooling layer),_ \(P_{i}\in\mathbb{S}_{+}^{n_{x_{i}}}\)_, and_ \(\Lambda_{i}\in\mathbb{D}_{+}^{c_{i}}\) _such that_ \(\forall i\in\mathcal{I}_{C}\)__ \[\left[\begin{array}{cc|c}P_{i}-A_{i}^{\top}P_{i}A_{i}&-A_{i}^{\top}P_{i}B_{ i}&-C_{i}^{\top}\Lambda_{i}\\ -B_{i}^{\top}P_{i}A_{i}&Q_{i-1}-B_{i}^{\top}P_{i}B_{i}&-D_{i}^{\top}\Lambda_{i} \\ \hline-\Lambda_{i}C_{i}&-\Lambda_{i}D_{i}&2\Lambda_{i}-Q_{i}\end{array}\right]\succeq 0,\] (8) _where_ \(Q_{0}=\tilde{\rho}^{2}I\)_,_ 2. \(Q_{i}\in\mathbb{S}^{n_{i}}\) _and_ \(\Lambda_{i}\in\mathbb{D}_{+}^{n_{i}}\) _such that_ \(\forall i\in\mathcal{I}_{F}\backslash\{l\}\)__ \[\begin{bmatrix}Q_{i-1}&-W_{i}^{\top}\Lambda_{i}\\ -\Lambda_{i}W_{i}&2\Lambda_{i}-Q_{i}\end{bmatrix}\succeq 0\text{ and }\begin{bmatrix}Q_{l-1}&-W_{l}^{\top}\\ -W_{l}&I\end{bmatrix}\succeq 0,\] (9) _where_ \(Q_{p}:=I_{N_{p}}\otimes Q_{p}\)_,_ _then the \(\mathrm{CNN}_{\theta}\) is \(\rho\)-Lipschitz continuous with \(\rho=\tilde{\rho}\prod_{s\in\mathcal{I}_{P}^{\text{av}}}\mu_{s}\), where \(\mu_{s}\) are the Lipschitz constants of the average pooling layers._ _The underlying idea in Theorem 3 is to enforce dissipativity onto all individual layers that are connected in a feedforward fashion. Thus the matrix \(Q_{i-1}\) links the \(i\)-th layer to the previous layer and by this interconnection we can finally analyse Lipschitz continuity of the input-output mapping \(w^{l}=\mathrm{CNN}_{\theta}(w^{0})\)[3]._ _Based on Theorem 3, we can determine an upper bound on the Lipschitz constant for a given CNN solving a semidefinite program_ \[\min_{\rho^{2},\Lambda,P,Q}\ \rho^{2}\quad\text{s.\,t.}\quad(8),(9), \tag{10}\] _where \(\Lambda=\{\Lambda_{i}\}_{i\in\mathcal{I}_{C}\cup\mathcal{I}_{F}\backslash\{l\}}\), \(Q=\{Q_{i}\}_{i\in\mathcal{I}_{C}\cup\mathcal{I}_{F}\backslash\{l\}}\), \(P=\{P_{i}\}_{i\in\mathcal{I}_{C}}\) serve as decision variables together with \(\rho^{2}\)._ ### _Cayley transform_ Typically, the Cayley transform maps skew-symmetric matrices to orthogonal matrices and its extended version parameterizes the Stiefel manifold from non-square matrices, which can be useful in designing NNs [11, 17, 21]. **Lemma 4** (Cayley transform [22]): _For all \(Y\in\mathbb{R}^{n\times n}\) and \(Z\in\mathbb{R}^{m\times n}\) the Cayley transform_ \[\mathrm{Cayley}\left(\begin{bmatrix}Y\\ Z\end{bmatrix}\right)=\begin{bmatrix}U\\ V\end{bmatrix}=\begin{bmatrix}(I+M)^{-1}(I-M)\\ 2Z(I+M)^{-1}\end{bmatrix},\] _where \(M=Y-Y^{\top}+Z^{\top}Z\), yields matrices \(U\in\mathbb{R}^{n\times n}\) and \(V\in\mathbb{R}^{m\times n}\) that satisfy \(U^{\top}U+V^{\top}V=I\)._ _Note that \(I+M\) is nonsingular since \(1\leq\lambda_{\min}(I+Z^{\top}Z)\leq Re(\lambda_{\min}(I+M))\)[23]._ ## IV Direct parameterization While (10) analyses Lipschitz continuity for given 1D CNNs, it might also be desirable to train robust CNNs, i.e., \(\rho\)-Lipschitz bounded CNNs where the robustness level \(\rho\) is chosen by the user. In this section, we introduce a direct layer-wise parameterization for 1D CNNs (1) that renders the input-output mapping Lipschitz continuous.
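Before turning to the layer parameterizations, Lemma 4 can be sanity-checked numerically; the following minimal NumPy sketch (toy dimensions are arbitrary assumptions) implements the transform and verifies the identity \(U^{\top}U+V^{\top}V=I\):

```python
import numpy as np

def cayley(Y, Z):
    # Lemma 4: M = Y - Y^T + Z^T Z, U = (I+M)^{-1}(I-M), V = 2 Z (I+M)^{-1}
    n = Y.shape[0]
    M = Y - Y.T + Z.T @ Z
    IM_inv = np.linalg.inv(np.eye(n) + M)        # nonsingular per the note above
    return IM_inv @ (np.eye(n) - M), 2 * Z @ IM_inv

rng = np.random.default_rng(1)
n, m = 4, 6
U, V = cayley(rng.standard_normal((n, n)), rng.standard_normal((m, n)))
assert np.allclose(U.T @ U + V.T @ V, np.eye(n))  # Stiefel property holds
```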
We first discuss a parameterization for fully connected layers that satisfy (9) by design, using a similar construction to [11]. Our key contribution then is the parameterization of convolutional layers, which is carried out in two steps. In a first step, we establish a parameterization of \(P_{i}\) that renders the left upper block in (8) positive definite using the controllability Gramian and afterwards, we introduce the parameterization for convolutional layers that by design satisfy (8). ### _Fully connected layers_ In the following, we present a mapping \(\kappa_{i}\mapsto(W_{i},b_{i})\) from unconstrained variables \(\kappa_{i}\) that renders (9) feasible by design. **Theorem 5**: _Fully connected layers (3) parameterized by_ \[W_{i} =\sqrt{2}\Gamma_{i}^{-1}V_{i}^{\top}L_{i-1}, b_{i}\in\mathbb{R}^{n_{i}}, i\in\mathcal{I}_{F}\backslash\{l\}, \tag{11a}\] \[W_{l} =V_{l}^{\top}L_{l-1}, b_{l}\in\mathbb{R}^{n_{l}}, \tag{11b}\] satisfy (9). Herein, \[\Gamma_{i}=\text{diag}(\gamma_{i}),\ L_{i}=\sqrt{2}U_{i}\Gamma_{i},\ \ \begin{bmatrix}U_{i}\\ V_{i}\end{bmatrix}=\text{Cayley}\left(\begin{bmatrix}Y_{i}\\ Z_{i}\end{bmatrix}\right)\] with free variables \(Y_{i}\in\mathbb{R}^{n_{i}\times n_{i}}\), \(Z_{i}\in\mathbb{R}^{n_{i-1}\times n_{i}}\), \(b_{i}\in\mathbb{R}^{n_{i}}\), \(i\in\mathcal{I}_{F},\ \gamma_{i}\in\mathbb{R}^{n_{i}},\ i\in \mathcal{I}_{F}\backslash\{l\}\). This yields the mappings \((Y_{i},Z_{i},\gamma_{i},b_{i})\mapsto(W_{i},b_{i})\), \(i\in\mathcal{I}_{F}\backslash\{l\}\), and \((Y_{l},Z_{l},b_{l})\mapsto(W_{l},b_{l})\), respectively. According to Lemma 4, \(U_{i}\) and \(V_{i}\) satisfy \(U_{i}^{\top}U_{i}+V_{i}^{\top}V_{i}=I\). Now inserting the parameterization (11a), we obtain \[\frac{1}{2}\big{(}\Gamma_{i}^{-1}L_{i}^{\top}L_{i}\Gamma_{i}^{-1}+\Gamma_{i}W_ {i}L_{i-1}^{-1}L_{i-1}^{-\top}W_{i}^{\top}\Gamma_{i}\big{)}=I.\] With \(Q_{i}=L_{i}^{\top}L_{i}\) and \(\Lambda_{i}=\Gamma_{i}^{\top}\Gamma_{i}\), we further obtain \[Q_{i}+\Lambda_{i}W_{i}Q_{i-1}^{-1}W_{i}^{\top}\Lambda_{i}=2\Lambda_{i},\] which implies \(2\Lambda_{i}-Q_{i}-\Lambda_{i}W_{i}Q_{i-1}^{-1}W_{i}^{\top}\Lambda_{i}\succeq 0\). Next, we apply the Schur complement, which yields the left inequality in (9). The last fully connected layer is a special case that does not contain an activation function. Inserting the parameterization (11b) gives \[U_{l}^{\top}U_{l}+V_{l}^{\top}V_{l}=U_{l}^{\top}U_{l}+W_{l}Q_{l-1}^{-1}W_{l}^ {\top}=I,\] which implies \(I-W_{l}Q_{l-1}^{-1}W_{l}^{\top}=U_{l}^{\top}U_{l}\succeq 0\), which by the application of the Schur complement satisfies the right inequality in (9). Note that the connection between the auxiliary matrices \(L_{i}\) in (11) and \(Q_{i}\) in (9) is \(L_{i}^{\top}L_{i}=Q_{i}\) and the relation between the multiplier matrices \(\Lambda_{i}\) in (7) / (9) and \(\Gamma_{i}\) in (11) is \(\Gamma_{i}^{\top}\Gamma_{i}=\Lambda_{i}\). **Remark 6**: _Throughout the paper, we assume that \(\Gamma_{i}\) and \(L_{i}\) are nonsingular. In our experiments, this was always the case. However, there also are tricks to enforce this property, e.g., by choosing \(\Gamma_{i}=\text{diag}(e^{\gamma_{i}})\)[11]._ **Remark 7**: _Our parameterization (11) is equivalent to the one established in [11], where they show that it is necessary and sufficient, i.e.,
the fully connected layers (3) satisfy (9) if and only if the weights can be parameterized by (11)._ ### _Parameterization by the controllability Gramian_ In this section, we make use of the controllability Gramian of (5) to parameterize convolutional layers, which, to the best knowledge of the authors, has thus far not appeared in the context of parameterizing NNs. For that purpose, we introduce \[F_{i}:=\left[\begin{array}{cc}P_{i}-A_{i}^{\top}P_{i}A_{i}&-A_{i}^{\top}P_{ i}B_{i}\\ -B_{i}^{\top}P_{i}A_{i}&Q_{i-1}-B_{i}^{\top}P_{i}B_{i}\end{array}\right]\succ 0, \tag{12}\] which is the left upper block in (8) and further, we introduce \(\widehat{C}_{i}:=\begin{bmatrix}C_{i}&D_{i}\end{bmatrix}\), which simplifies the notation of (8) to \[\begin{bmatrix}F_{i}&-\widehat{C}_{i}^{\top}\Lambda_{i}\\ -\Lambda_{i}\widehat{C}_{i}&2\Lambda_{i}-Q_{i}\end{bmatrix}\succeq 0. \tag{13}\] We note that the LMI (13) and the left LMI in (9) share a similar structure. The right lower block is the same in both LMIs. In addition to the bias terms \(b_{i}\), the parameters to be trained in the CNN layers are collected in \(\widehat{C}_{i}\), cf. (6), whereas the parameters \(W_{i}\) characterize the fully connected layers. In the off-diagonal blocks of the respective LMIs (13) and (9), \(\widehat{C}_{i}\) and \(W_{i}\) appear, respectively, multiplied by \(\Lambda_{i}\). The only difference is in the left upper blocks of the LMIs. While in LMI (9) for fully connected layers we have \(Q_{i-1}=L_{i-1}^{\top}L_{i-1}\succ 0\) for nonsingular \(L_{i-1}\), LMI (13) for convolutional layers contains \(F_{i}\), which depends on \(Q_{i-1}\). To render \(F_{i}\) positive definite, we parameterize \(P_{i}\) as follows, using the controllability Gramian. **Lemma 8**: _For some \(\varepsilon>0\) and all \(H_{i}\in\mathbb{R}^{n_{x_{i}}\times n_{x_{i}}}\), the matrix \(P_{i}=X_{i}^{-1}\) with_ \[X_{i}=\sum_{k=0}^{n_{x_{i}}-c_{i-1}}A_{i}^{k}(B_{i}Q_{i-1}^{-1}B_{i}^{\top}+H_{i}^ {\top}H_{i}+\varepsilon I)(A_{i}^{\top})^{k}, \tag{14}\] _renders (12) feasible._ The matrix \(A_{i}\) is a nilpotent matrix, i.e., \(A_{i}^{n_{x_{i}}-c_{i-1}+k}=0\ \forall k\geq 1\), such that \[X_{i}=\sum_{k=0}^{\infty}A_{i}^{k}(B_{i}Q_{i-1}^{-1}B_{i}^{\top}+H_{i}^{\top}H_ {i}+\varepsilon I)(A_{i}^{\top})^{k}\] corresponds to the controllability Gramian of the linear time-invariant system characterized by \((A_{i},B_{i})\) as defined in (6a), i.e., the unique solution \(X_{i}\succ 0\) to the Lyapunov equation \[X_{i}-A_{i}X_{i}A_{i}^{\top}-B_{i}Q_{i-1}^{-1}B_{i}^{\top}=H_{i}^{\top}H_{i}+ \varepsilon I\succ 0. \tag{15}\] Note that \(X_{i}\) is positive definite by design, given that \(Q_{i-1}=L_{i-1}^{\top}L_{i-1}\succ 0\) such that \(B_{i}Q_{i-1}^{-1}B_{i}^{\top}+H_{i}^{\top}H_{i}+\varepsilon I\) is positive definite. Next, we apply the Schur complement to (15) to obtain \[\begin{bmatrix}X_{i}^{-1}&0&A_{i}^{\top}\\ 0&Q_{i-1}&B_{i}^{\top}\\ A_{i}&B_{i}&X_{i}\end{bmatrix}\succ 0.\] Now inserting \(P_{i}=X_{i}^{-1}\) and again applying the Schur complement yields (12). ### _Convolutional layers_ In this subsection, we present a direct parameterization for convolutional layers such that they satisfy (13) by design.
Our parameterization of convolution kernels \(K^{i}\in\mathbb{R}^{c_{i}\times c_{i-1}\times\ell_{i}}\), or equivalently \(\widehat{C}_{i}\in\mathbb{R}^{c_{i}\times\ell_{i}c_{i-1}}\), is independent of the input dimension \(N_{i}\), whereas other approaches design a parameterization for Lipschitz-bounded convolutions and CNNs in the Fourier domain, which involves the costly inversion of \(N_{i}\) matrices [17, 11]. **Theorem 9**: _Convolutional layers (2) that contain an average pooling layer or no pooling layer, parameterized by_ \[\widehat{C}_{i}=\sqrt{2}\Gamma_{i}^{-1}V_{i}^{\top}L_{i}^{F},\ b_{i}\in\mathbb{R }^{c_{i}},\ \forall i\in\mathcal{I}_{C}\backslash\mathcal{I}_{P}^{\text{max}} \tag{16}\] _satisfy (8). Herein,_ \[\Gamma_{i}=\text{diag}(\gamma_{i}),\ \begin{bmatrix}U_{i}\\ V_{i}\end{bmatrix}=\text{Cayley}\left(\begin{bmatrix}Y_{i}\\ Z_{i}\end{bmatrix}\right),\ L_{i}^{F}=\text{chol}(F_{i}),\] \(\text{chol}(\cdot)\) _denoting the Cholesky decomposition, \(Q_{i}=L_{i}^{\top}L_{i},\ L_{0}=\rho I,\ L_{i}=\sqrt{2}U_{i}\Gamma_{i}\), where \(F_{i}\) is given by (12) with \(P_{i}\) parameterized from \(Q_{i-1}\) and \(H_{i}\) using (14). The free variables beside \(b_{i}\) are \(Y_{i}\in\mathbb{R}^{c_{i}\times c_{i}}\), \(Z_{i}\in\mathbb{R}^{\ell_{i}c_{i-1}\times c_{i}}\), \(H_{i}\in\mathbb{R}^{n_{x_{i}}\times n_{x_{i}}}\), and \(\gamma_{i}\in\mathbb{R}^{c_{i}}\), \(i\in\mathcal{I}_{C}\backslash\mathcal{I}_{P}^{\max}\), which yields the mapping \((Y_{i},Z_{i},H_{i},\gamma_{i},b_{i})\mapsto(K^{i},b_{i})\)._ Proof:: The matrices \(U_{i}\) and \(V_{i}\) satisfy \(U_{i}^{\top}U_{i}+V_{i}^{\top}V_{i}=I\). Now inserting the parameterization (16), we obtain \[\frac{1}{2}\big{(}\Gamma_{i}^{-1}L_{i}^{\top}L_{i}\Gamma_{i}^{-1}+\Gamma_{i} \widehat{C}_{i}(L_{i}^{F})^{-1}(L_{i}^{F})^{-\top}\widehat{C}_{i}^{\top}\Gamma_{i} \big{)}=I.\] Lemma 8 ensures positive definiteness of \(F_{i}\), i.e., its Cholesky decomposition exists, and we insert \(Q_{i}=L_{i}^{\top}L_{i}\), \(F_{i}=L_{i}^{F\top}L_{i}^{F}\), \(\Lambda_{i}=\Gamma_{i}^{\top}\Gamma_{i}\), to further obtain \[Q_{i}+\Lambda_{i}\widehat{C}_{i}F_{i}^{-1}\widehat{C}_{i}^{\top}\Lambda_{i}=2 \Lambda_{i},\] which implies \(2\Lambda_{i}-Q_{i}-\Lambda_{i}\widehat{C}_{i}F_{i}^{-1}\widehat{C}_{i}^{\top} \Lambda_{i}\succeq 0\). Next, we apply the Schur complement and obtain (13), which corresponds to (8). To account for average pooling layers present in the CNN, we rescale the Lipschitz bound with the product of the Lipschitz bounds of the average pooling layers, i.e., \(\tilde{\rho}=\rho/\prod_{s\in\mathcal{I}_{P}^{\text{av}}}\mu_{s}\). ## V Experiments Smaller Lipschitz bounds yield increased robustness, yet lower test accuracies and vice versa. We note that the vanilla CNN has significantly larger upper Lipschitz bounds than LipCNN and further, LipCNN maintains high test accuracies as the Lipschitz bound decreases in Fig. 3. With even larger weighting parameters \(\gamma\) in L2-regularized training, training failed altogether, whereas LipCNN allows for training with very low Lipschitz bounds. ## VI Conclusion In this paper, we introduced a parameterization for Lipschitz-bounded 1D CNNs using Cayley transforms and controllability Gramians.
Using our parameterization, we can train Lipschitz-bounded 1D CNNs via an unconstrained training problem, which we illustrated on the classification of ECG data from the MIT-BIH database. Future research includes the extension of our parameterization to 2D CNNs using a 2D systems approach as suggested in [15].
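As a concrete illustration of how such direct parameterizations behave, the following minimal NumPy sketch instantiates the fully connected parameterization of Theorem 5 for one hidden layer and verifies the left LMI in (9) numerically. It is a sketch under stated assumptions (toy dimensions, \(L_{0}=I\), i.e., \(\tilde{\rho}=1\), and \(\Gamma_{i}=\text{diag}(e^{\gamma_{i}})\) as in Remark 6), not the authors' implementation:

```python
import numpy as np

def cayley(Y, Z):
    # Lemma 4: maps free variables (Y, Z) to (U, V) with U^T U + V^T V = I
    n = Y.shape[0]
    M = Y - Y.T + Z.T @ Z
    IM_inv = np.linalg.inv(np.eye(n) + M)
    return IM_inv @ (np.eye(n) - M), 2 * Z @ IM_inv

rng = np.random.default_rng(0)
n_prev, n_i = 5, 3                           # layer widths n_{i-1} and n_i
L_prev = np.eye(n_prev)                      # L_0 = I, i.e. rho_tilde = 1
U, V = cayley(rng.standard_normal((n_i, n_i)), rng.standard_normal((n_prev, n_i)))
Gamma = np.diag(np.exp(rng.standard_normal(n_i)))   # nonsingular by construction
L_i = np.sqrt(2) * U @ Gamma                 # L_i = sqrt(2) U_i Gamma_i
W_i = np.sqrt(2) * np.linalg.inv(Gamma) @ V.T @ L_prev   # Eq. (11a)
Q_prev, Q_i, Lam = L_prev.T @ L_prev, L_i.T @ L_i, Gamma @ Gamma
lmi = np.block([[Q_prev, -W_i.T @ Lam], [-Lam @ W_i, 2 * Lam - Q_i]])
print(np.linalg.eigvalsh(lmi).min())         # >= 0 up to numerical tolerance
```

By construction the Schur complement of the \(Q_{i-1}\) block is exactly zero, so the smallest eigenvalue printed should be nonnegative up to floating-point error, confirming that the LMI holds by design rather than by optimization.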
2310.00800
GraphPatcher: Mitigating Degree Bias for Graph Neural Networks via Test-time Augmentation
Recent studies have shown that graph neural networks (GNNs) exhibit strong biases towards the node degree: they usually perform satisfactorily on high-degree nodes with rich neighbor information but struggle with low-degree nodes. Existing works tackle this problem by deriving either designated GNN architectures or training strategies specifically for low-degree nodes. Though effective, these approaches unintentionally create an artificial out-of-distribution scenario, where models mainly or even only observe low-degree nodes during the training, leading to a downgraded performance for high-degree nodes that GNNs originally perform well at. In light of this, we propose a test-time augmentation framework, namely GraphPatcher, to enhance test-time generalization of any GNNs on low-degree nodes. Specifically, GraphPatcher iteratively generates virtual nodes to patch artificially created low-degree nodes via corruptions, aiming at progressively reconstructing target GNN's predictions over a sequence of increasingly corrupted nodes. Through this scheme, GraphPatcher not only learns how to enhance low-degree nodes (when the neighborhoods are heavily corrupted) but also preserves the original superior performance of GNNs on high-degree nodes (when lightly corrupted). Additionally, GraphPatcher is model-agnostic and can also mitigate the degree bias for either self-supervised or supervised GNNs. Comprehensive experiments are conducted over seven benchmark datasets and GraphPatcher consistently enhances common GNNs' overall performance by up to 3.6% and low-degree performance by up to 6.5%, significantly outperforming state-of-the-art baselines. The source code is publicly available at https://github.com/jumxglhf/GraphPatcher.
Mingxuan Ju, Tong Zhao, Wenhao Yu, Neil Shah, Yanfang Ye
2023-10-01T21:50:03Z
http://arxiv.org/abs/2310.00800v1
# GraphPatcher: Mitigating Degree Bias for Graph Neural Networks via Test-time Augmentation ###### Abstract Recent studies have shown that graph neural networks (GNNs) exhibit strong biases towards the node degree: they usually perform satisfactorily on high-degree nodes with rich neighbor information but struggle with low-degree nodes. Existing works tackle this problem by deriving either designated GNN architectures or training strategies specifically for low-degree nodes. Though effective, these approaches unintentionally create an artificial out-of-distribution scenario, where models mainly or even only observe low-degree nodes during the training, leading to a downgraded performance for high-degree nodes that GNNs originally perform well at. In light of this, we propose a test-time augmentation framework, namely GraphPatcher, to enhance test-time generalization of any GNNs on low-degree nodes. Specifically, GraphPatcher iteratively generates virtual nodes to patch artificially created low-degree nodes via corruptions, aiming at progressively reconstructing target GNN's predictions over a sequence of increasingly corrupted nodes. Through this scheme, GraphPatcher not only learns how to enhance low-degree nodes (when the neighborhoods are heavily corrupted) but also preserves the original superior performance of GNNs on high-degree nodes (when lightly corrupted). Additionally, GraphPatcher is model-agnostic and can also mitigate the degree bias for either self-supervised or supervised GNNs. Comprehensive experiments are conducted over seven benchmark datasets and GraphPatcher consistently enhances common GNNs' overall performance by up to 3.6% and low-degree performance by up to 6.5%, significantly outperforming state-of-the-art baselines. The source code is publicly available at [https://github.com/jumxglhf/GraphPatcher](https://github.com/jumxglhf/GraphPatcher). ## 1 Introduction Graph Neural Networks (GNNs) have gained significant popularity as a powerful approach for learning representations of graphs, achieving state-of-the-art performance on various predictive tasks, such as node classification [22; 38; 9], link prediction [50; 53], and graph classification [43; 47; 11]. These tasks further form the archetypes of many real-world applications, such as recommendation systems [45; 3], predictive user behavior models [31; 52], and molecular property prediction [51; 48]. While existing GNNs are highly proficient at capturing information from rich neighborhoods (i.e., high-degree nodes), recent studies [13; 25; 36; 55] have revealed a significant performance degradation of GNNs when dealing with nodes that have sparse neighborhoods (i.e., low-degree nodes). This observation can be attributed to the fact that GNNs make predictions based on the distribution of node neighborhoods [27]. According to this line of theory, GNNs struggle with low-degree nodes due to the limited amount of available neighborhood information, which may not be able to precisely depict the learned distributions. Empirically, as shown in Figure 1, the classification accuracy of GCN [22] proportionally decays as the node degree decreases, resulting in a performance gap of \(\sim\)20% accuracy. Furthermore, the sub-optimal performance of GNNs on low-degree nodes can be aggravated by the power-law degree distribution commonly observed in real-world graphs, where the number of low-degree nodes significantly exceeds that of high-degree nodes [36].
To bridge this gap, several frameworks have been proposed to specifically improve GNNs' performance on low-degree nodes [36, 25, 13, 55, 49]. These frameworks either introduce designated architectures or training strategies specifically for low-degree nodes. For example, Tail-GNN [25] enhances latent representations of low-degree nodes by incorporating high-degree structural information, whereas Cold Brew [55] retrieves a set of existing nodes as virtual neighbors for low-degree nodes. However, these approaches suffer from two significant drawbacks. Firstly, while benefiting low-degree nodes, they inadvertently create an artificial out-of-distribution scenario during training [42], where models primarily observe low-degree nodes, leading to a downgraded performance for high-degree nodes that GNNs originally perform well on. Secondly, deploying these frameworks often requires changing model architectures, which can be impractical in real-world scenarios where the original models are well-trained, due to the expensive re-training cost (on large-scale graphs) and their shared usage across different functionalities in production. In light of these drawbacks, we propose a test-time augmentation framework for GNNs, namely GraphPatcher. Given a well-trained GNN, GraphPatcher mitigates the degree bias by patching corrupted ego-graphs with multiple generated virtual neighbors. Notably, GraphPatcher not only enhances the performance of low-degree nodes but also maintains (sometimes improves) GNNs' performance on high-degree nodes. This behavior is empirically important because practitioners can universally apply GraphPatcher to all nodes without, like previous works, manually discovering a degree threshold that differentiates the low- and high-degree nodes. To achieve this, we first generate a sequence of ego-graphs corrupted with increasing strengths. Then, GraphPatcher recursively generates multiple virtual nodes to patch the most corrupted graph, such that the frozen GNN gives similar predictions for the patched graph and the corresponding corrupted ego-graph in the sequence. Through this scheme, GraphPatcher not only learns how to patch low-degree nodes (i.e., heavily corrupted) but also maintains GNNs' original superior performance on high-degree nodes (i.e., lightly corrupted). As a test-time augmentation framework, GraphPatcher is parameterized in parallel with the target GNN. Hence, GraphPatcher is model-agnostic and requires no updates on the target GNN, enabling practitioners to easily utilize it as a plug-and-play module to existing well-established infrastructures. Overall, our contributions are summarized as: * We study a more practical setting of degree biases on graphs, where both the performances on low- and high-degree nodes are considered. In this case, a good framework is required to not only improve the performance over low-degree nodes but also maintain the original superior performance over high-degree nodes. We evaluate existing frameworks in this setting and observe that many of them trade off performance on high-degree nodes for that on low-degree nodes. * To mitigate degree biases, we propose GraphPatcher, a novel test-time augmentation framework for graphs. Given a well-trained GNN, GraphPatcher iteratively generates multiple virtual nodes and uses them to patch the original ego-graphs. These patched ego-graphs not only improve GNNs' performance on low-degree nodes but also maintain that over high-degree nodes.
Moreover, GraphPatcher is applied at the testing time for GNNs, a plug-and-play module that is easily applicable to existing well-established infrastructures. * We conduct extensive evaluation of GraphPatcher along with six state-of-the-art frameworks that mitigate degree biases on seven benchmark datasets. GraphPatcher consistently enhances the overall performance of multiple GNNs by up to 3.6% and their low-degree performance by up to 6.5%, significantly outperforming state-of-the-art baselines. Figure 1: The classification accuracy of GCN and SoTA frameworks that mitigate degree biases. ## 2 Related Works **Graph Neural Networks**. Graph Neural Networks (GNNs) have become one of the most popular paradigms for learning representations over graphs [22; 38; 9; 43; 23; 17; 4]. GNNs aim at mapping the input nodes into low-dimensional vectors, which can be further utilized to conduct either graph-level or node-level tasks. Most GNNs explore a layer-wise message passing scheme, where a node iteratively extracts information from its first-order neighbors, and information from multi-hop neighbors can be captured by stacked layers. They achieved state-of-the-art performance on various tasks, such as node classification [22; 44; 12; 35], link prediction [50; 53; 8], node clustering [2; 37], etc. These tasks further form the archetypes of many real-world applications, such as recommendation systems [45; 3], predictive user behavior models [31; 52], question answering [18], and molecular property prediction [51; 48; 7; 24]. **Degree Bias underlying GNNs**. Recent studies have shown that GNNs exhibit strong biases towards the node degree: they usually perform satisfactorily over high-degree nodes with rich neighbor information but suffer over low-degree nodes [13; 25; 36; 55]. Existing frameworks that mitigate degree biases derive either designated architectures or training strategies specifically for low-degree nodes. For instance, Tail-GNN [25] enhances low-degree nodes' latent representations by injecting high-degree structural information learned from high-degree nodes; Cold Brew [55] retrieves a set of existing nodes as virtual neighbors for low-degree nodes; TuneUp [13] fine-tunes the well-trained GNNs with pseudo labels and heavily corrupted graphs. Though effective for low-degree nodes, they unintentionally create an artificial out-of-distribution scenario [42], where models only observe low-degree nodes during the training, leading to downgraded performance for high-degree nodes that GNNs originally perform well at. **Test-time Augmentation**. While data augmentations during the training phase have become one of the essential ingredients for training machine learning models [54], the augmentation applied during the testing time is far less studied, especially for the graph learning community. It has been moderately researched in the computer vision field, aimed at improving performance or mitigating uncertainties [34; 21; 41; 1]. These methods usually corrupt the same sample by different augmentation approaches and aggregate the model's predictions on all corrupted samples. Whereas in the graph community, GTrans [15] proposes a test-time enhancement framework, where the node feature and graph topology are modified at the test time to mitigate potential out-of-distribution scenarios. ## 3 Methodology ### 3.1 Preliminary In this work, we specifically focus on the node classification task.
Let \(G=(V,E)\) denote a graph, where \(V\) is the set of \(|V|=N\) nodes and \(E\subseteq V\times V\) is the set of \(|E|\) edges between nodes. \(\mathbf{X}\in\mathbb{R}^{N\times d}\) represents the feature matrix, with the \(i\)-th row representing node \(v_{i}\)'s \(d\)-dimensional feature vector. \(\mathbf{Y}\in\{0,1\}^{N\times C}\) denotes the label matrix, where \(C\) is the number of total classes, and \(\mathbf{Y}^{(L)}\) denotes the label matrix for training nodes. The ego-graph of node \(v_{i}\) is defined as \(\mathcal{G}(v_{i})=(V_{i},E_{i})\) with \(V_{i}=\mathcal{N}_{k}(v_{i})\), where \(\mathcal{N}_{k}(v_{i})\) stands for all nodes within the \(k\)-hop neighborhood of \(v_{i}\) including itself and \(E_{i}\) refers to the edges in-between \(\mathcal{N}_{k}(v_{i})\). A well-trained GNN \(f_{g}(\cdot;\mathbf{\theta}):G\rightarrow\mathbb{R}^{N\times C}\) parameterized by \(\mathbf{\theta}\) takes \(G\) as input and maps every node in \(G\) to a \(C\)-dimensional class distribution. Formally, we define test-time node patching as the following: **Definition 1** (Test-time Node Patching).: _Given a GNN \(f_{g}(\cdot;\mathbf{\theta})\) and a graph G, a test-time node patching framework \(f(\cdot;\mathbf{\phi}):G\to G\) takes \(G\) and outputs the patched graph \(\hat{G}\) with generated nodes and edges, such that the performance of \(f_{g}\) over nodes in \(G\) is enhanced when \(\hat{G}\) is utilized:_ \[\arg\min_{\mathbf{\phi}}\ \mathcal{L}\Big{(}f_{g}\big{(}f(G;\phi);\mathbf{\theta}^{ *}\big{)},\mathbf{Y}\Big{)},\quad\text{where}\quad\mathbf{\theta}^{*}=\arg\min_{ \mathbf{\theta}}\mathcal{L}\big{(}f_{g}(G;\mathbf{\theta}),\ \mathbf{Y}^{(L)}\big{)}, \tag{1}\] _where \(\mathcal{L}\) refers to the loss function evaluating the GNN (e.g., cross-entropy or accuracy)._ In this work, we aim at mitigating the degree bias via test-time node patching. To achieve this, two challenges need to be addressed: (1) how to optimize and formulate \(f(\cdot;\mathbf{\phi})\), such that the graphs patched by \(f(\cdot;\mathbf{\phi})\) enhance the performance of \(f_{g}(\cdot;\mathbf{\theta}^{*})\) over low-degree nodes; and (2) how to derive a unified learning scheme that allows \(f(\cdot;\mathbf{\phi})\) to not only improve low-degree nodes but also maintain the GNN's original superiority over high-degree nodes. ### 3.2 The Proposed Framework: GraphPatcher Our proposed GraphPatcher is a test-time augmentation framework for GNNs to mitigate their degree biases. As shown in Figure 2, GraphPatcher is presented with a sequence of ego-graphs corrupted by increasing strengths. Starting from the most corrupted graphs, GraphPatcher iteratively generates patching nodes to augment the anchor nodes. Compared with the corrupted graphs next in the hierarchy, the patched graphs should allow the target GNN to deliver similar outputs. Through this scheme, GraphPatcher learns how to patch low-degree nodes while preserving the superior performance over high-degree nodes. #### 3.2.1 Patching Ego-graphs via Prediction Reconstruction In order to patch low-degree nodes, a straightforward approach is to corrupt high-degree nodes into low-degree nodes, and let the learning model patch the corrupted nodes to restore their original properties [25, 13]. However, patching low-degree nodes not only affects their own representations but also those of their neighbors, due to the message-passing mechanism of GNNs as well as the non-i.i.d. property of nodes in a graph.
Besides, modeling over the entire graph requires the learning model to consider all potential circumstances, whose overheads grow quadratically w.r.t. the number of nodes. Consequently, it becomes challenging to simultaneously determine both features and neighbors of the patching nodes given the entire graph. To reduce the complexity of the optimization process, instead of working over the entire graph, we conduct node patching over ego-graphs and regard each ego-graph as an i.i.d. sample of the anchor node [56, 19]. For each node \(v_{i}\), we have \(f_{g}(G;\mathbf{\theta})[v_{i}]=f_{g}(\mathcal{G}(v_{i});\mathbf{\theta})[v_{i}]\) if \(k\) equals the number of layers in \(f_{g}(\cdot;\mathbf{\theta})\). To further simplify the optimization process, we directly wire the generated virtual nodes to the anchor node (i.e., the generated virtual nodes are the first-order neighbors of the anchor node). This implementation is simple yet effective, because we no longer consider the location to place the patching node: any modification that affects the latent representation of the anchor node can be achieved by patching nodes (with different features) directly to the anchor nodes. Figure 2: GraphPatcher is presented with ego-graphs corrupted by increasing strengths (i.e., the top half of the figure). From the most corrupted graph, it iteratively generates patching nodes to the anchor node, such that the target GNN behaves similarly given the currently patched graph or the corrupted graph next in the hierarchy (i.e., the bottom half of the figure). We start explaining GraphPatcher by the most basic case where we only conduct node patching once. Specifically, given a trained GNN \(f_{g}(\cdot;\mathbf{\theta}^{*})\), an anchor node \(v_{i}\), and a corruption function \(\mathcal{T}(\cdot;t)\) with strength \(t\) (i.e., first-order neighbor dropping with probability \(t\) to simulate a low-degree scenario), that is, \(\mathcal{G}^{\prime}(v_{i})=(V^{\prime}(v_{i}),E^{\prime}(v_{i}))=\mathcal{T}( \mathcal{G}(v_{i}),t)\). GraphPatcher \(f(\cdot;\mathbf{\phi})\) takes the corrupted ego-graph \(\mathcal{G}^{\prime}(v_{i})\) as input and outputs the augmented ego-graph \(\hat{\mathcal{G}}(v_{i})\) with a patching node \(v_{p}\) and its feature \(\mathbf{x}_{p}\), which is directly connected to \(v_{i}\). That is, \[\hat{\mathcal{G}}(v_{i})=f(\mathcal{G}^{\prime}(v_{i});\mathbf{\phi}),\ \ \text{where}\ \ \hat{V}=V^{\prime}(v_{i})\cup\{v_{p}\},\ \ \hat{E}=E^{\prime}(v_{i})\cup\{e_{(i,p)}\}, \tag{2}\]
Intuitively, the reconstruction process above enforces GraphPatcher to remedy the corrupted neighborhood caused by \(\mathcal{T}(\cdot;t)\) via adding a patching node directly to the anchor node. It is philosophically similar to the existing works (e.g., TuneUp [13] and Tail-GNN [25]), where models gain better generalization over low-degree nodes via the corrupted high-degree nodes. Empirically, we observe that this branch of approaches can effectively enhance performance over low-degree nodes. Though promising, according to our empirical studies, it falls short on the high-degree node that original GNNs perform well at. This phenomenon may be attributed to the unintentially created out-of-distribution scenario [42], wherein models primarily encounter nodes with low degrees during the training. Consequently, the performance of GNNs, which is typically proficient with high-degree nodes, is adversely affected and downgraded. Footnote 1: KL divergence used here is equal to the regularized cross-entropy. It is strongly convex and Lipschitz continuous due to the incorporation of \(\epsilon\). These two properties are required for the derivation of Theorem 1. #### 3.2.2 Iterative Patching to Mitigate Degree Bias In this work, we emphasize that: _mitigating degree bias should not focus specifically on the low-degree nodes: trading off performance on high-degree nodes for that on low-degree nodes simply creates a new bias towards high-degree nodes_. Therefore, besides enhancing the performance on low-degree nodes, maintaining GNN's original superiority on high-degree nodes is equally critical. This behavior is empirically desirable because practitioners can universally apply GraphPatcher to all nodes without, like previous works do, manually discovering the degree threshold that differentiates the low- and high-degree nodes. Furthermore, the fact that these frameworks are applicable only to low-degree nodes indicates a lack of robustness: further remedying a neighborhood that is informative enough to deliver a good classification result should not jeopardize the performance. To mitigate the degree bias, we propose a novel training scheme for GraphPatcher such that it observes both low- and high-degree nodes simultaneously during the optimization. Specifically, given a node \(v_{i}\), we firstly create a sequence of \(M\) corrupted ego-graphs of \(v_{i}\), denoted as \(\mathcal{S}(v_{i})=[\mathcal{G}^{\prime}(v_{i})_{m}=\mathcal{T}(\mathcal{G}(v _{i}),t_{m})]_{m=1}^{M}\), with decreasing corruption strength (i.e., \(\forall\ m,n\in\{1,\dots,M\}\), \(t_{m}>t_{n}\) if \(m<n\)). Instead of the one-step patching to match the prediction on the original ego-graph as described in Section 3.2.1, GraphPatcher traverses \(\mathcal{S}(v_{i})\) and recursively patches the corrupted ego-graph to match the target GNN's prediction on the ego-graph next in the sequence. 
As also illustrated in Figure 2, this optimization process is formulated as: \[\arg\min_{\mathbf{\phi}}\ \sum_{v_{i}\in V_{\text{tr}}}\sum_{m=1}^{M-1}\text{KL-Div} \Big{(}f_{g}\big{(}\mathcal{G}^{\prime}(v_{i})_{m+1};\mathbf{\theta}^{*}\big{)}[v_ {i}],f_{g}(\hat{\mathcal{G}}(v_{i})_{m};\mathbf{\theta}^{*})[v_{i}]\Big{)}, \tag{4}\] \[\text{s.t.}\ \ \hat{\mathcal{G}}(v_{i})_{m}=f(\hat{\mathcal{G}}(v_{i})_{m-1} ;\mathbf{\phi}),\] where \(\hat{\mathcal{G}}(v_{i})_{m}=(\hat{V}_{m},\hat{E}_{m})\) with \(\hat{V}_{m}=V^{\prime}_{1}(v_{i})\cup\{v_{p}\}_{p=1}^{m}\), \(\hat{E}_{m}=E^{\prime}_{1}(v_{i})\cup\{e_{(i,p)}\}_{p=1}^{m}\), and \(\hat{\mathcal{G}}(v_{i})_{0}=\mathcal{G}^{\prime}(v_{i})_{1}\). The one-step patching described in Section 3.2.1 maps low-degree anchor nodes directly to the distributions of high-degree nodes. During this process, the model does not observe the intermediate distributions between low- and high-degree nodes and hence delivers sub-optimal performance. Therefore, we design GraphPatcher as an iterative multi-step framework. At each step, it takes the previously patched ego-graph as input and further remedies the partially patched ego-graph to match the GNN's prediction on the ego-graph next in the sequence. This scheme enables GraphPatcher to learn to patch low-degree nodes in early steps when the ego-graphs are heavily corrupted (e.g., the low-degree case in Figure 2) and to maintain the original performance in later steps when ego-graphs are lightly corrupted (e.g., the high-degree case in Figure 2). Specifically, at the \(m\)-th patching step, the currently patched ego-graph \(\hat{\mathcal{G}}(v_{i})_{m}\) reflects the neighbor distribution of ego-graphs corrupted by a specific strength of \(t_{m+1}\). GraphPatcher takes \(\hat{\mathcal{G}}(v_{i})_{m}\) as input and further generates another patching node \(v_{m+1}\) to approach the neighbor distribution of ego-graphs corrupted by a slightly weaker strength of \(t_{m+2}\). This process iterates until GraphPatcher traverses \(\mathcal{S}(v_{i})\). Intuitively, the incorporation of \(v_{m+1}\) enriches the neighbor distribution by an amount of \(t_{m+2}-t_{m+1}\) corruption strength. This optimization scheme allows GraphPatcher to observe neighbor distributions with varying corruption strengths and makes our proposal applicable to both low- and high-degree nodes. However, the target distribution at each step (i.e., \(f_{g}\big{(}\mathcal{G}^{\prime}(v_{i})_{m+1};\mathbf{\theta}\big{)}[v_{i}]\) in Equation (4)) is not deterministic due to the stochastic nature of the corruption function \(\mathcal{T}\). Given an ego-graph \(\mathcal{G}(v_{i})\) and a corruption strength \(t\), one can at most generate \(\binom{|V_{i}|}{(1-t)|V_{i}|}\) different corrupted ego-graphs. With a large corruption strength (e.g., ego-graphs early in the sequence \(\mathcal{S}(v_{i})\)), two corrupted ego-graphs generated by the same exact priors might exhibit completely different topologies. Such differences could bring high variance to the supervision signal and instability to the optimization process.
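A minimal sketch of this iterative objective is given below, assuming a frozen target GNN `gnn` and a patcher `patcher` with hypothetical call signatures (ego-graph in, per-node logits out and patched ego-graph out, respectively); note that it uses a single corrupted target per step and therefore inherits exactly the variance issue just described:

```python
import torch

def patching_loss(patcher, gnn, corrupted_seq, anchor, eps=1e-8):
    """Iterative patching objective of Eq. (4) for one anchor node.

    corrupted_seq: list of M ego-graphs of `anchor`, ordered by decreasing
    corruption strength (index 0 = most corrupted, i.e. G'_1).
    `gnn` is frozen; `patcher` adds one virtual neighbor per call.
    """
    loss = torch.zeros(())
    patched = corrupted_seq[0]                  # \hat{G}_0 = G'_1
    for m in range(len(corrupted_seq) - 1):
        patched = patcher(patched)              # \hat{G}_m: one more virtual node
        with torch.no_grad():                   # target: prediction on G'_{m+1}
            target = gnn(corrupted_seq[m + 1])[anchor].softmax(-1)
        pred = gnn(patched)[anchor].softmax(-1)
        # eps-smoothed KL divergence between target and patched predictions,
        # cf. the KL-Div definition below Eq. (3)
        loss = loss + ((target + eps) * ((target + eps).log()
                                         - (pred + eps).log())).sum()
    return loss
```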
To alleviate the issue above, at each step we sample \(L\) ego-graphs with the same corruption strength and let GraphPatcher approximate multiple predictions over them, formulated as: \[\mathcal{L}_{\text{patch}}=\sum_{v_{i}\in V_{\text{tr}}}\sum_{m=1}^{M-1}\sum_{l=1 }^{L}\text{KL-Div}\Big{(}f_{g}\big{(}\mathcal{G}^{\prime}(v_{i})_{m+1}^{l}; \mathbf{\theta}^{\ast}\big{)}[v_{i}],f_{g}\big{(}\hat{\mathcal{G}}(v_{i})_{m}; \mathbf{\theta}^{\ast}\big{)}[v_{i}]\Big{)}, \tag{5}\] where \(\hat{\mathcal{G}}(v_{i})_{m}=f(\hat{\mathcal{G}}(v_{i})_{m-1};\mathbf{\phi})\) and \(\mathcal{G}^{\prime}(v_{i})_{m+1}^{l}\) refers to one of the \(L\) target corrupted ego-graphs that GraphPatcher aims to approximate at the \(m\)-th step. This approach allows GraphPatcher to patch the anchor node towards a well-approximated region where its high-degree counterparts should be located, instead of one point randomly sampled from this region. With \(M-1\) virtual nodes patched to the ego-graph, we further ask GraphPatcher to generate a last patching node to \(\hat{\mathcal{G}}(v_{i})_{M-1}\) and enforce the resulting graph \(\hat{\mathcal{G}}(v_{i})_{M}\) to match the GNN's prediction on the original ego-graph. The last patching node can be regarded as a slack variable to complement minor differences between the original and the least corrupted ego-graphs, formulated as: \[\mathcal{L}_{\text{recon}}=\sum_{v_{i}\in V_{\text{tr}}}\text{KL-Div}\Big{(}f_{g }\big{(}\mathcal{G}(v_{i});\mathbf{\theta}^{\ast}\big{)}[v_{i}],f_{g}\big{(}\hat{ \mathcal{G}}(v_{i})_{M};\mathbf{\theta}^{\ast}\big{)}[v_{i}]\Big{)}, \tag{6}\] where \(\hat{\mathcal{G}}(v_{i})_{M}=f(\hat{\mathcal{G}}(v_{i})_{M-1};\mathbf{\phi})\). \(\mathcal{L}_{\text{recon}}\) (Equation (6)) also prevents GraphPatcher from overfitting to the low-degree nodes and enforces GraphPatcher to maintain the target GNN's performance over high-degree nodes, since only marginal distribution modification should be expected with this last patching node. Hence, GraphPatcher is optimized by a linear combination of the above two objectives (i.e., \(\arg\min_{\mathbf{\phi}}\mathcal{L}_{\text{patch}}+\mathcal{L}_{\text{recon}}\)). #### 3.2.3 Theoretical Analysis As shown in Equation (5), one of the important factors that contribute to the success of GraphPatcher is sampling multiple ego-graphs with the same corruption strength. The following theorem shows that the error is bounded w.r.t. the number of sampled ego-graphs \(L\). **Theorem 1**.: _Assuming the parameters of GraphPatcher are initialized from the set \(P_{\beta}=\{\mathbf{\phi}:||\mathbf{\phi}-\mathcal{N}(\mathbf{0}_{|\mathbf{\phi}|};\mathbf{1}_{| \mathbf{\phi}|})||_{F}<\beta\}\) where \(\beta>0\), with probability at least \(1-\delta\) for all \(\mathbf{\phi}\in P_{\beta}\), the error (i.e., \(\mathbb{E}(\mathcal{L}_{\text{patch}})-\mathcal{L}_{\text{patch}}\)) is bounded by \(\mathcal{O}(\beta\sqrt{\frac{|\mathbf{\phi}|}{L}}+\sqrt{\frac{\log(1/\delta)}{L}})\)._ The proof of Theorem 1 is provided in Appendix C. From the above theorem, we note that without the sampling strategy (i.e., \(L=1\)), the generalization error depends only on the number of parameters (i.e., \(|\mathbf{\phi}|\)) given the same objective function, which could lead to high variance in the supervision signal and instability in the optimization process. According to this theorem and our empirical observation, an affordable value of \(L\) (e.g., \(L=10\)) delivers stable results across datasets. ## 4 Experiments ### 4.1 Experimental Setting **Datasets**.
We conduct comprehensive experiments on seven real-world benchmark datasets that are broadly utilized by the graph community, including Cora, Citeseer, Pubmed, Wiki.CS, Amazon-Photo, Coauthor-CS, ogbn-arxiv, Actor, and Chameleon [44; 28; 12; 33]. This list of datasets covers graphs with distinctive characteristics (i.e., graphs with different domains and dimensions) to fully evaluate the effectiveness of GraphPatcher. The details of these datasets can be found in Appendix A. **Baselines**. We compare GraphPatcher with six state-of-the-art graph learning frameworks from three branches. The first branch specifically aims at enhancing the performance on low-degree nodes, including Tail-GNN [25], Cold Brew [55], and TuneUp [13]. The second branch consists of frameworks that focus on handling out-of-distribution scenarios, including EERM [42] and GTrans [15]. We list this branch of frameworks as baselines because the sub-optimal performance of GNNs over low-degree nodes could be regarded as an out-of-distribution scenario. As GraphPatcher is a test-time augmentation framework, the last branch of baselines includes DropEdge, a data augmentation framework employed during training. **Evaluation Protocol**. We evaluate all models using the node classification task [22; 38], quantified by the accuracy score. For datasets with publicly available splits (i.e., ogbn-arxiv, Cora, Citeseer, and Pubmed), we employ the provided splits for model training and testing. For the other datasets, we create a random 10%/10%/80% training/validation/testing split to simulate a semi-supervised learning setting. All reported performance is averaged over 10 independent runs with different random seeds. Both mean values and standard deviations for the performances of all models are reported. Besides mitigating the degree bias for supervised GNNs, GraphPatcher is also applicable to self-supervised GNNs. To evaluate model performance in this setting, we apply GraphPatcher and TuneUp to state-of-the-art self-supervised GNNs including DGI [39], GRACE [57], and ParetoGNN [20]. We only compare our proposal with TuneUp since other frameworks require specific model architectures and hence do not apply to self-supervised GNNs. **Hyper-parameters**. We use the optimal settings given by the authors for all baselines on the shared datasets and a simple two-layer GCN [22] as the backbone model architecture for all applicable baselines. Hyper-parameters we tune for GraphPatcher include the learning rate, the hidden dimension, the augmentation strength at each step, and the total number of patching steps, with details described in Appendix B. Besides, all of our models are trained on a single RTX3090 with 24GB VRAM; additional hardware information can also be found in the appendix. ### 4.2 Performance Comparison with Baselines We compare GraphPatcher with six state-of-the-art frameworks that mitigate the degree bias problem; the performances of all models are shown in Table 1. First, we notice that the problem of degree bias is quite serious across datasets for GCN: the performances on low-degree nodes are \(\sim\)10% lower than those over high-degree nodes. Comparing GCN with Cold Brew, Tail-GNN, and TuneUp, we can observe that frameworks that focus specifically on low-degree nodes can usually enhance GNNs' performance over the lower percentile (e.g., 1.2% accuracy gain on Cora by TuneUp, 0.74% on Citeseer by Cold Brew, 1.38% on Pubmed by Tail-GNN, etc.).
However, these frameworks fall short on the high-degree nodes and sometimes perform worse than the vanilla GCN (e.g., -2.7% accuracy degradation on Cora by Tail-GNN, -11.42% on Wiki.CS by ColdBrew, and -2.5% on Amazon Photo by TuneUp). This phenomenon could result from the fact that these frameworks unintentionally create an artificial out-of-distribution scenario: they only observe low-degree nodes during training, which degrades performance on the high-degree nodes that GNNs originally perform well at. Comparing GCN with GTrans and EERM, we observe that they deliver performances similar to the vanilla GCN, indicating that frameworks targeting out-of-distribution scenarios cannot mitigate degree biases. Comparing GraphPatcher with all baselines, we notice that our proposed GraphPatcher consistently improves the low-degree performance, with an average gain of 2.23 in accuracy score. Besides, unlike other frameworks whose performance degrades over high-degree nodes, GraphPatcher maintains GCN's original high-degree superiority, due to our iterative node patching. On average, GraphPatcher improves GCN's overall performance by 1.4 in accuracy score across datasets. We further apply GraphPatcher to other GNN architectures (i.e., GraphSAGE [9] and GAT [38]) and compare its performance to TuneUp. We only compare with TuneUp since other baselines require specific model architectures that do not allow a different backbone. From Table 2, we can observe that the issue of degree bias still exists on GAT and GraphSAGE, with a performance gap between low- and high-degree nodes of around 10%. Both TuneUp and GraphPatcher can improve the performance over low-degree nodes. Specifically, TuneUp on average improves low-degree accuracy by 0.27 for GraphSAGE and 0.40 for GAT across datasets, whereas GraphPatcher improves it by 1.13 for GraphSAGE and 1.66 for GAT, outperforming TuneUp by a large margin.
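Throughout these comparisons, "low-degree" and "high-degree" refer to test nodes in the lower and upper degree percentiles. A minimal sketch of this bucketed evaluation is shown below; the cutoff fraction `q` and the tensor-based interface are illustrative assumptions, as the paper does not state a specific percentile value here.

```python
import torch

def degree_bucketed_accuracy(logits, labels, degrees, test_mask, q=1/3):
    """Accuracy over low-degree (lower percentile) and high-degree (upper
    percentile) test nodes, plus the overall test accuracy."""
    pred = logits.argmax(dim=-1)
    deg_test = degrees[test_mask].float()
    lo_cut = torch.quantile(deg_test, q)        # lower-percentile cutoff
    hi_cut = torch.quantile(deg_test, 1 - q)    # upper-percentile cutoff
    low = test_mask & (degrees <= lo_cut)
    high = test_mask & (degrees >= hi_cut)
    acc = lambda m: (pred[m] == labels[m]).float().mean().item()
    return acc(low), acc(high), acc(test_mask)
```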
\begin{table}
\begin{tabular}{l|c c c c c c c c c}
\hline \hline
Method & Cora & Citeseer & Pubmed & Wiki.CS & Am.Photo & Co.CS & Arxiv & Chameleon & Actor \\
\hline
\multicolumn{10}{c}{Accuracy on Low-degree Nodes (Lower Percentile)} \\
\hline
GCN & 73.27\({}_{\pm 0.01}\) & 64.86\({}_{\pm 0.02}\) & 76.88\({}_{\pm 0.04}\) & 72.98\({}_{\pm 0.02}\) & 75.59\({}_{\pm 0.43}\) & 84.59\({}_{\pm 0.45}\) & 63.15\({}_{\pm 0.13}\) & 54.05\({}_{\pm 0.18}\) & 27.30\({}_{\pm 0.32}\) \\
ColdBrew & 73.82\({}_{\pm 0.98}\) & 65.00\({}_{\pm 0.02}\) & 77.22\({}_{\pm 0.63}\) & 73.98\({}_{\pm 0.32}\) & 76.18\({}_{\pm 0.80}\) & 85.56\({}_{\pm 0.65}\) & 63.02\({}_{\pm 0.21}\) & 53.41\({}_{\pm 0.22}\) & 27.88\({}_{\pm 0.13}\) \\
Tail-GNN & 71.71\({}_{\pm 0.80}\) & 57.60\({}_{\pm 0.33}\) & 75.38\({}_{\pm 0.78}\) & **74.30\({}_{\pm 0.72}\)** & 77.22\({}_{\pm 0.61}\) & 85.13\({}_{\pm 0.90}\) & OOM & 53.48\({}_{\pm 0.94}\) & 27.80\({}_{\pm 0.82}\) \\
TuneUp & 74.47\({}_{\pm 0.43}\) & 65.17\({}_{\pm 0.22}\) & 77.18\({}_{\pm 0.39}\) & 72.60\({}_{\pm 0.78}\) & 67.08\({}_{\pm 0.62}\) & 64.86\({}_{\pm 0.50}\) & 63.34\({}_{\pm 0.32}\) & 53.87\({}_{\pm 0.43}\) & 27.94\({}_{\pm 0.14}\) \\
EERM & 73.40\({}_{\pm 0.06}\) & 64.27\({}_{\pm 0.78}\) & 76.30\({}_{\pm 0.20}\) & 73.12\({}_{\pm 0.68}\) & 75.15\({}_{\pm 0.59}\) & 84.82\({}_{\pm 0.74}\) & 63.20\({}_{\pm 0.11}\) & 54.11\({}_{\pm 0.32}\) & 27.48\({}_{\pm 0.39}\) \\
GTrans & 73.16\({}_{\pm 0.64}\) & 64.95\({}_{\pm 0.83}\) & 77.05\({}_{\pm 0.75}\) & 12.51\({}_{\pm 0.50}\) & 75.55\({}_{\pm 0.55}\) & 84.74\({}_{\pm 0.70}\) & 62.88\({}_{\pm 0.11}\) & 54.29\({}_{\pm 0.14}\) & 27.53\({}_{\pm 0.21}\) \\
DropEdge & 73.57\({}_{\pm 0.79}\) & 65.47\({}_{\pm 0.72}\) & 75.68\({}_{\pm 0.72}\) & 73.94\({}_{\pm 0.70}\) & 76.49\({}_{\pm 0.54}\) & 83.41\({}_{\pm 0.33}\) & 54.13\({}_{\pm 0.33}\) & 54.12\({}_{\pm 0.41}\) & 27.39\({}_{\pm 0.24}\) \\
GraphPatcher & **78.08\({}_{\pm 0.06}\)** & **67.27\({}_{\pm 0.78}\)** & **79.85\({}_{\pm 0.21}\)** & 74.04\({}_{\pm 0.86}\) & **77.84\({}_{\pm 0.36}\)** & **86.76\({}_{\pm 0.54}\)** & **64.01\({}_{\pm 0.12}\)** & **54.48\({}_{\pm 0.17}\)** & **29.27\({}_{\pm 0.57}\)** \\
\hline \hline
\multicolumn{10}{c}{Accuracy on High-degree Nodes (Upper Percentile)} \\
\hline
GCN & 86.83\({}_{\pm 0.17}\) & **77.25\({}_{\pm 1.00}\)** & 80.84\({}_{\pm 0.76}\) & 83.40\({}_{\pm 0.70}\) & 84.07\({}_{\pm 0.71}\) & 90.20\({}_{\pm 0.37}\) & 80.46\({}_{\pm 0.18}\) & 54.11\({}_{\pm 0.78}\) & 27.41\({}_{\pm 0.20}\) \\
ColdBrew & 84.80\({}_{\pm 0.04}\) & 75.33\({}_{\pm 0.84}\) & 78.66\({}_{\pm 0.73}\) & 71.98\({}_{\pm 0.70}\) & 77.05\({}_{\pm 0.74}\) & 82.16\({}_{\pm 0.10}\) & 70.57\({}_{\pm 0.50}\) & 53.72\({}_{\pm 0.48}\) & 26.67\({}_{\pm 0.20}\) \\
Tail-GNN & 84.13\({}_{\pm 0.48}\) & 78.53\({}_{\pm 0.84}\) & 78.74\({}_{\pm 0.34}\) & 78.91\({}_{\pm 0.70}\) & 80.32\({}_{\pm 0.60}\) & 86.75\({}_{\pm 0.50}\) & OOM & **54.53\({}_{\pm 0.12}\)** & 27.13\({}_{\pm 0.44}\) \\
TuneUp & 87.13\({}_{\pm 0.67}\) & 76.95\({}_{\pm 0.83}\) & 81.74\({}_{\pm 0.31}\) & 83.11\({}_{\pm 0.57}\) & 81.57\({}_{\pm 0.60}\) & 90.65\({}_{\pm 0.80}\) & 80.09\({}_{\pm 0.51}\) & 54.25\({}_{\pm 0.59}\) & 26.64\({}_{\pm 0.71}\) \\
EERM & 85.99\({}_{\pm 0.76}\) & 76.52\({}_{\pm 0.22}\) & 79.98\({}_{\pm 0.60}\) & 82.98\({}_{\pm 0.60}\) & 84.32\({}_{\pm 0.96}\) & 90.17\({}_{\pm 0.11}\) & 80.37\({}_{\pm 0.72}\) & 54.41\({}_{\pm 0.17}\) & 27.39\({}_{\pm 0.44}\) \\
GTrans & 86.32\({}_{\pm 0.34}\) & 76.60\({}_{\pm 0.44}\) & 80.56\({}_{\pm 0.92}\) & 83.42\({}_{\pm 0.42}\) & 83.95\({}_{\pm 0.99}\) & 89.99\({}_{\pm 0.10}\) & **80.77\({}_{\pm 0.20}\)** & 54.21\({}_{\pm 0.19}\) & 27.29\({}_{\pm 0.12}\) \\
DropEdge & 86.53\({}_{\pm 0.90}\) & 76.35\({}_{\pm 0.17}\) & 81.44\({}_{\pm 0.51}\) & 83.37\({}_{\pm 0.73}\) & **84.97\({}_{\pm 0.96}\)** & 89.28\({}_{\pm 0.08}\) & 86.04\({}_{\pm 0.36}\) & 54.17 & \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Performance comparison with baselines.

### Performance of GraphPatcher for Self-supervised GNNs

To fully demonstrate the effectiveness of GraphPatcher, we also apply our proposal to self-supervised GNNs, as shown in Table 3. We can observe that self-supervised learning can mitigate degree bias by itself, as evidenced by smaller gaps between low- and high-degree nodes than those of semi-supervised GNNs. Combined with GraphPatcher, the degree biases can be further mitigated without sacrificing GNN's original superiority over high-degree nodes. On average, GraphPatcher can enhance the low-degree performance of these three self-supervised GNNs by 1.78, 0.74, and 1.36 accuracy score, respectively.

### Effectiveness of GraphPatcher for Enhancing SoTA Method

We apply GraphPatcher to GRAND [5], a strong GNN that utilizes a random propagation strategy to perform graph data augmentation and significantly improves node classification performance. The performance improvement brought by GraphPatcher is shown in Table 4. We observe that GraphPatcher can still consistently improve node classification for GRAND. Specifically, on low-degree nodes, GraphPatcher improves the accuracy score by 1.40, 2.23, and 4.20 on Cora, Citeseer, and Pubmed, respectively. Overall, GraphPatcher further enhances the SoTA performance on these three datasets, with accuracy scores of 85.90, 76.10, and 84.20. The significant gain from GraphPatcher indicates that the effectiveness brought by the test-time augmentation does not overlap with that of the data augmentation applied during training.

### Performance w.r.t. the Number of Patching Nodes

To investigate the necessity of patching multiple nodes, we conduct experiments on the number of patching nodes at test time. As shown in Figure 3, we notice that the overall performance gradually increases as the number of patching nodes increases, demonstrating that multiple patching nodes are required to remedy the incomplete neighborhood of low-degree nodes. Besides, we discover that the performance of GraphPatcher saturates with around four nodes patched, which aligns with our training procedure, where the length of the ego-graph sequence is at most five. Experiments concerning the number of patching nodes during the optimization and the number of sampled ego-graphs per corruption strength (i.e., \(M\) and \(L\) in Equation (5)) can be found in Appendix B.

## 5 Discussion w.r.t. Diffusion Models

Both diffusion models and GraphPatcher apply multiple corruptions of increasing strength to training samples and generate examples in an iterative fashion. This scheme is conceptually inspired by heat diffusion from physics.
However, the motivations behind them are different: diffusion models focus on generation quality (i.e., fidelity to the original data distribution), whereas ours aims at the effect brought by the generated nodes (i.e., the performance improvement). Specifically, diffusion models [10; 32] aim at learning the probability distribution of the data and accordingly generating examples following the learned distribution. Their goal is to generate samples that follow the original data distribution, agnostic of any other factor like the target GNN we have in our scenario. Whereas for GraphPatcher, we aim at generating nodes for ego-graphs such that the target GNN delivers better predictions when the node degree is low. We mostly care about performance improvement, and the generated node may be very different from the original nodes in the graph.

\begin{table}
\begin{tabular}{l|c c c}
\hline \hline
Method & Cora & Citeseer & Pubmed \\
\hline
\multicolumn{4}{c}{Low-degree Nodes (Lower Percentile)} \\
\hline
GRAND & 80.18\({}_{\pm 0.64}\) & 70.57\({}_{\pm 0.68}\) & 80.48\({}_{\pm 0.14}\) \\
+GraphPatcher & 81.58\({}_{\pm 0.45}\) & 72.73\({}_{\pm 0.29}\) & 84.68\({}_{\pm 0.29}\) \\
\hline
\multicolumn{4}{c}{High-degree Nodes (Upper Percentile)} \\
\hline
GRAND & 88.32\({}_{\pm 0.75}\) & 79.64\({}_{\pm 0.86}\) & 83.53\({}_{\pm 0.52}\) \\
+GraphPatcher & 88.92\({}_{\pm 0.18}\) & 79.54\({}_{\pm 0.13}\) & 84.43\({}_{\pm 0.21}\) \\
\hline
\multicolumn{4}{c}{Overall Performance} \\
\hline
GRAND & 85.22\({}_{\pm 0.80}\) & 74.90\({}_{\pm 0.77}\) & 82.30\({}_{\pm 0.41}\) \\
+GraphPatcher & 85.90\({}_{\pm 0.44}\) & 76.10\({}_{\pm 0.38}\) & 84.20\({}_{\pm 0.26}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Effectiveness for SoTA.

Figure 3: Overall perf. (y-axis) w.r.t. the number of patching nodes (x-axis).

\begin{table}
\begin{tabular}{l|c c c}
\hline \hline
Method & Cora & Pubmed & Wiki.CS \\
\hline
\multicolumn{4}{c}{Low-degree Nodes (Lower Percentile)} \\
\hline
DGI & 78.47\({}_{\pm 0.37}\) & 75.63\({}_{\pm 0.82}\) & 75.86\({}_{\pm 0.61}\) \\
+GraphPatcher & 79.95\({}_{\pm 0.53}\) & 78.04\({}_{\pm 0.97}\) & 77.31\({}_{\pm 0.91}\) \\
GRACE & 77.81\({}_{\pm 0.73}\) & 77.80\({}_{\pm 0.65}\) & 74.31\({}_{\pm 0.63}\) \\
+GraphPatcher & 78.53\({}_{\pm 0.82}\) & 78.49\({}_{\pm 0.94}\) & 75.12\({}_{\pm 0.34}\) \\
ParetoGNN & 78.85\({}_{\pm 0.71}\) & 78.32\({}_{\pm 0.33}\) & 74.71\({}_{\pm 0.18}\) \\
+GraphPatcher & 79.91\({}_{\pm 0.62}\) & 79.11\({}_{\pm 0.89}\) & 76.41\({}_{\pm 0.22}\) \\
\hline
\multicolumn{4}{c}{High-degree Nodes (Upper Percentile)} \\
\hline
DGI & 86.83\({}_{\pm 0.82}\) & 81.14\({}_{\pm 0.28}\) & 81.09\({}_{\pm 0.81}\) \\
+GraphPatcher & 86.91\({}_{\pm 0.10}\) & 82.31\({}_{\pm 0.53}\) & 80.95\({}_{\pm 0.19}\) \\
GRACE & 85.03\({}_{\pm 0.05}\) & 78.74\({}_{\pm 0.84}\) & 83.91\({}_{\pm 0.56}\) \\
+GraphPatcher & 85.12\({}_{\pm 0.25}\) & 79.58\({}_{\pm 0.31}\) & 84.12\({}_{\pm 0.22}\) \\
ParetoGNN & 87.03\({}_{\pm 0.84}\) & 80.89\({}_{\pm 0.84}\) & 81.57\({}_{\pm 0.84}\) \\
+GraphPatcher & 87.32\({}_{\pm 0.27}\) & 80.55\({}_{\pm 0.32}\) & 81.78\({}_{\pm 0.53}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Effectiveness for self-supervised GNNs.

## 6 Discussion w.r.t. Generation Methods for Graph

Most graph generation frameworks (including those using diffusion models) explore iterative generation schemes to synthesize real graphs [58; 46; 6; 30; 16; 40]. They improve the generation quality and focus on applications such as molecule design, protein design, and program synthesis.
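In contrast to such generation pipelines, the iterative generation GraphPatcher performs at test time reduces to a short loop over its trained generator. A minimal sketch follows, where `patcher.patch` is again a hypothetical interface that appends one generated node, and the default of four patching nodes follows the saturation point observed in Figure 3:

```python
import torch

@torch.no_grad()
def patch_and_predict(patcher, target_gnn, ego_graph, num_patches=4):
    """Test-time augmentation: iteratively append generated virtual nodes to the
    ego-graph, then let the frozen target GNN classify the anchor node."""
    g = ego_graph
    for _ in range(num_patches):
        g = patcher.patch(g)   # returns the ego-graph with one more virtual node
    return target_gnn(g)[g.anchor].argmax(dim=-1)
```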
Though GraphPatcher also generates patching nodes for ego-graphs, ours is a different research direction from these methods. We do not focus on whether or not the generated patching nodes are faithful to the original data distribution, as long as the low-degree performance is enhanced and the high-degree performance is maintained. Another relevant work named GPT-GNN [14] explores iterative node generation for pre-training, which also falls under the category of maintaining the original data distribution. In summary, GraphPatcher is relevant to these frameworks in the sense that it generates nodes to add to ego-graphs. However, our proposal is motivated by a different reason, and we aim at the performance improvement brought by the generated nodes in downstream tasks.

## 7 Conclusion

We study the problem of degree bias underlying GNNs and accordingly propose a test-time augmentation framework, namely GraphPatcher. GraphPatcher iteratively patches ego-graphs with its generated virtual nodes to remedy the incomplete neighborhood. Through our dedicated optimization scheme, GraphPatcher not only patches low-degree nodes but also maintains GNN's original superior performance over high-degree nodes. Comprehensive experiments are conducted over nine benchmark datasets, and our proposal can consistently enhance GNN's overall performance by up to 3.6% and low-degree performance by up to 6.5%, outperforming all baselines by a large margin. Besides, GraphPatcher can also mitigate the degree bias issue for self-supervised GNNs. When applied to graph learning methods with state-of-the-art performance (i.e., GRAND), GraphPatcher can further improve the SoTA performance by a large margin, indicating that the effectiveness brought by the test-time augmentation does not overlap with existing inductive biases.

## Limitation and Broader Impact

One limitation is the additional overhead entailed by generating ego-graphs. To address this limitation, we generate all ego-graphs before the optimization to avoid duplicated computations. This operation requires more hard-disk storage, which is relatively cheap compared with computational resources. Furthermore, we observe no ethical concern entailed by our proposal, but we note that both ethical and unethical applications based on graphs may benefit from the effectiveness of our work. Care should be taken to ensure socially positive and beneficial results of machine learning algorithms.

## Acknowledgement

We appreciate Shifu Hou from the University of Notre Dame for valuable discussions and suggestions. We would also like to thank the anonymous reviewers for their constructive suggestions and comments (i.e., experiments over heterophilic datasets, connections to diffusion models, and discussion w.r.t. iterative generation models for graphs). This work is partially supported by the NSF under grants IIS-2334193, IIS-2321504, IIS-2203262, IIS-2214376, IIS-2217239, OAC-2218762, CNS-2203261, and CMMI-2146076. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of any funding agencies.
2305.18221
GazeGNN: A Gaze-Guided Graph Neural Network for Chest X-ray Classification
Eye tracking research is important in computer vision because it can help us understand how humans interact with the visual world. Specifically for high-risk applications, such as in medical imaging, eye tracking can help us to comprehend how radiologists and other medical professionals search, analyze, and interpret images for diagnostic and clinical purposes. Hence, the application of eye tracking techniques in disease classification has become increasingly popular in recent years. Contemporary works usually transform gaze information collected by eye tracking devices into visual attention maps (VAMs) to supervise the learning process. However, this is a time-consuming preprocessing step, which stops us from applying eye tracking to radiologists' daily work. To solve this problem, we propose a novel gaze-guided graph neural network (GNN), GazeGNN, to leverage raw eye-gaze data without converting it into VAMs. In GazeGNN, to directly integrate eye gaze into image classification, we create a unified representation graph that models both images and gaze pattern information. With this benefit, we develop a real-time, real-world, end-to-end disease classification algorithm for the first time in the literature. This achievement demonstrates the practicality and feasibility of integrating real-time eye tracking techniques into the daily work of radiologists. To the best of our knowledge, GazeGNN is the first work that adopts GNN to integrate image and eye-gaze data. Our experiments on the public chest X-ray dataset show that our proposed method exhibits the best classification performance compared to existing methods. The code is available at https://github.com/ukaukaaaa/GazeGNN.
Bin Wang, Hongyi Pan, Armstrong Aboah, Zheyuan Zhang, Elif Keles, Drew Torigian, Baris Turkbey, Elizabeth Krupinski, Jayaram Udupa, Ulas Bagci
2023-05-29T17:01:54Z
http://arxiv.org/abs/2305.18221v3
# GazeGNN: A Gaze-Guided Graph Neural Network for Chest X-ray Classification

###### Abstract

Eye tracking research is important in computer vision because it can help us understand how humans interact with the visual world. Specifically for high-risk applications, such as in medical imaging, eye tracking can help us to comprehend how radiologists and other medical professionals search, analyze, and interpret images for diagnostic and clinical purposes. Hence, the application of eye tracking techniques in disease classification has become increasingly popular in recent years. Contemporary works usually transform gaze information collected by eye tracking devices into visual attention maps (VAMs) to supervise the learning process. However, this is a time-consuming preprocessing step, which stops us from applying eye tracking to radiologists' daily work. To solve this problem, we propose a novel gaze-guided graph neural network (GNN), GazeGNN, to leverage raw eye-gaze data without converting it into VAMs. In GazeGNN, to directly integrate eye gaze into image classification, we create a unified representation graph that models both images and gaze pattern information. With this benefit, we develop a real-time, real-world, end-to-end disease classification algorithm for the first time in the literature. This achievement demonstrates the practicality and feasibility of integrating real-time eye tracking techniques into the daily work of radiologists. To the best of our knowledge, GazeGNN is the first work that adopts GNN to integrate image and eye-gaze data. Our experiments on the public chest X-ray dataset show that our proposed method exhibits the best classification performance compared to existing methods. The code is available.

## 1 Introduction

Image classification has always been a complicated task in the computer vision field. In recent years, because of the explosive development of machine learning techniques, deep learning-based classification algorithms have been proposed to deal with this challenging task [12, 14, 22, 23, 36]. However, compared to classical natural image datasets such as ImageNet-1k [7], medical image datasets are usually characterized by a relatively limited scale and low signal-to-noise ratio [5], which makes disease classification a more challenging task. This problem is particularly evident in chest X-ray classification. This is because chest X-rays have limited soft-tissue contrast and contain a variety of complex anatomical structures that overlap in the planar (2D) view [29]. Many tissues, such as organs, blood vessels, and muscles, have similar intensity values on chest X-ray images [32]. This can easily confuse a deep learning model attempting to distinguish normal from abnormal tissue, making it difficult to accurately identify the true location of abnormalities. Therefore, deep learning algorithms encounter difficulties in accurately identifying abnormalities based solely on chest X-ray images. To overcome this challenge, many recent studies have applied eye-tracking techniques to complement the model with prior knowledge of the location of abnormality regions. Eye-tracking techniques collect eye-gaze data from radiologists during screening procedures [37, 38]. This eye-gaze data represents the search pattern of radiologists for tumors or suspicious lesions on the scans. It indicates the locations where radiologists fixate and saccade on the images during diagnostic screenings.
Since these positions are highly likely to hold abnormalities and potentially important regions, eye-gaze data can provide extra location information about the disease that is often challenging to observe from medical images alone. This supplementary information, a form of high-level attention, can guide the deep learning model to learn the disease features in an interpretable way. Hence, embedding eye-gaze information into diagnostic analysis has become a popular topic in recent years [2, 20, 37, 21]. The prior mainstream works on this topic can be broadly categorized into two approaches. The first one [3, 33, 39, 41, 43, 45] is referred to as the attention consistency architecture, illustrated in Fig. 1(a). It computes an attention map from the model trained on images. At the same time, eye gaze is utilized to supervise the attention map generated by the model. This ensures that the model's attention aligns closely with the attention patterns observed by human experts. However, since this architecture only utilizes eye gaze during training as a supervision source and excludes it during testing, there is a potential risk to classification performance and model robustness. This is related to the inherent variability in eye-gaze data. The eye-gaze data can differ significantly from case to case since each radiologist may have their own unique search patterns. This individualized nature of eye-gaze data may introduce inconsistencies that complicate the learning process for classification models. Therefore, it is challenging to learn a generalized model that captures standardized eye-gaze patterns for one specific disease. In Section 4.4, we verify that the attention consistency architecture exhibits poor model robustness and has a remarkable performance drop when distribution gaps exist in the data. This motivates us to study other structures to integrate eye-gaze information. The second approach [27, 28, 31, 18] is known as the two-stream architecture, as depicted in Fig. 1(b). It consists of two branches dedicated to processing the image and eye-gaze information separately. These branches extract features from their respective sources, which are then concatenated and fed into the classification head. In the end, the predicted probabilities of each disease class are obtained. However, since the eye-gaze data consists of a group of fixation points, which is not a regular grid or sequence representation, the two-stream architecture transforms the eye-gaze data into visual attention maps (VAMs) and then integrates the VAMs with the medical images. It is not ideal for real-world clinical practice because it is time-consuming to generate a VAM for each image during inference (\(\sim\)10s for each image). All the VAMs still need to be prepared in advance before being sent into the network one by one. As a result, this hinders the practical application of eye tracking techniques in the daily clinical workflow. Therefore, to address the problems of the two existing architectures, we develop a new framework illustrated in Fig. 1(c). We consider eye gaze as a model input to enhance model robustness, and we directly utilize the raw eye-gaze data without converting it into VAMs to improve time efficiency. To bypass the usage of VAMs and fully integrate eye gaze with the image, we apply a _graph_ to model multiple types of information in a single representation and adopt the Graph Neural Network (GNN) to learn the graph.
Unlike the now widely adopted Transformer model, GNN is shown to be highly effective even with limited training data, making it a better choice for medical settings [10]. Additionally, GNN has the advantage of capturing the relational information between different parts of the image according to their semantic and categorical attributes [11]. This capability facilitates the learning of relationships between various organs and even the distinction between normal and abnormal regions within the image. To adapt GNN for disease classification, the image is divided into patches to construct a graph. In the graph, each node stands for a feature fused from three types of information: the location of the patch in the image, the local intensity information of the image patch, and the human attention information from the patch. We employ three different embedding techniques to encode this information: (i) positional embedding for the location of the patch, (ii) patch embedding for extracting patch-local intensity values, and (iii) gaze embedding for aggregating the fixation time of radiologists on the patch. Then, for each patch, the three embedding features are combined into a single feature vector. Finally, each node is connected to its \(k\)-nearest neighbors to build the graph. By feeding the graph into a GNN, we obtain the disease classification model.

Figure 1: Illustration of our proposed method and other frameworks that integrate eye-gaze information in medical image classification.

The major contributions of this work are summarized as follows:

1. We propose a novel Gaze-guided GNN framework, **GazeGNN**, which can directly integrate raw eye-gaze data with images, bypassing the need to convert gaze into VAMs. This reduces the inference time of each case from \(\sim\)10s to less than 1s, making it the first study that can be applied to real-world clinical practice due to its efficiency and seamless integration.
2. We leverage the flexibility of a graph network to design a unified graph representation that can encode multiple types of information - the location of the patch in the image, the local intensity information of the image patch, and the human attention information focused on the patch - within a single representation.
3. Rather than using eye-gaze data as a supervision source, we verify that incorporating it as a model input can enhance the model's robustness and reduce the performance drop in scenarios where distribution gaps exist.
4. By evaluating GazeGNN on a public chest X-ray dataset [18], our proposed method achieves state-of-the-art performance on the disease classification task. It outperforms the existing strategies that utilize both image and eye-gaze data from the perspectives of accuracy, robustness, and time efficiency.

## 2 Related Works

## 3 Method

Our framework must jointly encode image and eye-gaze information. The image is regular grid-structured data, while eye-gaze information is a group of scattered points that indicates the attention locations of radiologists during their evaluation process. To integrate both types of information effectively, we employ the following techniques to embed them into feature vectors and construct a graph accordingly.

#### 3.1.1 Patch Embedding

The image input size in this task is \(224\times 224\). Therefore, if we treat each pixel as an individual node, there will be 50,176 nodes in the graph. This is an excessive number and makes the GNN training difficult. Instead, we divide the image into multiple \(15\times 15\) patches and consider each patch as a node.
Given an image \(\mathcal{I}\in\mathbb{R}^{H\times W}\), we split it into \(N\) patches \(\mathcal{P}=\{p_{1},p_{2},...,p_{N}\}\), where \(p_{i}\in\mathbb{R}^{S\times S}\) for \(i=1,2,...,N\). For each patch \(p_{i}\), we extract a feature vector \(\mathbf{x}_{i}^{(I)}\in\mathbb{R}^{D}\) that encodes the local image information, which can be defined as:
\[\mathbf{x}_{i}^{(I)}=F(p_{i}), \tag{1}\]
where \(F(\cdot)\) is the feature extraction method. In this work, we adopt the overlapping patch embedding method [42] to extract the feature vectors from image patches.

#### 3.1.2 Gaze Embedding

Eye-gaze data consists of many scattered points, each of which indicates a location where the radiologist's eyes concentrated for a moment during image reading. More importantly, eye gaze not only provides the location information but also offers the fixation duration for each point. As illustrated in "Eye Gaze" of Fig. 2, there are many red dots of different sizes scattered on the image. A bigger red dot indicates that the radiologist has spent a relatively longer time focusing on the corresponding area. To maintain consistency with the feature vector defined for a single image patch in Eq. (1), we perform time aggregation to get the fixation time for each patch. Assume that there are \(Q\) eye-gaze points \(g_{(m_{1},n_{1})},g_{(m_{2},n_{2})},...,g_{(m_{Q},n_{Q})}\), in which \(g_{(m_{i},n_{i})}\) indicates that the radiologist's eyes fixate at location \((m_{i},n_{i})\) for \(g_{(m_{i},n_{i})}\) seconds. Then, to conduct the time aggregation, we sum up the fixation times of all the eye-gaze points in the patch to represent the attention feature of the patch, i.e., for each patch \(p_{i}\), the gaze embedding is defined as:
\[x_{i}^{(T)}=\sum_{(m_{j},n_{j})\in p_{i}}g_{(m_{j},n_{j})}, \tag{2}\]
where \(i\in[1,N]\) and \(j\in[1,Q]\). Next, we replicate the scalar \(x_{i}^{(T)}\) to the vector \(\mathbf{x}_{i}^{(T)}\in\mathbb{R}^{D}\) for feature fusion.

Figure 2: An overview of our proposed GazeGNN framework. It includes a graph construction based on patch, gaze, and position embeddings and a graph neural network for disease classification.

#### 3.1.3 Position Embedding

During the graph processing in GNN, the features are treated as unordered nodes. To keep the positional information of the original image, we adopt the position embedding method from [11], which contains two steps. The first step is to add a learnable absolute positional encoding vector \(\mathbf{e}_{i}\in\mathbb{R}^{D}\) to the feature vector \(\left(\mathbf{x}_{i}^{(I)}+\mathbf{x}_{i}^{(T)}\right)\). In the second step, we calculate the relative positional distance between nodes as \(\mathbf{e}_{i}^{T}\mathbf{e}_{j}\), and this distance is used to determine the neighbors of a given node in the \(k\)-nearest neighbors algorithm for the graph construction.

#### 3.1.4 Graph Construction

With patch, gaze, and position embeddings, the graph node feature vector \(\mathbf{x}_{i}\) is formulated as:
\[\mathbf{x}_{i}=\mathbf{x}_{i}^{(I)}+\mathbf{x}_{i}^{(T)}+\mathbf{e}_{i}, \tag{3}\]
and these features represent the vertices \(\mathcal{V}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{N}\}\). By calculating the \(k\)-nearest neighbors, the edges of the graph are defined as
\[\mathcal{E}=\{(\mathbf{x}_{i},\mathbf{x}_{j})\mid\mathbf{x}_{j}\in K(\mathbf{x}_{i})\}, \tag{4}\]
where \(K(\mathbf{x}_{i})\) represents the \(k\)-nearest neighbors of \(\mathbf{x}_{i}\). In this way, a graph \(G=\{\mathcal{V},\mathcal{E}\}\) is constructed.
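To make the construction concrete, the sketch below assembles the three embeddings of Eqs. (1)-(4) and the \(k\)-nearest-neighbor edges. The patch size `S`, feature dimension `D`, and `k` are illustrative assumptions, a plain linear projection stands in for the overlapping patch embedding of [42], and in practice the projection and positional vectors would be trained module parameters rather than created inside the function.

```python
import torch

def build_gaze_graph(image, gaze, S=16, D=256, k=9):
    """image: H x W tensor; gaze: Q x 3 tensor of (row, col, fixation_seconds)."""
    H, W = image.shape
    n_cols = W // S
    patches = image.unfold(0, S, S).unfold(1, S, S).reshape(-1, S * S)  # N x S^2
    N = patches.shape[0]

    patch_embed = torch.nn.Linear(S * S, D)        # stand-in for Eq. (1)
    x_img = patch_embed(patches)

    # Eq. (2): sum the fixation time of all gaze points inside each patch,
    # then replicate the scalar along the feature dimension.
    t = torch.zeros(N)
    idx = (gaze[:, 0].long() // S) * n_cols + (gaze[:, 1].long() // S)
    t.index_add_(0, idx.clamp(0, N - 1), gaze[:, 2])
    x_gaze = t.unsqueeze(1).expand(-1, D)

    pos = torch.nn.Parameter(torch.randn(N, D))    # learnable positional encodings
    x = x_img + x_gaze + pos                       # Eq. (3)

    # Eq. (4): connect each node to its k nearest neighbors, using the relative
    # positional similarity e_i^T e_j described in Section 3.1.3 (self included).
    sim = pos @ pos.t()
    edges = sim.topk(k, dim=-1).indices            # N x k neighbor indices
    return x, edges
```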
### 3.2 Graph Neural Network (GNN)

As illustrated in Fig. 3, the graph neural network consists of \(L\) graph processing blocks [11], an average pooling layer, and a graph classification head. Each graph processing block consists of multiple fully-connected (FC) layers and a graph convolutional layer [24]. Suppose the graph is represented as \(N\) \(D\)-dimensional feature vectors. Given an input graph \(\mathbf{X}^{t}=[\mathbf{x}_{1}^{t},\mathbf{x}_{2}^{t},...,\mathbf{x}_{N}^{t}]\in\mathbb{R}^{N\times D}\) at block \(t\), a graph processing block outputs \(\mathbf{Z}^{t}\in\mathbb{R}^{N\times D}\) as
\[\mathbf{Y}^{t}=\Psi_{2}\left(\Phi\left(\Psi_{1}\left(\mathbf{X}^{t}\right)\right)\right)+\mathbf{X}^{t}, \tag{5}\]
\[\mathbf{Z}^{t}=\Psi_{4}\left(\Psi_{3}\left(\mathbf{Y}^{t}\right)\right)+\mathbf{Y}^{t}, \tag{6}\]
where \(\Phi\) denotes the graph convolution operation and \(\Psi\) indicates an FC layer. Here, we omit the activation and batch normalization layers. Let \(\mathbf{Y}^{t}\in\mathbb{R}^{N\times D}\) stand for the intermediate output after the first shortcut connection and \(\mathbf{R}^{t}=\Psi_{1}(\mathbf{X}^{t})\) stand for the input of the graph convolutional layer. The graph convolution \(\mathbf{S}^{t}=\Phi(\mathbf{R}^{t})\) is defined as
\[\mathbf{s}_{i}^{t}=\mathbf{W}\cdot\max\left(\left\{\mathbf{r}_{i}^{t}-\mathbf{r}_{j}^{t}\mid j\in K\left(\mathbf{r}_{i}^{t}\right)\right\}\right), \tag{7}\]
where \(\mathbf{S}^{t}=[\mathbf{s}_{1}^{t},\mathbf{s}_{2}^{t},...,\mathbf{s}_{N}^{t}]\in\mathbb{R}^{N\times D}\) and \(\mathbf{R}^{t}=[\mathbf{r}_{1}^{t},\mathbf{r}_{2}^{t},...,\mathbf{r}_{N}^{t}]\in\mathbb{R}^{N\times D}\). \(\mathbf{W}\) is a trainable weight matrix used to update the node feature. The max term is the aggregation function that aggregates features from the \(i\)-th node's neighbors. Therefore, the graph convolution aggregates feature information from a node's neighbors and updates it into the node feature. In the final step, the classification head is designed as a fully-connected layer with the softmax function. It outputs the predicted probability of each category.

Figure 3: The architecture of the proposed Graph Neural Network (GNN).

## 4 Experiments

Our experiments are implemented on a workstation with an Intel Xeon W-2255 CPU and an NVIDIA RTX 3090 GPU using PyTorch. We train GazeGNN using the AdamW optimizer [26] with a learning rate of 0.0001 and a batch size of 32. The checkpoint model with the best testing accuracy is saved during training. Cross-entropy loss is used as the classification loss function. In the following experiments, we adopt [18] as the implementation of the two-stream architecture and [41] as the implementation of the attention consistency architecture.

### 4.1 Dataset Preparation

The experiments in this paper are carried out on a public chest X-ray dataset [18], which contains 1083 cases from the MIMIC-CXR dataset [17]. For each case, a gray-scale X-ray image with a size of around \(3000\times 3000\), eye-gaze data, and ground-truth classification labels are provided. These cases are classified into three categories: Normal, Congestive Heart Failure (CHF), and Pneumonia. For the comparison experiments, we generate the static VAMs from the eye-gaze data using the data post-processing method described in [18]. The model performance is evaluated through multiple metrics, including accuracy, the area under the receiver operating characteristic curve (AUC), precision, recall, and F1-score. The higher these metrics are, the better the model is.
For all the experiments, we apply the same data augmentation techniques, including random resized cropping to \(224\times 224\), random horizontal flipping, and random rotation by up to \(5^{\circ}\).

### 4.2 Improving Disease Classification Accuracy

We compare GazeGNN with the state-of-the-art methods, including the temporal model [18], the U-Net+Gaze model [18], and the DenseNet121-based model [39]. These methods adopt the official training and test datasets, so we directly include their reported results in this paper. We also compare GazeGNN with some other gaze-guided methods that have not been validated on this dataset yet, or that have used this dataset but did not follow the official splitting strategy. These methods include GazeMTL [33], IAA [8], and EffNet+GG-CAM [45]. To make the comparison fair, we train these methods under the same setting as GazeGNN. The quantitative results are summarized in Table 1. Although we primarily compare the accuracy metric in this work, because we save the checkpoint models with the best testing accuracy, it is noted that the proposed GazeGNN still achieves the best performance on all the evaluation metrics. Moreover, Fig. 4 shows the receiver operating characteristic (ROC) curves of the comparison method with the best average AUC and our GazeGNN. The ROC curves of the other compared methods are presented in the supplemental materials.

### 4.3 Improving Inference Speed

Eye-gaze data is composed of a group of scattered points indicating the location coordinates of the radiologist's gaze on the medical image. It is not a regular grid or sequential data format. To align the eye-gaze data with the medical image, existing methods typically transform the eye gaze into VAMs for training purposes. The generation of a VAM for each image can be a time-consuming process. There are two approaches to accomplish this step. One method is to apply a Gaussian distribution to each eye-gaze point and aggregate the individual distributions to obtain the final VAM. The other approach is to apply a Gaussian filter kernel to smooth the eye-gaze intensity values (duration time at a certain image location) over the whole image. Due to the large size of chest X-rays (approximately \(2500\times 3000\)) and the considerable number of eye-gaze points, generating VAMs for each image requires substantial time. Consequently, existing methods often pre-generate all VAMs before training or inference. This is not ideal when we want to integrate eye gaze into radiologists' daily work. In our method, on the other hand, we bypass the process of generating VAMs and propose a novel technique, called time aggregation with gaze embedding, to conduct eye-gaze integration.
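The contrast in preprocessing cost can be sketched as follows. The patch size and the use of `scipy.ndimage.gaussian_filter` for the VAM route are illustrative assumptions; \(\sigma=150\) is the value used for the attention consistency VAMs below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

H, W, S = 3000, 3000, 16               # full-resolution X-ray; S is an illustrative patch size
fix_map = np.zeros((H, W), dtype=np.float32)
# ... accumulate each gaze point's fixation duration at its pixel location ...

# VAM route: smooth the full-resolution fixation map. This touches every pixel
# of a ~3000 x 3000 image and dominates the ~10 s of preprocessing per case.
vam = gaussian_filter(fix_map, sigma=150)

# Time-aggregation route (Eq. (2)): a single block-sum over patches, producing
# one scalar per graph node with a negligible number of operations.
agg = fix_map[: H // S * S, : W // S * S].reshape(H // S, S, W // S, S).sum(axis=(1, 3))
```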
Due to the simple calculation inside the time aggregation, we significantly reduce the inference time, as shown in Table 2. We compare the inference speed of our method and the two current mainstream architectures. We test on 100 cases and calculate the average processing time as the inference time. For the attention consistency architecture, a Gaussian filter kernel with standard deviation \(\sigma=150\) is applied to generate the VAM for each case. From the results shown in Table 2, we find that the two-stream architecture takes the longest inference time, around 10 seconds. This is mainly due to the time-consuming process of VAM generation. It is worth noting that GazeGNN achieves an inference time comparable to the attention consistency architecture. The attention consistency architecture does not require gaze input in the inference stage, while GazeGNN involves the eye gaze. This demonstrates the efficiency of eye-gaze integration in our architecture, which points to the feasibility of bringing real-time eye-tracking techniques into radiology rooms.

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c}
\hline
Method & Accuracy & \multicolumn{4}{c|}{AUC} & Precision & Recall & F1-Score \\
 & & Normal & CHF & Pneumonia & Average & & & \\
\hline
Temporal Model [18] & - & 0.890 & 0.850 & 0.680 & 0.810 & - & - & - \\
U-Net+Gaze [18] & - & 0.910 & 0.890 & 0.790 & 0.870 & - & - & - \\
DenseNet121+Gaze [39] & - & - & - & - & 0.836 & - & - & 0.270 \\
GazeMTL [33] & 78.50\% & 0.915 & 0.913 & 0.833 & 0.887 & 0.786 & 0.781 & 0.779 \\
IAA [8] & 78.50\% & 0.922 & 0.902 & 0.875 & 0.900 & 0.780 & 0.774 & 0.776 \\
EffNet+GG-CAM [45] & 77.57\% & 0.906 & 0.914 & 0.843 & 0.888 & 0.770 & 0.772 & 0.770 \\
\hline
**GazeGNN** & **83.18\%** & **0.938** & **0.916** & **0.914** & **0.923** & **0.839** & **0.821** & **0.823** \\
\hline
\end{tabular}
\end{table}
Table 1: Classification results on the Chest X-Ray dataset [18].

\begin{table}
\begin{tabular}{l|c|c}
\hline
Method & Gaze & Inference Time \\
\hline
GazeGNN & ✓ & **0.353s** \\
Two-stream Architecture & ✓ & 9.246s \\
Attention Consistency Architecture & ✗ & 0.294s \\
\hline
\end{tabular}
\end{table}
Table 2: Comparison of inference speed.

Figure 4: ROC curves and AUC scores of our method and the state-of-the-art method.

### 4.4 Improving Model Robustness

In the attention consistency architecture, the eye-gaze data is considered a supervision source during training, as illustrated in Fig. 1. The inference stage of the attention consistency architecture does not involve eye-gaze information. This requires the model to learn the eye-gaze pattern for certain diseases. However, the eye-gaze data differs case by case, and each radiologist has their own search patterns during image reading. Further, even a second reading of the same scan by the same radiologist may show differences in eye-gaze patterns. Therefore, learning standardized eye-gaze patterns for a specific disease is challenging, and the resulting model is likely not generalizable. To fully utilize the power of eye-gaze information, we postulate that the model should incorporate gaze input in the inference stage. In this way, when encountering new data that exhibits a distribution shift from the original training dataset, we can still leverage the eye-gaze data to provide the model with the location information of the potential abnormality. To prove this assumption, we introduce random noise to the testing dataset, creating a distribution gap from the original training dataset.
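A minimal sketch of this perturbation follows; additive Gaussian noise is one plausible choice, as the paper does not specify the exact noise model or strength.

```python
import torch

def perturb(images, std=0.1):
    """Corrupt test images with additive Gaussian noise to create a
    distribution gap from the training data."""
    noisy = images + std * torch.randn_like(images)
    return noisy.clamp(0.0, 1.0)  # assumes intensities normalized to [0, 1]
```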
We then evaluate our method and the attention consistency architecture (ACA) on the original and noisy testing datasets. Based on the results presented in Table 4, it is evident that the attention consistency architecture exhibits a larger performance drop compared to our proposed method, validating our previous assumption.

### 4.5 Effectiveness of GNN

After combining the position, gaze, and patch embeddings, we obtain a single feature that represents both the image and the eye gaze. In this work, the feature is used to construct a graph and processed by a GNN, but it also works with other backbone architectures. We employ strong backbone networks, including DenseNet, ResNet, and Swin Transformer, and compare their performance with the GNN. The performance of our method across different backbones is shown in Table 3. The Transformer backbone does not exhibit the best performance. This might be because it suffers from the limited data. In addition, we see that our method with the GNN achieves the best results across all the evaluation metrics. This can be attributed to two key factors. First, unlike the Transformer model, the GNN demonstrates remarkable effectiveness even when presented with limited training data. Second, the GNN can capture and comprehend the intricate relationships between patches through graph learning.

### 4.6 Ablation Study of Gaze Usage

To study the effectiveness of the gaze information, we remove the gaze embedding and only fuse the features from the patch embedding and the position embedding. In this way, gaze information is not used. The comparison is presented in Table 5 and the supplementary material.

\begin{table}
\begin{tabular}{l|c|c|c|c|c}
\hline
Method & \multicolumn{5}{c}{Performance Drop \(\downarrow\)} \\
 & Accuracy & Precision & Recall & F1-Score & Average AUC \\
\hline
GazeGNN & **2.78\%** & **1.10\%** & **2.87\%** & **3.97\%** & **0.20\%** \\
ACA & 13.79\% & 15.30\% & 15.63\% & 18.38\% & 4.86\% \\
\hline
\end{tabular}
\end{table}
Table 4: Comparison of performance drop when testing on the dataset with distribution shift.

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c}
\hline
Backbone & Accuracy & \multicolumn{4}{c|}{AUC} & Precision & Recall & F1-Score \\
 & & Normal & CHF & Pneumonia & Average & & & \\
\hline
DenseNet121 [13] & 71.03\% & 0.903 & 0.855 & 0.620 & 0.793 & 0.696 & 0.689 & 0.689 \\
ResNet18 [12] & 71.96\% & 0.906 & 0.820 & 0.687 & 0.804 & 0.706 & 0.706 & 0.705 \\
ResNet50 [12] & 70.09\% & 0.898 & 0.818 & 0.663 & 0.793 & 0.685 & 0.685 & 0.684 \\
ResNet101 [12] & 71.03\% & 0.852 & 0.862 & 0.756 & 0.823 & 0.703 & 0.705 & 0.703 \\
Swin-T [25] & 77.57\% & 0.925 & 0.898 & 0.732 & 0.852 & 0.762 & 0.760 & 0.755 \\
Swin-S [25] & 74.77\% & 0.911 & 0.873 & 0.728 & 0.837 & 0.733 & 0.735 & 0.733 \\
Swin-B [25] & 76.64\% & 0.907 & 0.880 & 0.770 & 0.852 & 0.771 & 0.754 & 0.748 \\
\hline
**GNN** & **83.18\%** & **0.938** & **0.916** & **0.914** & **0.923** & **0.839** & **0.821** & **0.823** \\
\hline
\end{tabular}
\end{table}
Table 3: Performance comparison of our method across different backbones.

Without the gaze information,
In addition, we visualize the model's intermediate features to show the power of eye-gaze integration. We use Grad-CAM [34] to generate the attention map from the trained model. From Fig. 5, it is observed that before the eye-gaze integration, the model fails to focus on the abnormal regions, resulting in incorrect classification decisions. However, when eye-gaze is introduced, the model's attention shifts to the regions highlighted by radiologists. This indicates the guidance of eye-gaze enhances the model's capability to achieve more accurate abnormality localization. ## 5 Conclusion In this study, we propose a novel gaze-guided graph neural network, GazeGNN, to perform the disease classification task. With the flexibility of graph representation, GazeGNN can utilize the raw eye-gaze information directly by embedding it with the image patch and the position information into the graph nodes. Therefore, this method avoids generating the VAMs that are required in mainstream gaze-guided methods. With this benefit, we develop a real-time, end-to-end disease classification algorithm without preparing the visual attention maps in advance. We show that GazeGNN can produce a significantly better performance than existing methods under the same training strategy. This proves the feasibility of bringing real-time eye tracking techniques to radiologists' daily work. Figure 5: Gaze map and Grad-CAM based attention maps with and without eye-gaze data are shown. Under the images, the original label of the chest X-ray is represented by the black color, while the red and green labels indicate incorrect and correct model predictions, respectively. \begin{table} \begin{tabular}{l|c|c|c|c|c} \hline Gaze & Accuracy & Average AUC & Precision & Recall & F1-Score \\ \hline ✓ & **83.18\%** & **0.923** & **0.839** & **0.821** & **0.823** \\ ✗ & 80.37\% & 0.910 & 0.800 & 0.805 & 0.801 \\ \hline \end{tabular} \end{table} Table 5: Ablation study on GazeGNN with/without the eye-gaze information.
2307.04065
Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks
We present a non-convex optimization algorithm metaheuristic, based on the training of a deep generative network, which enables effective searching within continuous, ultra-high dimensional landscapes. During network training, populations of sampled local gradients are utilized within a customized loss function to evolve the network output distribution function towards one peak at high-performing optima. The deep network architecture is tailored to support progressive growth over the course of training, which allows the algorithm to manage the curse of dimensionality characteristic of high-dimensional landscapes. We apply our concept to a range of standard optimization problems with dimensions as high as one thousand and show that our method performs better with fewer function evaluations compared to state-of-the-art algorithm benchmarks. We also discuss the role of deep network over-parameterization, loss function engineering, and proper network architecture selection in optimization, and why the required batch size of sampled local gradients is independent of problem dimension. These concepts form the foundation for a new class of algorithms that utilize customizable and expressive deep generative networks to solve non-convex optimization problems.
Jiaqi Jiang, Jonathan A. Fan
2023-07-09T00:05:59Z
http://arxiv.org/abs/2307.04065v1
# Large scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks

###### Abstract

We present a non-convex optimization algorithm metaheuristic, based on the training of a deep generative network, which enables effective searching within continuous, ultra-high dimensional landscapes. During network training, populations of sampled local gradients are utilized within a customized loss function to evolve the network output distribution function towards one peaked at high performing optima. The deep network architecture is tailored to support progressive growth over the course of training, which allows the algorithm to manage the curse of dimensionality characteristic of high dimensional landscapes. We apply our concept to a range of standard optimization problems with dimensions as high as one thousand and show that our method performs better with fewer function evaluations compared to state-of-the-art algorithm benchmarks. We also discuss the role of deep network over-parameterization, loss function engineering, and proper network architecture selection in optimization, and why the required batch size of sampled local gradients is independent of problem dimension. These concepts form the foundation for a new class of algorithms that utilize customizable and expressive deep generative networks to solve non-convex optimization problems.

## Introduction

High dimensional, non-convex optimization problems are pervasive in many scientific and engineering domains, including computational materials science [3, 1], electromagnetics [1, 2], circuit design [1], process engineering [13], and systems biology [1]. These problems are known to be very difficult to solve because they are NP-hard, and algorithms aiming to definitively search for the global optimum, such as branch and bound methods, cannot practically scale to high dimensional systems. As such, various algorithm heuristics have been developed, ranging from evolutionary metaheuristics to Bayesian optimization [14, 15], which use judicious sampling of the landscape to identify high performing optima. In all cases, it remains challenging to apply these algorithms to ultra-high dimensional spaces with dimensions of hundreds to thousands due to the curse of dimensionality.

The explosion of interest and research in deep neural networks over the last decade has presented new opportunities in optimization, as the process of training a deep network involves solving a high dimensional optimization problem. To this end, gradient-based optimization metaheuristics termed global topology optimization networks (GLOnets) [10] were recently proposed that use the training of a deep generative network to perform non-convex optimization. The concept applies to optimization problems where \(\mathbf{x}\) is a \(d\)-dimensional variable and the goal is to maximize the smoothly varying, non-convex objective function \(f(\mathbf{x})\). To run the metaheuristic, the generative network is first initialized so that it outputs a distribution of \(\mathbf{x}\) values that spans the full optimization landscape. Over the course of network training, this distribution is sampled, \(f(\mathbf{x})\) and local gradients are computed for these sampled points, and these values are incorporated into a customized loss function and backpropagated to evolve and narrow the distribution around high performing optima. Initial demonstrations indicate that GLOnets can perform better than standard gradient-based optimizers and global search heuristics for various non-convex optimization problems. However, the algorithm is unable to extend to high dimensional problems in its current form, and the lack of interpretability of this black box algorithm has made it difficult to understand if and how it can adapt to more general problems, including high dimensional problems.

In this Article, we introduce the progressive growing GLOnet (PG-GLOnet), in which optimization within an ultra-high dimensional non-convex landscape is mediated through the training of a progressively growing deep generative network. Our tailoring of the network architecture for this optimization task serves to incorporate knowledge and assumptions about the optimization landscape into the metaheuristic, which is a requirement for tractably navigating ultra-high dimensional landscapes. We also explain how the algorithm works to smooth the design landscape, how evaluation of the loss function serves as a gradient estimation calculation, and why the number of required function evaluations is independent of problem dimension. With standard benchmarking test functions, we show that our concept performs better than state-of-the-art algorithms with fewer function evaluations for one thousand dimensional problems. We anticipate that the customization of network architectures within the GLOnets framework will seed new connections between deep learning and optimization.

### Progressive Growing GLOnets Algorithm and Benchmarking

The PG-GLOnet concept builds on the foundation of the original GLOnet algorithm, which we briefly review here. The optimization problem to be solved with GLOnets can be written in the following form:
\[\max_{\mathbf{x}}f(\mathbf{x}) \tag{1}\]
where \(f(\mathbf{x})\) is a non-convex, continuous objective function with feasible gradients. With GLOnets, this optimization problem is indirectly solved through the training of a generative neural network (Figure 1a), where the input is a \(d\)-dimensional random variable \(\mathbf{z}\) with a standard normal distribution and the output is a distribution of \(\mathbf{x}\)'s. The generator therefore serves to map \(\mathbf{z}\) onto \(\mathbf{x}=G(\mathbf{z};\phi)\) with a distribution \(P(\mathbf{x};\phi)\), where \(\phi\) denotes the trainable neural network parameters. The optimization objective for the generator is defined as:
\[L=\max_{\phi}\mathop{\mathbb{E}}_{\mathbf{x}\sim P(\mathbf{x};\phi)}\exp\left[\frac{f(\mathbf{x})}{T}\right] \tag{2}\]
The distribution that maximizes this expected value is a delta function centered at the global optimum, and as such, an ideally trained generator will produce a narrow distribution centered at the global optimum, thereby solving the original optimization problem. The use of the exponential function and the hyperparameter \(T\) in the optimization objective further enhance the valuation of the global optimum, and more generally high performing optima, in the design space. Generator training is consistent with conventional deep learning training methods: gradients of the objective function with respect to network parameters, \(\nabla_{\phi}\mathbb{E}f\), are calculated through backpropagation, and they are used to iteratively optimize \(\phi\) using standard gradient-based methods. In practice, the objective function is approximated by a batch of \(M\) samples.
Initial demonstrations indicate that GLOnets can perform better than standard gradient-based optimizers and global search heuristics for various non-convex optimization problems. However it is unable to extend to high dimensional problems in its current form, and the lack of interpretability with this black box algorithm has made it difficult to understand if and how it can to adapt to more general problems, including high dimensional problems. In this Article, we introduce the progressive growing GLOnet (PG-GLOnet) in which optimization within an ultra-high dimensional non-convex landscape is mediated through the training of a progressive growing deep generative network. Our tailoring of the network architecture for this optimization task serves to incorporate knowledge and assumptions about the optimization landscape into the metaheuristic, which is a requirement for tractably navigating ultra-high dimensional landscapes. We also explain how the algorithm works to smoothen the design landscape, how evaluation of the loss function serves as a gradient estimation calculation, and why the number of required functional evaluations is independent of problem dimension. With standard benchmarking test functions, we show that our concept performs better than state-of-the-art algorithms with fewer functional evaluations for one thousand dimensional problems. We anticipate that the customization of network architectures within the GLOnets framework will seed new connections between deep learning and optimization. ### Progressive Growing GLOnets Algorithm and Benchmarking The PG-GLOnet concept builds on the foundation of the original GLOnet algorithm, which we briefly review here. The optimization problem to be solved with GLOnets can be written in the following form: \[\max_{\mathbf{x}}f(\mathbf{x}) \tag{1}\] where \(f(\mathbf{x})\) is a non-convex, continuous objective function with feasible gradients. With GLOnets, this optimization problem is indirectly solved through the training of a general neural network (Figure 1a), where the input is a \(d\)-dimensional random variable \(\mathbf{z}\) with a standard normal distribution and the output is a distribution of \(\mathbf{x}\)'s. The generator therefore serves to map \(\mathbf{z}\) onto \(\mathbf{x}=G(\mathbf{z};\phi)\) with a distribution \(P(\mathbf{x};\phi)\), where \(\phi\) denotes the trainable neural network parameters. The optimization objective for the generator is defined as: \[L=\max_{\phi}\mathop{\mathbb{E}}_{\mathbf{x}\sim P(\mathbf{x};\phi)}\exp\left[ \frac{f(\mathbf{x})}{T}\right] \tag{2}\] The distribution that maximizes this expected value is a delta function centered at the global optimum, and as such, an ideally trained generator will produce a narrow distribution centered at the global optimum, thereby solving the original optimization problem. The use of the exponential function and the hyperparameter \(T\) in the optimization objective further enhance the valuation of the global optimum, and more generally high performing optima, in the design space. Generator training is consistent with conventional deep learning training methods: gradients of the objective function with respect to network parameters, \(\nabla_{\phi}\mathbb{E}f\), are calculated through backpropagation, and they are used to iteratively optimize \(\phi\) using standard gradient-based methods. In practice, the objective function is approximated by a batch of \(M\) samples. 
\(P(\mathbf{x};\phi)\), on the other hand, is typically implicit and cannot be directly sampled. To circumvent this issue, we draw \(M\) samples \(\{\mathbf{z}^{(m)}\}_{m=1}^{M}\) from the standard normal distribution, transform them to \(\{\mathbf{x}^{(m)}\}_{m=1}^{M}\), and then approximate \(L\) and its gradient \(\nabla_{\phi}L\) with respect to network parameters \(\phi\): \[L\approx\frac{1}{M}\sum_{m=1}^{M}\exp\left[\frac{f(\mathbf{x}^{(m)})}{T}\right] \tag{3}\] \[\nabla_{\phi}L\approx\frac{1}{M}\sum_{m=1}^{M}\frac{1}{T}\exp\left[\frac{f( \mathbf{x}^{(m)})}{T}\right]\nabla_{\mathbf{x}}f\cdot D_{\phi}\mathbf{x}^{(m)} \tag{4}\] \(\nabla_{\mathbf{x}}f=[\frac{\partial f}{\partial x_{1}},\frac{\partial f}{ \partial x_{2}},\dots,\frac{\partial f}{\partial x_{d}}]\) are the gradients of \(f(\mathbf{x})\) and \(D_{\phi}\mathbf{x}=\frac{\partial(x_{1},x_{2},\dots)}{\partial(\phi_{1},\phi _{2},\dots)}\) is the Jacobian matrix. Evaluation of \(f(\mathbf{x})\) is usually performed by a numerical simulator and the gradient of \(f(\mathbf{x})\) can be calculated explicitly or by auto-differentiation for analytic expressions, or by the adjoint variables method (AVM). In the initial conception of GLOnet, which we term FC-GLOnet, the generative network was a fully connected deep network and was capable of effectively addressing optimization problems with a modest number of dimensions. However, it was found to be ineffective at optimizing within very high dimensional landscapes due to the curse of dimensionality, which makes a direct search for the global optimum within a full, high dimensional landscape an intractable proposition. We therefore propose the PG-GLOnet, which utilizes a generative network that outputs a distribution that gradually grows from a coarse, low dimensional space to a fine, high dimensional space. By tailoring the network architecture in this way, we regularize the optimization process to take place over differing degrees of optimization landscape smoothing, enabling our search process to be computationally efficient and tractable. The PG-GLOnet generator architecture is shown in Figure 1b. The progressive growth concept is inspired by progressively growing GANs [1] that have been developed in the computer vision community to process images with increasing spatial resolution during network training. The input to the network is a \(D\)-dimensional random vector \(\mathbf{x}^{0}\), and its dimension is much smaller than that of \(\mathbf{x}\). With \(L\) growing blocks, the network simultaneously transforms and increases the dimensionality of the input vector, and its output is a \(2^{L}D\) dimensional vector \(\mathbf{x}^{L}\) that matches the dimensionality of \(\mathbf{x}\). In each growing block, the input vector dimension is doubled in two ways, by direct upsampling and by a linear transform. The resulting outputs are combined together and further transformed using a non-linear activation function: \[\mathbf{x}^{out}_{2d\times 1}=q\left((1-\alpha)\begin{pmatrix}\mathbf{x}^{in}_{d \times 1}\\ \mathbf{x}^{in}_{d\times 1}\end{pmatrix}+\alpha\ A_{2d\times d}\cdot\mathbf{x}^{in}_{d \times 1}\right) \tag{5}\] \(A_{2d\times d}\) are trainable parameters in the linear transformation branch, \(q(\cdot)\) is a non-linear activation function, and \(\alpha\) is a hyperparameter that is manually tuned over the course of optimization. 
Initially, \(\alpha\)'s for all of the growing blocks in the network are set to \(0\), such that the vector outputted by each block has the same effective dimensionality as its input vector. The network output \(\mathbf{x}^{L}\) therefore has an effective dimensionality that matches the dimensionality of the input \(\mathbf{x}^{0}\). As \(\alpha\) is increased for a particular growing block, its output vector becomes dominated by its linear transformation branch, as opposed to its upsampling branch, and it has an effective dimensionality that exceeds and eventually doubles that of the growing block input vector. The effective dimensionality of \(\mathbf{x}^{L}\) therefore arises from the aggregation of effective dimensionality increases from all growing blocks. To control the effective dimensionality of \(\mathbf{x}^{L}\) over the course of PG-GLOnet training, \(\alpha\) is manually changed from 0 to 1 sequentially from the left to right blocks (bottom of Figure 1b). At the end of PG-GLOnet training, \(\alpha\) is \(1\) for all growing blocks and the effective dimensionality of \(\mathbf{x}^{L}\) matches that of \(\mathbf{x}\). To evaluate the efficacy of PG-GLOnet in solving high dimensional non-convex optimization problems, we perform a series of benchmark numerical experiments where we optimize a set of standard test functions with PG-GLOnet and other established algorithms. In the first set of experiments, we consider a test function that can be tuned from convex to non-convex and compare PG-GLOnet with ADAM, a well-known momentum-based gradient descent algorithm that is typically more effective than gradient descent. ADAM is a local optimization algorithm and performs well on convex objective functions but can get trapped within local optima for non-convex functions. Our test function is a modified Rastrigin function defined as follows: \[f(\mathbf{x};\rho)=\rho d+\sum_{i=1}^{d}[x_{i}^{2}-\rho\cos(2\pi x_{i})] \tag{6}\] \(\rho\) is a hyperparameter that specifies the amplitude of the sinusoidal modulation within the function. When \(\rho=0\), \(f(\mathbf{x};\rho)=\sum_{i=1}^{d}x_{i}^{2}\) and is a convex function. As \(\rho\) increases, more local optima emerge and these optima become separated by larger magnitude barriers. We first consider the computational cost required by ADAM and PG-GLOnet to find the global optimum of a two dimensional modified Rastrigin function as a function of \(\rho\). For ADAM, we run 10000 optimizations for 200 iterations with random starting points, and for PG-GLOnet, we run the algorithm 10 times with a batch size of 20 for 200 total iterations. In both cases, the algorithms terminate early when they output results within \(10^{-3}\) of the global optimum, and computational cost is quantified as the average number of function evaluations required to find the global optimum. The results are summarized in Figure 2(a) and indicate that for convex or nearly convex optimization landscapes, ADAM is more efficient at finding the global optimum. This efficiency arises because ADAM is a specially tailored local optimizer that is well suited for these types of problems, while PG-GLOnet always requires relatively large batch sizes and more iterations to converge. As \(\rho\) increases, orders-of-magnitude more ADAM evaluations are required to search for the global optimum due to trapping within local optima in the design landscape.
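This test function translates directly into code; a vectorized transcription of Eq. 6 (our own) is:

```python
import torch

def modified_rastrigin(x, rho):
    """Eq. 6: f(x; rho) = rho*d + sum_i [x_i^2 - rho*cos(2*pi*x_i)].

    x has shape (batch, d); the global optimum is f = 0 at x = 0.
    """
    d = x.shape[-1]
    return rho * d + (x ** 2 - rho * torch.cos(2 * torch.pi * x)).sum(dim=-1)
```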
The computational cost for PG-GLOnet, on the other hand, does not increase nearly as rapidly due to its ability to navigate non-convex landscapes, and it is ten times more efficient than ADAM for \(\rho\) greater than \(3\). We also perform benchmarks between ADAM and PG-GLOnet for a ten dimensional problem. Due to ADAM's inability to converge to the global optimum in non-convex, high dimensional landscapes, we perform this benchmark differently and compare the best optimal value found by ADAM and PG-GLOnet given the same amount of computational resources. Here, we run ADAM for 200 iterations with 20 random starting points and PG-GLOnet for 200 iterations with a batch size of 20. We run these benchmark experiments ten times and average the best values from each experiment, and the results are reported in Figure 2(b). We find that PG-GLOnet is able to consistently find solutions at or near the global optimum for all values of \(\rho\), but the local optimizer gets progressively worse as \(\rho\) increases. In our next set of benchmark experiments, we compare PG-GLOnet with the covariance matrix adaptation evolution strategy (CMA-ES), which is an established evolutionary algorithm used to perform population-based global searching of an optimization landscape. Compared to ADAM, it is more suitable for performing non-convex optimization.

Figure 1: (a) Framework of the GLOnet algorithm. A deep generative network produces a distribution of design variables \(\mathbf{x}\) and the distribution is narrowed around high performing optima by backpropagation. (b) PG-GLOnet generator architecture. A set of growing blocks are implemented and activated over the course of network training to enable optimization in very high-dimensional landscapes.

We consider two standard non-convex testing functions with many local optima, the Rastrigin and Schwefel functions (defined in the Appendix). Plots in Figures 2c and 2d show the average number of function evaluations required to find the global optimum as a function of problem dimension \(d\). The computational cost of CMA-ES increases exponentially as the problem dimension becomes larger, indicating the intractability of applying this algorithm to ultra-high dimensional problems. For the Schwefel function, we limited our CMA-ES benchmarking experiments to a problem dimension of 20 due to this scaling trend. PG-GLOnet, on the other hand, has a relatively small computational cost that is not sensitive to the dimension. In fact, the same neural network architecture and batch size are used for all problems. A more detailed discussion as to the origins of problem dimension and batch size decoupling is provided in the Discussion section. Finally, we benchmark PG-GLOnet with state-of-the-art algorithms on testing functions proposed by the CEC'2013 Special Session and Competition on Large-Scale Global Optimization (LSGO) [11]. We consider the six non-convex benchmark functions from the competition, which involve variations and combinations of the Rastrigin and Ackley functions and are defined in the Appendix. These benchmark functions were designed to incorporate a number of challenging features for optimization, including:

1. **High dimensions.** The design space of an optimization problem grows exponentially as the dimension of the design variables increases. These benchmark functions utilize one thousand dimensional landscapes.
2. **Functions with non-separable subcomponents.** The whole design variable is decomposed into several subcomponents and dimensions within each subcomponent are strongly coupled together.
3. **Imbalance in the contribution of subcomponents.** The contribution of a subcomponent is magnified or dampened by a coefficient.
4. **Non-linear transformations to the base functions.** Three transformations are applied to break the symmetry and introduce irregularity into the landscape: (1) ill-conditioning, (2) irregularities, and (3) symmetry breaking.

To globally search these landscapes for the global optimum, we perform a two-step optimization procedure. First, we run PG-GLOnet for each benchmark function for 200 iterations and a batch size of 100, from which our generative network outputs a narrow distribution of \(\mathbf{x}\)'s in promising regions of the optimization landscape. We then sample this distribution 100 times and perform local gradient descent on each of these design variables for an additional 200 iterations. The best function values found by PG-GLOnet plus local gradient descent are reported in Table 1, together with results produced from FC-GLOnet plus local gradient descent, local conjugate gradient descent, and two state-of-the-art non-convex optimization algorithms that were the best performing algorithms in the most recent LSGO contest: CC-RDG3, which is a divide-and-conquer method [23], and DGSC, which is a differential group method utilizing spectral clustering [11]. We observe that PG-GLOnet with local gradient descent refinement is able to significantly outperform the other algorithms for the majority of test functions. In addition, the total computational cost of the two-step optimization procedure is only \(4\times 10^{4}\) function evaluations, while CC-RDG3 and DGSC require \(3\times 10^{6}\) function evaluations.

Figure 2: Benchmark results for PG-GLOnet. (a,b) Benchmark of PG-GLOnet with ADAM for a modified Rastrigin test function that can be tuned from convex to non-convex. (a) Function evaluations required by both algorithms for a two dimensional Rastrigin test function with differing \(\rho\). Plots of the test function for \(\rho=0\) and \(\rho=10\) are shown on the left. (b) Optimal values achieved by PG-GLOnet and ADAM for a ten dimensional Rastrigin test function with differing \(\rho\). (c,d) Benchmark of PG-GLOnet with CMA-ES for modified (c) Rastrigin and (d) Schwefel test functions. Both plots show the number of function evaluations required to find the global optimum as a function of test function dimension.

## Discussion

We discuss the origins of the efficiency and efficacy of PG-GLOnet in solving ultra-high dimensional non-convex optimization problems. First, we examine how the generic GLOnet algorithm operates and why it is able to effectively utilize a gradient-based strategy to solve non-convex optimization problems. Second, we examine the role of the progressive growing generative network architecture in PG-GLOnet in solving ultra-high dimensional problems. By understanding the relationship between network architecture and optimization procedure, we elucidate built-in assumptions used by PG-GLOnet in its search for the global optimum. With the generic GLOnet algorithm, the original optimization problem cited in Equation 1 is reframed as a related problem (Equation 2) that addresses a transformed, smoothened optimization landscape.
The key concepts that produce this landscape transformation and enable effective gradient-based optimization are outlined in Figure 3(a) and are: 1) distribution optimization, where the original problem involving the optimization of \(\mathbf{x}\) is transformed to a problem involving the optimization of parameters within a simple distribution \(P(\mathbf{x})\); 2) exponential transformation, where the objective function is exponentially weighted; 3) over-parameterization, where the distribution \(P(\mathbf{x})\) is now parameterized by a neural network with hundreds to thousands of weights; and 4) gradient estimation, where gradients that specify the evolution of the continuous distribution \(P(\mathbf{x})\) are accurately computed through discrete samplings of \(\mathbf{z}\).

**Distribution optimization.** With the concept of distribution optimization, the original problem of searching for an optimal \(\mathbf{x}\) is recast as a population-based search in which parameters within a distribution function are optimized, thereby enabling a search for the global optimum in a smoother and higher dimensional optimization landscape. This concept is shared by other population-based optimization algorithms, such as CMA-ES. To visualize the concept, we consider a non-convex one-dimensional function \(f(\mathbf{x})\) plotted as a blue line in the leftmost figure in Figure 3(a). The objective is to maximize \(f(\mathbf{x})\), and the function contains multiple local maxima separated by deep valleys. It is easy for optimization algorithms, particularly gradient-based algorithms, to get trapped in local optima. For example, if gradient descent optimization is used and is initialized at the yellow dot position, the algorithm will converge to the local optimum delineated by the red dot. With this approach, multiple independent gradient descent optimizations with random starting points are needed to increase the possibility of finding the global optimum. For these problems, gradient-free optimization heuristics are often employed, which can reduce the chances of trapping within suboptimal maxima but which introduce a more stochastic nature to the search process. However, if we consider the optimization of a distribution function that interacts with the global optimization landscape, local information at different parts of the landscape can be aggregated and collectively utilized to evolve this distribution in a manner that reduces issues of trapping within suboptimal maxima. Formally, we transform the optimization variable \(\mathbf{x}\) to parameters within the distribution \(P(\mathbf{x})\), and the globally optimal distribution is one that is narrowly peaked around the global optimum. Distribution functions can be explicitly parameterized in many ways. As a simple illustrative example that builds on our discussion of the one-dimensional \(f(\mathbf{x})\), we consider the one-dimensional Gaussian distribution denoted as \(P(\mathbf{x};\mu,\sigma)\), shown as the red curve in the leftmost figure in Figure 3(a). \(\mu\) and \(\sigma\) refer to the mean and standard deviation, respectively. With a Gaussian distribution function, the objective function is now transformed into the expected value of \(f(\mathbf{x})\) as a function of \((\mu,\sigma)\): \(\mathbb{E}_{\mathbf{x}\sim P(\mathbf{x};\mu,\sigma)}\,f(\mathbf{x})\). As this new optimization landscape is a function of two distribution parameters, \(\mu\) and \(\sigma\), it is two dimensional.
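A minimal numerical sketch of this distribution optimization (our own illustration with a toy \(f(\mathbf{x})\), using reparameterized sampling so that \((\mu,\sigma)\) receive gradients aggregated across the batch):

```python
import torch

def f(x):  # toy one-dimensional non-convex objective to maximize
    return torch.cos(5.0 * x) - 0.5 * x ** 2

mu = torch.tensor(2.0, requires_grad=True)
log_sigma = torch.tensor(0.5, requires_grad=True)  # start with a wide distribution
optimizer = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(300):
    eps = torch.randn(256)              # reparameterization: x = mu + sigma * eps
    x = mu + log_sigma.exp() * eps
    loss = -f(x).mean()                 # maximize E_{x ~ N(mu, sigma)} f(x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# As training proceeds, sigma shrinks and the distribution narrows
# around a high-performing optimum of f.
```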
We can directly visualize this new landscape by evaluating \(\int f(\mathbf{x})P(\mathbf{x};\mu,\sigma)d\mathbf{x}\) for all values of \((\mu,\sigma)\), and the result is summarized in the second figure from the left in Figure 3(a). The horizontal line section at the bottom of the contour plot, where \(\sigma\) equals zero, is the original one-dimensional \(f(\mathbf{x})\) with multiple optima. As \(\sigma\) increases to finite values above zero, the landscape becomes smoother. Mathematically, horizontal line sections for finite \(\sigma\) are calculated by convolving \(f(\mathbf{x})\) with the Gaussian function, producing a Gaussian blur that leads to smoothening. This smoothened landscape facilitates gradient-based optimization of \((\mu,\sigma)\) when the distribution is initialized to large \(\sigma\) values, and the final optimized distributions converge to the original \(f(\mathbf{x})\) space at the bottom of the plot. However, while this two-dimensional landscape is smoother than the original \(f(\mathbf{x})\), there remain multiple distribution parameter initializations for which the gradient-based optimizer converges to suboptimal maxima.

**Exponential transformation.** To further smoothen the optimization landscape and enhance the presence of the global optimum, we perform an exponential transformation of the objective function. Mathematically, the objective function for the distribution optimization problem becomes: \(\mathbb{E}_{\mathbf{x}\sim P(\mathbf{x};\mu,\sigma)}\exp\left[\frac{f(\mathbf{ x})}{T}\right]\). The temperature term \(T\) modulates the impact of the global optimum on the optimization landscape such that low \(T\) produces strong landscape modulation by the global optimum. For our one-dimensional \(f(\mathbf{x})\) example, the exponentially transformed landscape is plotted in the third figure from the left in Figure 3(a) and shows that the local optima have faded out, such that gradient-based optimization within this landscape is more likely to converge to the global optimum. The choice of \(T\) depends on the scale of \(f(\mathbf{x})\). Consider \(f(\mathbf{x})\) that is linearly normalized to span \((0,1)\).
Such normalization can typically be achieved based on prior knowledge about the upper and lower bounds of \(f(\mathbf{x})\). If we want to amplify \(f(\mathbf{x})\) for \(f(\mathbf{x})>f_{d}\) and suppress \(f(\mathbf{x})\) for \(f(\mathbf{x})<f_{d}\), where \(f_{d}\) is a division point between 0 and 1, the temperature is chosen to be \(T=f_{d}/\log(1+f_{d})\). For example, if \(f_{d}\) is chosen to be the golden ratio, then the temperature is roughly \(T=1.3\). In practice, the selection of \(f_{d}\) is problem specific, and \(T\) can be treated as a hyperparameter that can be manually tuned around 1 for tailoring to a particular problem.

\begin{table} \begin{tabular}{|c|l|l|l|l|l|} \hline & CG & CC-RDG3 & DGSC & FC-GLOnet +Local & PG-GLOnet +Local \\ & & & & Refinement & Refinement \\ \hline \(f_{1}\) & (3.65 \(\pm\) 0.07)e+04 & (2.36 \(\pm\) 0.11)e+03 & **(7.15 \(\pm\) 1.61)e+02** & (1.74 \(\pm\) 0.01)e+03 & (1.85 \(\pm\) 0.63)e+03 \\ \(f_{2}\) & (1.98 \(\pm\) 0.00)e+01 & (2.04 \(\pm\) 0.00)e+01 & (2.07 \(\pm\) 0.00)e+01 & (1.98 \(\pm\) 0.00)e+01 & **(4.38 \(\pm\) 0.75)e-03** \\ \(f_{3}\) & (2.08 \(\pm\) 0.18)e+07 & (2.27 \(\pm\) 0.30)e+06 & (3.27 \(\pm\) 0.66)e+06 & (8.76 \(\pm\) 2.99)e+05 & **(5.05 \(\pm\) 0.86)e+05** \\ \(f_{4}\) & (9.76 \(\pm\) 0.01)e+05 & (9.96 \(\pm\) 0.00)e+05 & (1.06 \(\pm\) 0.00)e+06 & (9.94 \(\pm\) 0.01)e+05 & **(2.95 \(\pm\) 0.82)e+02** \\ \(f_{5}\) & (1.15 \(\pm\) 0.97)e+09 & (1.45 \(\pm\) 0.32)e+08 & (1.79 \(\pm\) 0.53)e+08 & **(4.22 \(\pm\) 0.77)e+07** & (1.17 \(\pm\) 0.49)e+08 \\ \(f_{6}\) & (8.84 \(\pm\) 0.00)e+07 & (9.11 \(\pm\) 0.14)e+07 & (9.38 \(\pm\) 0.03)e+07 & (9.03 \(\pm\) 0.01)e+07 & **(4.66 \(\pm\) 0.79)e+04** \\ \hline \end{tabular} \end{table}
Table 1: Optimization results from conjugate gradient, CC-RDG3, DGSC, FC-GLOnet followed by local gradient descent, and PG-GLOnet followed by local gradient descent, as applied to 1000-dimensional benchmark functions.

**Over-parameterization.** To further enhance the ability for GLOnet to efficiently and reliably converge to the global optimum, we next consider the concept of over-parameterization, in which the distribution \(P(\mathbf{x})\) is now a neural network parameterized by weights \(\phi\). The objective function then becomes: \(\mathbb{E}_{\mathbf{x}\sim P(\mathbf{x};\phi)}\exp\left[\frac{f(\mathbf{x})}{ T}\right]\). Our use of a neural network is inspired by the fact that deep network training involves the solving of an extremely high dimensional non-convex optimization problem, that the convergence of the neural network is typically insensitive to initialization, and that good neural network parameters can be found using backpropagation. The underlying mathematical principles outlining why gradient descent is so effective for deep network training have been revealed to some extent by computer scientists in recent years [1, 2]. First, the parameter space of deep networks is a high-dimensional manifold, such that most local optima are equivalently good and the probability of converging to a bad optimum during training decreases quickly with network size. Second, these equivalently high performing local optima originate from neural network over-parameterization, which builds in redundancy in the optimization landscape that speeds up and stabilizes the gradient-based optimization process. To understand how this applies to GLOnet, we revisit our one-dimensional \(f(\mathbf{x})\) landscape in which local optima are separated by deep barriers. When the optimization landscape is transformed using \(P(\mathbf{x};\phi)\), it frames the optimization problem in a very high dimensional landscape, as the dimensionality of \(\phi\) is much higher than that of \(\mathbf{x}\). Solutions to the optimization problem therefore reside in a high-dimensional manifold, such that many different \(\phi\)'s serve as high performing local optima. Additionally, local optima in \(f(\mathbf{x})\) are no longer separated by deep barriers but are instead connected by pathways with low to no barriers in our transformed high dimensional landscape, mitigating trapping within these local optima during gradient-based optimization. The high dimensional landscape representing the transformed \(f(\mathbf{x})\) is visualized as a two-dimensional projection in the rightmost plot in Figure 3(a). The global optimum is now a connected band in the optimization landscape, as opposed to a single point in \(f(\mathbf{x})\), and there are fewer energy barriers preventing gradients from converging to the global optimum, enabling gradient descent optimization to be more robust and faster. We note that neural network depth and expressivity play a large role in determining the practical impact of over-parameterization on optimization, and as a demonstration, we compare the performance of GLOnets based on linear and deep non-linear networks in the Appendix.

Figure 3: Conceptualization of the GLOnets optimization platform. (a) Visualization of key concepts that enable effective gradient-based optimization within a non-convex landscape, including: 1) transforming the optimization problem to the optimization of parameters within a distribution; 2) exponential weighting of the objective function; 3) over-parameterization of the distribution function; and 4) effective gradient estimation during the network training procedure. (b) Role of the progressively growing PG-GLOnet network architecture in optimization, visualized for a single growing block applied to a two-dimensional problem. When \(\alpha\) is zero, PG-GLOnet searches within a one dimensional slice of the two dimensional landscape. As \(\alpha\) increases, the effective dimensionality of the PG-GLOnet output distribution increases and enables searching of more of the landscape. Upon the completion of PG-GLOnet training, the generator output distribution collapses to the global optimum.

**Gradient estimation.** A critical feature for maximizing the performance of GLOnet is ensuring that gradients used to evolve \(P(\mathbf{x})\), which are approximated using a finite batch of samples, are sufficiently accurate. There are two methods for gradient estimation that can be used for GLOnets. The first is to use a score function gradient estimator, which utilizes the evaluated derivatives of the probability distribution \(P(\mathbf{x};\phi)\) and \(f(\mathbf{x})\). This method for estimation requires explicit evaluation of derivatives of \(P(\mathbf{x};\phi)\) but only an implicit evaluation of \(\nabla_{\mathbf{x}}f\). The second is to use a path-wise gradient estimator, which relies on knowing the explicit derivatives of \(f(\mathbf{x})\) but for which the probability distribution \(P(\mathbf{x};\phi)\) can be implicit. Empirically, we find for GLOnet that the pathwise gradient estimator more consistently produces smaller gradient error compared with the score function gradient estimator, and we therefore implement the pathwise gradient estimator in Equation 4 (Xu et al., 2019; Mohamed et al., 2020). The pathwise gradient estimator is based on the principle of Monte Carlo estimation, such that the estimation error decreases with the inverse square root of batch size. Importantly, this estimation error is independent of dimension. As a result, GLOnet and specifically PG-GLOnet are able to operate with batch sizes that are independent of problem dimension, as demonstrated in Figures 2(c) and 2(d). This scaling of problem dimension without a required scaling in the number of function evaluations allows PG-GLOnet to readily scale and address the 1000-dimensional problems in Table 1 with modest computational resources.

**Progressive growth.** Direct searching within a high dimensional, non-convex landscape is an intractable problem. In the case of FC-GLOnet, which utilizes all of the features above, including distribution optimization and over-parameterization, the algorithm is still not effective in directly searching high dimensional landscapes (Table 1). With PG-GLOnet, the progressive growing architecture regularizes the optimization procedure to search first within a relatively coarse, low dimensional representation of the optimization landscape, followed by relatively local searching within increasingly higher dimensional landscape representations.
This hierarchical increase of landscape dimensionality directly corresponds to the serial toggling of \(\alpha\) within the series of growing blocks in the generator. As such, the optimization landscape is evolved over the course of PG-GLOnet training in a manner that maintains the tractability of the optimization problem. To further visualize the relationship between generative network architecture and optimization search procedure, we consider a non-convex two-dimensional landscape shown in Figure 3(b). The generative network contains a single growing block, and the toggling of \(\alpha\) from zero to one modulates the effective dimensionality of the generator output from one to two. Initially, \(\alpha\) is zero and the vector outputted by the generator has the same effective dimensionality as its input vector, namely one. The optimization landscape being searched is therefore a diagonal line within the two-dimensional landscape (Figure 3(b), left-most plot), and with optimal solutions near the center of the line, the outputted generator distribution (red coloring in plot) narrows towards this region. As \(\alpha\) is increased, the generator output vector becomes dominated by its linear transformation branch, as opposed to its upsampling branch, and it has an effective dimensionality that increases and eventually doubles. In our PG-GLOnet visualization, this increase in effective dimensionality corresponds to a broadening of the optimization landscape being searched, and the outputted generator distribution widens relative to the diagonal line. Upon the completion of network growth, the PG-GLOnet distribution converges to the global optimum. The success of PG-GLOnet is therefore predicated on the ability for the outputted distribution of the generative network to be narrowed down to smaller but more promising regions of a coarse optimization landscape, prior to increasing the landscape dimensionality and adding more degrees of freedom to the problem. This concept therefore works particularly well for problems where optima within a low dimensional analogue of the optimization landscape help to inform the presence and position of optima within the high dimensional landscape. This regularization of the optimization procedure also indicates that for problems where optima within coarse variants of the optimization landscape do not inform the position of the global optimum, PG-GLOnet will not work well.

In summary, we present a general global optimization metaheuristic based on progressive growing deep generative neural networks termed PG-GLOnet. Unlike other population-based algorithms, PG-GLOnet uses gradient-based optimization to evolve an expressive, complex distribution in the optimization landscape to one centered around promising optima. This complex distribution, parameterized using the deep network framework, utilizes loss function engineering and over-parameterization to facilitate effective gradient-based searching. PG-GLOnet is particularly well suited to address ultra-high dimensional problems because the required batch size is independent of problem dimension and the progressively growing network architecture facilitates a hierarchical search process within a landscape with progressively growing effective dimensionality. This use of a hierarchical search strategy also provides bounds as to the types of problems and landscapes that are suited for PG-GLOnet optimization.
We anticipate that further research in the tailoring of application-specific generative network architectures to particular optimization landscapes will enable the GLOnet platform to extend and adapt to an even wider range of non-convex, high dimensional optimization problems.
2305.08740
Temporal and Heterogeneous Graph Neural Network for Financial Time Series Prediction
The price movement prediction of the stock market has been a classical yet challenging problem, attracting the attention of both economists and computer scientists. In recent years, graph neural networks have significantly improved prediction performance by employing deep learning on company relations. However, existing relation graphs are usually constructed by handcrafted human labeling or natural language processing, which suffer from heavy resource requirements and low accuracy. Besides, they cannot effectively respond to dynamic changes in relation graphs. Therefore, in this paper, we propose a temporal and heterogeneous graph neural network-based (THGNN) approach to learn the dynamic relations among price movements in financial time series. In particular, we first generate the company relation graph for each trading day according to their historical prices. Then we leverage a transformer encoder to encode the price movement information into temporal representations. Afterward, we propose a heterogeneous graph attention network to jointly optimize the embeddings of the financial time series data produced by the transformer encoder and infer the probability of target movements. Finally, we conduct extensive experiments on the stock markets in the United States and China. The results demonstrate the effectiveness and superior performance of our proposed methods compared with state-of-the-art baselines. Moreover, we also deploy the proposed THGNN in a real-world quantitative algorithmic trading system; the accumulated portfolio return obtained by our method significantly outperforms other baselines.
Sheng Xiang, Dawei Cheng, Chencheng Shang, Ying Zhang, Yuqi Liang
2023-05-09T11:17:46Z
http://arxiv.org/abs/2305.08740v1
# Temporal and Heterogeneous Graph Neural Network for Financial Time Series Prediction

###### Abstract.

The price movement prediction of the stock market has been a classical yet challenging problem, attracting the attention of both economists and computer scientists. In recent years, graph neural networks have significantly improved prediction performance by employing deep learning on company relations. However, existing relation graphs are usually constructed by handcrafted human labeling or natural language processing, which suffer from heavy resource requirements and low accuracy. Besides, they cannot effectively respond to dynamic changes in relation graphs. Therefore, in this paper, we propose a temporal and heterogeneous graph neural network-based (THGNN) approach to learn the dynamic relations among price movements in financial time series. In particular, we first generate the company relation graph for each trading day according to their historical prices. Then we leverage a transformer encoder to encode the price movement information into temporal representations. Afterward, we propose a heterogeneous graph attention network to jointly optimize the embeddings of the financial time series data produced by the transformer encoder and infer the probability of target movements. Finally, we conduct extensive experiments on the stock markets in the United States and China. The results demonstrate the effectiveness and superior performance of our proposed methods compared with state-of-the-art baselines. Moreover, we also deploy the proposed THGNN in a real-world quantitative algorithmic trading system; the accumulated portfolio return obtained by our method significantly outperforms other baselines.
background which may lead to different relation graphs. Besides, current natural language processing (NLP) techniques still face significant shortcomings in high-accuracy relation extraction (Kumar et al., 2017). In other words, the relations may be misled by either unilateral text news or inaccurate extracting models. In addition, these relations may dynamically change in time series. For example, the main business of a company would change according to the market demands, and the supply chain graph would be upgraded because of technological evolution (Kumar et al., 2017). Existing graph learning price prediction methods are inevitably suboptimal in learning these fluctuating and dynamic situations. To address the above challenges, we propose a novel temporal and heterogeneous graph neural network-based method for financial time series prediction. Specifically, we directly model the relations of price time series of entities based on historical data and represent them in a temporal and heterogeneous graph, i.e., a company relational graph. After obtaining the company relational graph, we leverage sequential transformers to encode the historical prices and graph neural networks to encode the internal relations of each company. Specifically, we update each company's representations by aggregating information from their neighbors in company relational graphs in two steps. The first step is a time-aware graph attention mechanism. The second is a heterogeneous graph attention mechanism. We thoroughly evaluate our approach on both the S&P 500 and CSI 300 datasets in the United States and China's stock markets. The experimental results show that our method significantly outperforms state-of-the-art baselines. To justify the design of our model, we conduct ablation studies to demonstrate the effectiveness and necessity of each component of our method, including the transformer encoder, time-aware graph attention, and heterogeneous graph attention. Finally, we deploy our model in a real-world quantitative algorithmic trading platform, hosted by EMoney Inc., a leading financial service provider in China. The cumulative returns of portfolios contributed by our approach are significantly better than those of existing models in the financial industry. We will release the dataset as well as the source codes of the proposed techniques along with the paper.
In conclusion, our principal contributions are summarized as follows:

* We propose a graph learning framework to effectively model the internal relations among entities for financial time series prediction, which fits the dynamic market status and is concordant with the ground-truth price movements.
* We design a temporal and heterogeneous graph neural network model to learn the dynamic relationships by two-stage attention mechanisms. The proposed model is concise and effective in jointly and automatically learning from historical price sequences and internal relations.
* Our proposed THGNN is simple and can be easily implemented in industry-level systems. Extensive experiments on both the United States and China stock markets demonstrate the superior performance of our proposed methods. We also extensively evaluated its effectiveness on a real-world trading platform.

Footnote 1: https://www.spglobal.com/spdlj/en/indices/equity/sp-500
Footnote 2: http://www.cffex.com/cn-new/CSI300/indexOptions.html
Footnote 3: http://www.emoney.cn/

## 2. Related Works

### Financial Time Series Learning

It is widely known that price movements of stocks are affected by various aspects in the financial market (Kumar et al., 2017). In previous studies, a common strategy is to manually construct various factors as feature inputs (Kumar et al., 2017; Li et al., 2017). For example, Michel et al. (Michel et al., 2017) integrate market signals with stock fundamental and technical indicators to make decisions. Li et al. (Li et al., 2017) establish a link between news articles and the related entities for stock price movement forecasting. A large number of existing methods employ recurrent neural networks and their variants, such as LSTM (Hochreiter and Schmidhuber, 2015) and GRU (Hochreiter and Schmidhuber, 2015), to learn the sequential latent features of historical information and employ them for the downstream prediction task (Kumar et al., 2017). In these works, the market signal processing of each stock is carried out independently. However, this inevitably ignores the internal relationships among stocks and would lead to suboptimal performance. Some works (Hochreiter and Schmidhuber, 2015) leverage the correlation information as model inputs, but cannot automatically capture the dynamic changes of relations. In this article, we model the relationships between stocks as dynamic company relation graphs and jointly learn the graph relations and historical sequence features automatically for future price movement prediction.

### Graph Learning for Stock Prediction

Researchers have shown that the price movement of a stock is not only related to its own historical prices, but also connected to its linked stocks (Hochreiter and Schmidhuber, 2015). The link relations include suppliers and customers, shareholders and investors, etc. Existing works normally employ knowledge graphs to store and represent these relations (Kumar et al., 2017; Li et al., 2017). Recently, the graph neural network (GNN) (Kumar et al., 2017) was proposed to effectively learn on graph-structured data, and it has shown superior performance in various domains, including fraud detection (Chen et al., 2016; Chen et al., 2016), computer vision (Wang et al., 2017; Wang et al., 2017), etc. Researchers have also introduced advanced GNN-based approaches to the stock price prediction task. For example, Chen et al. (Chen et al., 2016)
model the supply chain relationships of entities as knowledge graphs and use graph convolution networks to predict stock movements. Ramit et al. (Ramit et al., 2017) leverage an attentional graph neural network on connections constructed from social media texts and company correlations. Cheng et al. (Cheng et al., 2016) leverage a multi-modality graph neural network on connections constructed from historical price series, media news, and associated events. However, the graphs constructed by these methods are limited to constant, predefined corporate relationships, which are produced by handcraft editing or natural language processing techniques and suffer from heavy labeling costs and low extraction accuracy (Li et al., 2017). But the actual corporate relation graph evolves frequently over time. Besides, the company relation graph is also heterogeneous, which means there are multiple relation types among entities. Therefore, existing methods cannot exploit the full information from real-life company relation graphs. In this paper, we construct the relation graph dynamically and automatically based on ground-truth historical price sequences and then propose a novel temporal and heterogeneous graph neural network method to jointly learn sequential and relational features for more accurate stock price prediction. We demonstrate the effectiveness of our methods through extensive experiments and a real-world trading platform.

## 3. The Proposed Method

In this section, we introduce the framework of our proposed temporal and heterogeneous graph neural network and each of its components in detail. Our model takes historical price sequences as inputs and infers the probability of stock movements as output. We represent the relations of stocks in a dynamic heterogeneous graph with two types of edges. We then jointly encode the historical and relational features by transformers and a heterogeneous graph attention network. We first report the problem definition and then each module of our method in turn.

### Problem Definition

Different from traditional graph-based methods that construct static and homogeneous graphs by handcraft labeling or natural language processing techniques to infer stock movements, our model represents the company relation graph as a collection of temporal and heterogeneous graphs, which are automatically generated from historical price sequences. In these graphs, each node denotes an equity and each edge represents a relation between equities over time. Temporal graphs are composed of timestamped edges and timestamped nodes (Srivastava et al., 2015; Wang et al., 2016). Each node might be associated with multiple relationships and multiple timestamped edges on different trading days. There are multiple types of edges \(E=\{E_{1},...,E_{r}\}\) in company relation graphs. And the occurrences of nodes \(V=\{V^{t_{1}},...,V^{T}\}\) and edges \(E=\{E^{t_{1}},...,E^{T}\}\) are different on different trading days. Table 1 summarizes the symbols introduced in this paper.

**Definition 3.1**.: **Temporal and Relational Occurrence**_. In a temporal company relation graph, an edge \(e\) is associated with a series of temporal occurrences \(e=\{e^{t_{1}},e^{t_{2}},...\}\), which indicate the occurrences of edge \(e\) at trading days \(\{t_{1},t_{2},...\}\) in the company relation graph.
Each type of relational occurrence is associated with a series of temporal occurrences with \(E=\{E_{r_{1}},E_{r_{2}},...\}\), which indicate the occurrences of edge \(e\) in different relationships \(\{r_{1},r_{2},...\}\). Same as the temporal occurrences of edge \(e\), a node \(v\) is associated with a set of temporal occurrences with \(v=\{v^{t_{1}},v^{t_{2}},...\}\)._

**Definition 3.2**.: **Temporal and Heterogeneous Graph**_. A temporal and heterogeneous company relation graph \(\tilde{G}=(\tilde{V},\tilde{E})\) is formed by a set of temporal nodes \(\tilde{V}=\{v_{1}^{t_{n_{1}}},...,v_{n}^{t_{m}}\}\) and a series of sets of temporal edges \(\tilde{E}=\{\tilde{E}_{1},...,\tilde{E}_{r}\}\), where \(\tilde{E}_{r}=\{e_{1}^{t_{n_{1}}},...,e_{m}^{t_{m}}\}\) denotes the edges of relation \(r\), and \(e_{i}^{t_{n_{i}}}=(u_{e_{i}},v_{e_{i}})^{t_{n_{i}}}\) denotes a temporal edge._

In existing works (Wang et al., 2016; Wang et al., 2016; Wang et al., 2016), the graph neighborhood \(\mathcal{N}(v)\) of node \(v\) is defined as static or homogeneous. Here, we generalize the definition of the graph neighborhood to the temporal and heterogeneous graph, as follows:

**Definition 3.3**.: **Temporal and Heterogeneous Graph Neighborhood**_. Given a temporal node \(v\), the neighborhood of \(v\) is defined as \(\mathcal{N}(v)=\{v_{i}\,|\,f_{sp}(v_{i},v)\leq d_{N},|t_{v}-t_{v_{i}}|\leq t_{N}\}\), where \(f_{sp}(\cdot,\cdot)\) denotes the shortest path length between two nodes, and \(d_{N}\) and \(t_{N}\) denote hyper-parameters. As for the heterogeneous graph, we define \(\mathcal{N}_{r}(\cdot)\) as the neighborhood function of relation \(r\)._

Finally, we formally define the stock movement prediction problem as follows:

**Input:**_Historical price sequences of listed companies \(\mathbf{X}=\{x_{1},x_{2},\cdots,x_{n}\}\), where each \(x_{i}=\{x_{i,1},x_{i,2},\cdots,x_{i,T}\}\) denotes the historical price sequence of company \(i\). We then use \(\mathbf{X}\) to generate the temporal and heterogeneous company relation graph \(\tilde{G}\), with multiple types of temporal edges \(\{\tilde{E}_{r_{1}},\tilde{E}_{r_{2}},...\}\), for downstream tasks._

**Output:**_The probability \(\tilde{Y}\) of price movements of each equity._

### Stock Correlation Graph Generation

In this subsection, we report the process of the temporal and heterogeneous graph construction. As mentioned in previous studies (Wang et al., 2016; Wang et al., 2016), there may be multiple relationships between companies (such as suppliers and customers, shareholders and invested companies). Different from conventional knowledge graph-based approaches that construct relations by human labeling or NLP techniques, generating relations directly based on market trend signals has proved to be effective (Wang et al., 2016; Wang et al., 2016) in practice; it does not require additional, potentially ambiguous domain knowledge or text news sources, and it is easy to implement. Therefore, in this paper, we obtain the correlation matrix by directly calculating the correlation coefficients between ground-truth historical stock market signals. Then, the relationship between companies is determined according to the value of each element of the correlation matrix. The relationship between companies may be positive (correlation \(>\) threshold) or negative (correlation \(<\) -threshold). In order to reduce noise, we only connect edges whose correlation absolute value is greater than the threshold; the remaining pairs are considered not connected.
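A compact sketch of this construction step (our own reading of the procedure; here the correlation is computed from a single price signal per stock, whereas the experiments average coefficients over several signals, and the 0.6 threshold follows the settings reported later):

```python
import numpy as np

def build_relation_edges(signals, threshold=0.6):
    """signals: array of shape (n_stocks, window), one market signal per stock.

    Returns index pairs for positive and negative edges based on pairwise
    correlation coefficients.
    """
    corr = np.corrcoef(signals)                            # (n_stocks, n_stocks)
    pos = np.argwhere(np.triu(corr > threshold, k=1))      # positively related pairs
    neg = np.argwhere(np.triu(corr < -threshold, k=1))     # negatively related pairs
    return pos, neg

# One graph per trading day: slide a 20-day window over the price history,
# e.g. E_pos, E_neg = build_relation_edges(price_history[:, t - 20:t])
```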
So far, the edges \(E\) of the company relation graph are generated. Therefore, we model the inter-company relation graph as a heterogeneous graph with two relationships, i.e., \(G=(V,\{E_{r_{1}},E_{r_{2}},...\})\), with \(r\in\{pos,neg\}\). As the relationships between companies tend to be dynamic, we generate the company relation graph in temporal format as our model's input. In particular, within \(T\) trading days, we generate temporal and heterogeneous company relation graphs with \(T\) timestamps, which is formulated as \(\tilde{G}=(\tilde{V},\{\tilde{E}_{pos},\tilde{E}_{neg}\})\). Finally, the generated graphs and original sequence inputs are fed to the downstream learning task simultaneously.

\begin{table} \begin{tabular}{|c|l|} \hline **Symbol** & **Definition** \\ \hline \hline \(\mathbf{X}\) & the historical price of listed companies \\ \hline \(\tilde{Y}\) & the probability of price movements \\ \hline \(n\) & the total number of nodes \\ \hline \(m\) & the total number of edges \\ \hline \(r\) & the number of relationships in graph \(\tilde{G}\) \\ \hline \(T\) & the number of trading days in \(\tilde{G}\) \\ \hline \(\tilde{G}=(\tilde{V},\{\tilde{E}_{r_{1}},\tilde{E}_{r_{2}},...\})\) & the temporal and heterogeneous graph \\ \hline \(\tilde{V}=\{v_{1}^{t_{n_{1}}},...,v_{n}^{t_{m}}\}\) & the set of temporal nodes \\ \hline \(\tilde{E}_{r}=\{e_{1}^{t_{r_{1}}},...,e_{m}^{t_{m}}\}_{r}\) & the set of temporal edges of relation \(r\) \\ \hline \(\mathcal{N}(\cdot)\) & the neighborhood function \\ \hline \(d\) & the number of dimensions \\ \hline \end{tabular} \end{table}
Table 1. The summary of symbols

### Historical Price Encoding

The input stock movement feature of price sequences is defined as \(\mathbf{X}^{t}\in\mathbb{R}^{n\times T\times d_{feat}}\) on trading day \(t\), where \(n\) denotes the number of stocks, \(T\) denotes the number of trading days before \(t\), and \(d_{feat}\) denotes the dimension of historical price features. We first leverage a linear transformation and positional encoding (PE) (Wang et al., 2017; Wang et al., 2017) on the trading features to obtain the input tensor \(\mathbf{H}^{t}\in\mathbb{R}^{n\times T\times d_{in}}\), which is formulated as follows: \[\mathbf{\hat{H}}^{t}= \mathbf{W}_{in}\mathbf{X}^{t}+\mathbf{b}_{in}\] \[\mathbf{H}^{t}= \mathbf{\hat{H}}^{t}+\text{PE}\] \[\text{PE}(p,2i)= \sin(p/10000^{2i/d_{in}})\] \[\text{PE}(p,2i+1)= \cos(p/10000^{2i/d_{in}}) \tag{1}\] where \(p\in\{1,2,\dots,T\}\) is the trading day position, \(i\) is the dimension, \(d_{in}\) denotes the dimension of input features, and \(\mathbf{W}_{in}\in\mathbb{R}^{d_{feat}\times d_{in}}\) and \(\mathbf{b}_{in}\in\mathbb{R}^{d_{in}}\) denote the learnable parameters. After the linear transformation, we propose to leverage a multi-head attentional transformer to encode the input features for each stock on each day. Then, the proposed encoder outputs \(\mathbf{H}^{t}_{enc}\in\mathbb{R}^{n\times T\times d_{enc}}\) for downstream tasks, where \(d_{enc}\) denotes the output dimension of the encoder.
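A sketch of this input layer (Eq. 1); the dimensions follow the symbols above, while the implementation details are our own and assume an even \(d_{in}\):

```python
import torch

def input_embedding(X, W_in, b_in):
    """Eq. 1: map (n, T, d_feat) price features to (n, T, d_in) embeddings with PE."""
    H = X @ W_in + b_in                                   # linear transformation
    T, d_in = H.shape[1], H.shape[2]
    p = torch.arange(1, T + 1, dtype=torch.float32).unsqueeze(1)  # positions 1..T
    i = torch.arange(0, d_in, 2, dtype=torch.float32)
    angles = p / (10000.0 ** (i / d_in))                  # (T, d_in / 2)
    pe = torch.zeros(T, d_in)
    pe[:, 0::2] = torch.sin(angles)                       # even dimensions
    pe[:, 1::2] = torch.cos(angles)                       # odd dimensions
    return H + pe                                         # broadcast over stocks
```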
Mathematically, we formulate the historical feature encoder's output \(\mathbf{H}^{t}_{enc}\) as follows: \[\mathbf{H}^{t}_{enc}=\text{Concat}(\text{EncHead}^{t}_{1},\dots,\text{Enc Head}^{t}_{h_{enc}})\mathbf{W}_{o} \tag{2}\] where \(\mathbf{W}_{o}\in\mathbb{R}^{h_{enc}d_{o}\times d_{enc}}\) denotes the output projection matrix, \(h_{enc}\) denotes the number of heads in the encoder, \(d_{o}\) denotes the output dimension of each head, Concat denotes a concatenation of the outputs of the heads, and \(\text{EncHead}^{t}_{i}\in\mathbb{R}^{n\times T\times d_{o}}\) denotes the output of an encoder head with \(\text{EncHead}^{t}_{i}=\text{Attention}(\mathbf{Q}^{t}_{i},\mathbf{K}^{t}_{i}, \mathbf{V}^{t}_{i})\), which is formulated as follows: \[\mathbf{Q}^{t}_{i}=\mathbf{H}^{t}\mathbf{W}^{Q}_{i},\mathbf{K}^{t}_{i}= \mathbf{H}^{t}\mathbf{W}^{K}_{i},\mathbf{V}^{t}_{i}=\mathbf{H}^{t}\mathbf{W}^ {V}_{i}\] \[\text{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})= \text{softmax}(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{in}}}) \mathbf{V} \tag{3}\] where \(\mathbf{W}^{Q}_{i}\in\mathbb{R}^{d_{in}\times d_{hidden}}\), \(\mathbf{W}^{K}_{i}\in\mathbb{R}^{d_{in}\times d_{hidden}}\), \(\mathbf{W}^{V}_{i}\in\mathbb{R}^{d_{in}\times d_{o}}\) denote the projection matrices, and \(d_{hidden}\) denotes the dimension of the hidden layer.

### Temporal Graph Attention Mechanism

Given the historical sequence encoder output \(\mathbf{H}^{t}_{enc}\) and the temporal relation graph \(\tilde{G}\), we propose to employ a graph attention mechanism on the sequential and heterogeneous inputs. In particular, we flatten the embeddings of all nodes to \(\mathbf{H}^{t}_{enc}\in\mathbb{R}^{n\times Td_{enc}}\) and leverage a two-stage temporal attention mechanism to aggregate messages from graph structures and temporal sequences, which is illustrated in Figure 1 (c). The two-stage temporal graph attention layers aggregate messages from both the positive and negative neighbors simultaneously. For each relationship \(r\in\{pos,neg\}\), the message aggregation is formulated as follows: \[\mathbf{H}^{t}_{r}= \text{Concat}(\text{TgaHead}_{1},...,\text{TgaHead}_{h_{tga}}) \mathbf{W}_{o,r} \tag{4}\] where \(\mathbf{H}^{t}_{r}\in\mathbb{R}^{n\times d_{att}}\) denotes the output of the temporal graph attention layer on trading day \(t\), \(\mathbf{W}_{o,r}\in\mathbb{R}^{h_{tga}Td_{enc}\times d_{att}}\) denotes the output projection matrix, \(h_{tga}\) denotes the number of heads, and each head of the temporal graph attention layer, \(\text{TgaHead}_{i}\in\mathbb{R}^{n\times Td_{enc}}\), is formulated row-wise for each node \(q^{t}\in V\) as follows: \[\text{TgaHead}_{i}[q^{t}]=\sigma\Big(\sum_{u^{t}\in\mathcal{N}_{r}(q^{t} )}\alpha^{i}_{u^{t},q^{t}}\mathbf{h}_{u^{t}}\Big) \tag{5}\] where \(\sigma\) denotes the activation function, \(\mathbf{h}_{u^{t}}\in\mathbb{R}^{Td_{enc}}\) denotes the \(u^{t}\)-th row of the historical price embedding \(\mathbf{H}^{t}_{enc}\), and \(\alpha^{i}_{u^{t},q^{t}}\) denotes the importance of temporal edge \((u^{t},q^{t})\) in the \(i\)-th head, which is formulated as follows: \[\alpha^{i}_{u^{t},q^{t}}=\frac{\exp(\text{LeakyReLU}(\mathbf{a}^{T}_{r,i}[ \mathbf{h}_{u^{t}}\|\mathbf{h}_{q^{t}}]))}{\sum_{k^{t}\in\mathcal{N}_{r}(q^{t} )}\exp(\text{LeakyReLU}(\mathbf{a}^{T}_{r,i}[\mathbf{h}_{k^{t}}\|\mathbf{h}_{ q^{t}}]))} \tag{6}\] where \(\mathbf{a}_{r,i}\in\mathbb{R}^{2Td_{enc}}\) denotes the weight vector of relation \(r\) and the \(i\)-th head.
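For concreteness, one attention head of this layer (Eqs. 5 and 6) for a single relation can be sketched as follows (our own simplified implementation; edge indices and the flattened embeddings are assumed to be precomputed):

```python
import torch
import torch.nn.functional as F

def tga_head(H, edges, a):
    """One temporal graph attention head (Eqs. 5-6) for a single relation r.

    H: (n, T*d_enc) flattened price embeddings; edges: (m, 2) long tensor of
    (source u, target q) node pairs; a: (2*T*d_enc,) attention weight vector.
    """
    u, q = edges[:, 0], edges[:, 1]
    scores = F.leaky_relu(torch.cat([H[u], H[q]], dim=-1) @ a)   # (m,)
    alpha = torch.exp(scores - scores.max())    # unnormalized attention (Eq. 6)
    denom = torch.zeros(H.shape[0]).index_add_(0, q, alpha)
    alpha = alpha / denom[q]                    # softmax over each neighborhood
    out = torch.zeros_like(H).index_add_(0, q, alpha.unsqueeze(-1) * H[u])
    return torch.relu(out)                      # activation sigma (Eq. 5)
```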
Figure 1. The proposed Temporal and Heterogeneous Graph Neural Network architecture for stock movement prediction. The first part is the generation of a stock correlation graph, which builds dynamic relations for stocks in the market every trading day. The second part is the historical price encoding, which selects a temporal node \(v^{t}\) and its neighbor nodes to encode the historical price information. Transformer encoders share their parameters. The third part is the graph attention layer, which adaptively calculates the importance of the neighbors and aggregates the information according to the neighbors' importance. The fourth part is the heterogeneous graph attention layer, which adaptively calculates the importance and aggregates information from different types of neighbors. Then, we leverage a multi-layer perceptron to give the prediction of each stock's future movement.

### Heterogeneous Graph Attention Mechanism

As shown in Figure 1 (d), we already have messages from different types of neighbors through the two-stage attention mechanism. Then, we propose the heterogeneous graph attention network to learn from the different relationships in relation graphs. We define the message sources as three types of embeddings, namely, messages from ourselves \(\mathbf{H}_{self}^{t}\), positive neighbors \(\mathbf{H}_{pos}^{t}\), and negative neighbors \(\mathbf{H}_{neg}^{t}\), respectively. \(\mathbf{H}_{self}^{t}\in\mathbb{R}^{n\times d_{att}}\) is derived from \(\mathbf{H}_{enc}^{t}\) through a linear transformation with \(\mathbf{H}_{self}^{t}=\mathbf{W}_{self}\mathbf{H}_{enc}^{t}+\mathbf{b}_{self}\), where \(\mathbf{W}_{self}\in\mathbb{R}^{Td_{enc}\times d_{att}}\) and \(\mathbf{b}_{self}\in\mathbb{R}^{d_{att}}\) denote the learnable parameters. \(\mathbf{H}_{pos}^{t}\) and \(\mathbf{H}_{neg}^{t}\) are derived from the graph attention mechanism in Section 3.4. Taking the three groups of node embeddings as input, we can adaptively generate the importance of the different relationships through an attention mechanism. The weights of the three relationships (\(\beta_{self},\beta_{pos},\beta_{neg}\)) can be shown as follows: \[(\beta_{self},\beta_{pos},\beta_{neg})=\mathrm{HGA}(\mathbf{H}_{self}^{t}, \mathbf{H}_{pos}^{t},\mathbf{H}_{neg}^{t}) \tag{7}\] We first use three Multi-Layer Perceptrons (MLPs) to transform these three embeddings. Then we measure the importance of each embedding using a heterogeneous attention vector \(\mathbf{q}\). Furthermore, we average the importance of all node embeddings, which can be explained as the importance of each company relation. The importance of each company relation, denoted as \(r\in\{self,pos,neg\}\), is shown as follows: \[w_{r}=\frac{1}{|\tilde{V}|}\sum_{v^{t}\in\tilde{V}}\mathbf{q}^{T}\tanh(\mathbf{W}\mathbf{h}_{v^{t},r}+\mathbf{b}) \tag{8}\] where \(\mathbf{W}\in\mathbb{R}^{d_{att}\times d_{q}}\) and \(\mathbf{b}\in\mathbb{R}^{d_{q}}\) are the parameters of the MLP, \(\mathbf{q}\in\mathbb{R}^{d_{q}}\) denotes the attention vector, and \(\mathbf{h}_{v^{t},r}\) denotes the \(v^{t}\)-th row of \(\mathbf{H}_{r}^{t}\). Note that all of the above parameters are shared across all relationships of node embeddings.
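The relation-level attention of Eqs. 7 and 8, together with the weighted aggregation presented next, can be condensed into a few lines (a sketch under our own naming; the parameters W, b, and q are shared across relations as stated above):

```python
import torch

def hga(H_self, H_pos, H_neg, W, b, q):
    """Score each relation (Eq. 8), softmax the scores into (beta_self,
    beta_pos, beta_neg), and mix the three embeddings accordingly.

    H_*: (n, d_att) embeddings; W: (d_att, d_q); b: (d_q,); q: (d_q,).
    """
    stacked = torch.stack([H_self, H_pos, H_neg])      # (3, n, d_att)
    w = (torch.tanh(stacked @ W + b) @ q).mean(dim=1)  # (3,) relation importances
    beta = torch.softmax(w, dim=0)                     # relation weights
    return (beta.view(3, 1, 1) * stacked).sum(dim=0)   # final embedding Z^t
```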
After obtaining the importance of each relationship, we calculate the contribution of each relationship and obtain the final embedding \(\mathbf{Z}^{t}\in\mathbb{R}^{n\times d_{att}}\) as follows: \[\beta_{r}= \frac{\exp(w_{r})}{\sum_{r\in\{self,pos,neg\}}\exp(w_{r})}\] \[\mathbf{Z}^{t}= \sum_{r\in\{self,pos,neg\}}\beta_{r}\cdot\mathbf{H}_{r}^{t} \tag{9}\] For a better understanding of the aggregation process of the heterogeneous graph attention layer, we give a brief explanation in Figure 1 (d). Then we apply the final embedding to a semi-supervised node classification task.

### Optimization Objectives

Here we give the implementation of the objective function. We model the stock movement prediction task as a semi-supervised node classification problem. Specifically, we select the 200 stocks whose future movements are ranked in the top-100 or bottom-100, and label the corresponding nodes as 1 and 0, respectively. Then, we use one layer of MLP as the classifier to get the classification results of the labeled nodes. Furthermore, we use binary cross-entropy to calculate the objective function \(\mathcal{L}\), which is formulated as follows: \[\hat{\mathbf{Y}}= \sigma(\mathbf{W}\mathbf{Z}_{l}^{t}+\mathbf{b})\] \[\mathcal{L}= -\sum_{l\in\mathcal{Y}_{l}}\left[\mathbf{Y}_{l}^{t}\log(\hat{ \mathbf{Y}}_{l})+(1-\mathbf{Y}_{l}^{t})\log(1-\hat{\mathbf{Y}}_{l})\right] \tag{10}\] where \(\mathcal{Y}_{l}\) denotes the set of labeled nodes, \(\mathbf{Y}_{l}^{t}\) and \(\mathbf{Z}_{l}^{t}\) denote the label and embedding of the labeled node \(l\), respectively, \(\sigma\) denotes the Sigmoid activation function, and \(\mathbf{W}\) and \(\mathbf{b}\) are the parameters of the MLP. With the guide of the labeled data, we use the Adam (Kingma et al., 2014) optimizer to update the parameters of our proposed method. Please note that we use this objective function to jointly optimize the parameters of the historical price encoder, the temporal and heterogeneous graph neural network, and the node classifier.

## 4. Experiments

In this section, we first introduce the datasets and experimental settings. Then we report the experimental results in detail on real-world datasets and applications.

### Experimental Settings

**Datasets.** Extensive experiments are conducted in both the United States and China's stock markets by choosing the constituent entities of the S&P 500 and CSI 300 indices. The historical price data from 2016 to 2021 are chosen as our datasets. In addition to historical price data, our input data also include company relation graphs. The graphs are generated by the stock price correlation matrix, which is introduced in Section 3.2. The stock price correlation matrix of each day is determined by the historical price movement of the past 20 trading days. Specifically, we compare the opening price, closing price, and trading volume of each pair of stocks, calculate the correlation coefficient between them, and take the mean value as the element of the correlation matrix. **Parameter Settings.** Our temporal graph \(\tilde{G}\) contains company relationships for 20 trading days. The \(d_{\mathcal{N}}\) and \(t_{\mathcal{N}}\) of the neighborhood function \(\mathcal{N}(\cdot)\) are both set as 1. During the graph generation process, the threshold for generating an edge is set as 0.6. The historical price data of the previous 20 trading days are used as input features. The feature dimension \(d_{feat}\) of the encoding layer is 6, and the input dimension \(d_{in}\) and encoding dimension \(d_{enc}\) of the encoding layer are both 128.
The hidden dimension \(d_{hidden}\) is 512, the dimension of value \(d_{o}\) is 128, and the number of heads \(h_{enc}\) is 8. In the temporal graph attention layer, \(d_{att}\) is 256 and the number of heads \(h_{tga}\) is 4. The dimension of the attention vector in the heterogeneous graph attention layer, \(d_{q}\), is 256. **Trading Protocols.** On the basis of (Han et al., 2014), we use the daily buy-hold-sell trading strategy to evaluate the performance of stock movement prediction methods in terms of returns. On each trading day in the test period (from January 1, 2020 to December 31, 2020), we use simulated stock traders to execute transactions: 1. When trading day \(t\) closes, traders use the method to get the prediction score, that is, the ranking of the predicted rate of return of each stock. 2. When trading day \(t+1\) opens, the trader sells the stocks bought on trading day \(t\). Meanwhile, traders buy the stocks with high expected returns, i.e., the stocks with top-\(k\) scores. 3. Please note that if a stock is continuously rated with the highest expected return, the trader holds the stock. In calculating the cumulative return on investment, we follow several simple assumptions: 1. Traders spend the same amount on each trading day (for example, $50,000). We make this assumption to eliminate the time dependence of the testing process in order to make a fair comparison. 2. There is always enough liquidity in the market to fill orders at the opening price on day \(t+1\), and the selling price is also the opening price on day \(t+1\). 3. Transaction costs are ignored because the cost of trading US stocks through brokers is relatively cheap, whether on a transaction-by-transaction basis or on a stock-by-stock basis. Fidelity Investments and Interactive Brokers, for example, charge $4.95 and $0.005 per transaction, respectively. **Compared Baselines.** We compare our proposed method with state-of-the-art sequential models as well as graph-based approaches. They are 1) non-graph-based methods, including LSTM (Hochreiter and Schmidhuber, 1997), GRU (Hochreiter and Schmidhuber, 1997), Transformer (Srivastava et al., 2015), and eLSTM (Hochreiter and Schmidhuber, 1997); and 2) graph-based methods: LSTM+GCN (Hochreiter and Schmidhuber, 1997), LSTM+RGCN (Hochreiter and Schmidhuber, 1997), TGC (Hochreiter and Schmidhuber, 1997), MAN-SF (Hochreiter and Schmidhuber, 1997), HATS (Hochreiter and Schmidhuber, 1997), REST (Hochreiter and Schmidhuber, 1997), and AD-GAT (Hochreiter and Schmidhuber, 1997). **Evaluating Metrics.** Since the goal is to accurately select the stocks with the highest returns, we use seven metrics: prediction accuracy (ACC), annual return rate (ARR), annual volatility (AV), maximum drawdown (MDD), annual Sharpe ratio (ASR), Calmar ratio (CR), and information ratio (IR) to report the performance of the baselines and our proposed model. Prediction accuracy is widely used to evaluate classification tasks such as stock movement prediction (Han et al., 2017; Hochreiter and Schmidhuber, 1997), so we calculate the prediction accuracy of all stocks for each trading day during the test period. Because ARR directly reflects the effect of the stock investment strategy, it is our main measure; it is calculated by adding up the returns of the selected stocks on each test day in a year. AV directly reflects the average risk of the investment strategy per unit time. 
MDD reflects the maximum drawdown of the investment strategy over the whole test period. ASR reflects the benefit of taking on a unit of volatility, with \(ASR=\frac{ARR}{AV}\). CR reflects the benefit of taking on a unit of drawdown, with \(CR=\frac{ARR}{\text{abs}(MDD)}\). IR reflects the excess return under additional risk. Smaller absolute values of AV and MDD and higher values of ACC, ARR, ASR, CR, and IR indicate better performance. For each method, we repeat the test process five times and report the average performance to eliminate fluctuations caused by different initializations. ### Financial Prediction In this section, we evaluate the performance of financial time series prediction and portfolio construction, which is the main task of this paper. Table 2 reports the performance through evaluation metrics such as ACC and ARR for each method on the two datasets. The first four rows of Table 2 show the performance of models that do not use graph-based technology. It is clear that none of these four methods is satisfactory; they all perform worse than the other baselines. This proves that models that do not use company relationship data cannot achieve optimal performance. Lines 5 to 11 of Table 2 show the performance of the baseline models using graph-based technology. According to line 6, LSTM+RGCN performs best among them. This proves the effectiveness of using heterogeneous graphs of inter-company relationships. Note that according to line 7, TGC's performance is also competitive and its investment strategy is less volatile. This proves the effectiveness of using the dynamic relationships between companies. According to the previous observations, the financial prediction model can be improved by using the heterogeneous or dynamic relationships of the company relation graph. Therefore, it is necessary to design an innovative model that improves the prediction performance of financial series from both dynamic and heterogeneous graph structures. According to the last row of Table 2, our proposed THGNN outperforms all baselines, which proves the superiority of the temporal and heterogeneous graph neural network in financial time series prediction. 
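For reference, the risk and return metrics defined above can be computed from a daily portfolio return series roughly as follows. This is a schematic numpy sketch under assumed conventions (252 trading days per year and additive returns, matching the paper's ARR definition), not the authors' evaluation code; IR is omitted because it requires a benchmark return series.

```python
import numpy as np

def portfolio_metrics(daily_returns: np.ndarray, trading_days: int = 252) -> dict:
    """Schematic computation of ARR, AV, MDD, ASR, and CR."""
    arr = daily_returns.sum() * trading_days / len(daily_returns)  # annualized return
    av = daily_returns.std() * np.sqrt(trading_days)               # annualized volatility
    wealth = np.cumsum(daily_returns)                              # additive cumulative return
    mdd = (wealth - np.maximum.accumulate(wealth)).min()           # maximum drawdown (<= 0)
    return {"ARR": arr, "AV": av, "MDD": mdd,
            "ASR": arr / av,        # return per unit of volatility
            "CR": arr / abs(mdd)}   # return per unit of drawdown (assumes mdd != 0)
```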
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & \multicolumn{7}{c}{S\&P 500} & \multicolumn{7}{c}{CSI 300} \\ \cline{2-8} \cline{9-15} & ACC & ARR & AV & MDD & ASR & CR & IR & ACC & ARR & AV & MDD & ASR & CR & IR \\ \hline LSTM & 0.532 & 0.377 & 0.449 & -0.382 & 0.842 & 0.989 & 0.954 & 0.515 & 0.291 & **0.318** & -0.240 & 0.915 & 1.213 & 0.877 \\ GRU & 0.530 & 0.362 & **0.445** & -0.379 & 0.813 & 0.955 & 0.934 & 0.517 & 0.312 & 0.320 & -0.243 & 0.975 & 1.284 & 0.932 \\ Transformer & 0.533 & 0.385 & 0.454 & -0.384 & 0.848 & 1.005 & 0.960 & 0.518 & 0.327 & 0.322 & -0.245 & 1.016 & 1.335 & 0.969 \\ eLSTM & 0.534 & 0.434 & 0.454 & -0.373 & 0.955 & 1.163 & 1.041 & 0.520 & 0.330 & 0.323 & -0.239 & 1.022 & 1.381 & 0.991 \\ \hline LSTM+GCN & 0.538 & 0.470 & 0.442 & -0.354 & 1.062 & 1.326 & 1.103 & 0.523 & 0.351 & 0.320 & **-0.217** & 1.097 & 1.618 & 1.119 \\ LSTM+RGCN & 0.565 & 0.558 & 0.463 & -0.366 & 1.205 & 1.522 & 1.203 & 0.536 & 0.509 & 0.326 & -0.235 & 1.561 & 2.166 & 1.537 \\ TGC & 0.552 & 0.528 & 0.455 & **-0.344** & 1.163 & 1.535 & 1.180 & 0.531 & 0.453 & 0.323 & -0.224 & 1.402 & 2.022 & 1.412 \\ MAN-SF & 0.551 & 0.527 & 0.467 & -0.357 & 1.130 & 1.478 & 1.157 & 0.527 & 0.418 & 0.334 & -0.225 & 1.251 & 1.858 & 1.282 \\ HATS & 0.541 & 0.494 & 0.466 & -0.387 & 1.060 & 1.277 & 1.110 & 0.525 & 0.385 & 0.332 & -0.249 & 1.160 & 1.546 & 1.116 \\ REST & 0.549 & 0.502 & 0.466 & -0.359 & 1.079 & 1.398 & 1.117 & 0.528 & 0.425 & 0.331 & -0.228 & 1.284 & 1.864 & 1.298 \\ AD-GAT & 0.564 & 0.535 & 0.457 & -0.371 & 1.170 & 1.444 & 1.187 & 0.539 & 0.537 & 0.329 & -0.240 & 1.632 & 2.238 & 1.596 \\ \hline THGNN & **0.579** & **0.665** & 0.468 & -0.369 & **1.421** & **1.804** & **1.340** & **0.551** & **0.632** & 0.336 & -0.237 & **1.881** & **2.667** & **1.875** \\ \hline \hline \end{tabular} \end{table} Table 2. Performance evaluation of compared models for financial time series prediction on the S&P 500 and CSI 300 datasets. ACC and ARR measure the prediction performance and portfolio return rate of each prediction model, respectively, where higher is better. AV and MDD measure the investment risk of each prediction model, where a lower absolute value is better. ASR, CR, and IR measure the profit under a unit of risk, where higher is better. ### Ablation Study In this section, we conduct ablation experiments, that is, we evaluate the performance of our method when one component is dropped. According to the first row of Table 3, THGNN-noenc cannot achieve the best performance after dropping the historical price encoding layer. This is because the encoder is responsible for extracting the temporal correlations in the historical price information. According to the second row, THGNN-notemp achieves unsatisfactory performance after dropping the temporal graph attention layer. This is because the temporal graph attention layer is responsible for dynamically adjusting the relationships between companies. Moreover, the relationships between companies change dynamically over time, especially over long periods. According to the third row, THGNN-nohete cannot achieve the best performance after dropping the heterogeneous attention mechanism. This is because the heterogeneous graph attention layer is responsible for weighing the importance of messages from different relationships. 
### Performance of the Portfolio In the performance evaluation of the portfolio strategy, we report six widely used evaluation metrics for portfolios, e.g., the annualized rate of return (ARR), annual Sharpe ratio (ASR), Calmar ratio (CR), and information ratio (IR). Then, we show the accumulative return curve to compare the investment portfolios of our model and the baselines during the test period. Following the trading protocols described in Section 4.1, we use the output of the prediction model to adjust our position day by day. Table 2 reports the performance of our model's and the baselines' portfolio strategies. It is clear that our method achieves the best performance in four of the six investment evaluation metrics. Specifically, our method performs best in terms of ARR, ASR, CR, and IR. TGC and LSTM+GCN perform better than our model in terms of AV and MDD. This shows that our proposed THGNN deliberately takes on more risk to pursue a higher risk-return ratio than the other baselines. According to Table 3, THGNN outperforms its sub-models in terms of the portfolio strategy return metrics (e.g., ARR, ASR, CR, and IR). Therefore, our model shows strong effectiveness in building profitable portfolio strategies. To further evaluate the returns of our model during the test period, we calculate the cumulative returns for each trading day. We report the cumulative return curves of our model and the other models in Figure 2. Due to space constraints, we select some representative baselines to compare with our model. We can observe that all baselines outperform the S&P 500 index. None of the models beats the others in the first four months. Starting in May, our model began to pull ahead of the other models. In the following months, our model gradually widened the gap with the other models. THGNN remained in the lead in the last month of 2020 and eventually achieved a profit on investment of more than 60 percent. Experimental results on the other baselines lead to similar conclusions. ### Parameter Sensitivity In this section, we report the experimental results of parameter sensitivity on the financial prediction task on the S&P 500 dataset with various parameters in Figure 3. According to Figure 3 (a), we can observe that the performance of the model increases with the embedding dimension. Performance peaks at 256 and then degrades rapidly. This is because the embedded information needs a suitable dimension to reduce information loss and noise. According to Figure 3 (b), the performance of the model improves slowly as the encoding output dimension increases and begins to deteriorate after the dimension reaches 128. This is because the input information dimension is low, so a low-dimensional output can already achieve good performance. According to Figure 3 (c), the performance of the model increases with the attention vector dimension and peaks when the dimension reaches 256. Continuing to increase the dimension leads to overfitting, which degrades the model. According to Figure 3 (d), we can observe that the fluctuation of the model performance is low. We set the number of attention heads to 8. We also note that increasing the number of attention heads makes model training more stable. 
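As an illustration of how the cumulative return curves in Figure 2 can be produced, below is a minimal numpy sketch of the daily buy-hold-sell protocol from Section 4.1. The data layout (a score matrix and a matrix of open-to-open holding returns) is an assumption for exposition, and transaction costs are ignored as in the protocol; holding a stock that stays top-ranked is equivalent here to selling and re-buying at the same opening price.

```python
import numpy as np

def daily_topk_backtest(scores: np.ndarray, hold_returns: np.ndarray, k: int = 10) -> np.ndarray:
    """scores[t, i]: prediction score of stock i at the close of day t.
    hold_returns[t, i]: assumed open-to-open return realized by the position
    opened from the day-t signal. Returns the cumulative return curve."""
    T = scores.shape[0]
    daily = np.empty(T)
    for t in range(T):
        picks = np.argsort(scores[t])[-k:]        # buy the top-k scored stocks
        daily[t] = hold_returns[t, picks].mean()  # equal capital every day
    return np.cumsum(daily)                       # additive cumulative return
```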
\begin{table} \begin{tabular}{l c c c c c} \hline \hline & ACC & ARR & ASR & CR & IR \\ \hline THGNN-noenc & 0.548 & 0.571 & 1.201 & 1.524 & 1.082 \\ THGNN-notemp & 0.539 & 0.486 & 0.964 & 1.292 & 0.946 \\ THGNN-nohete & 0.553 & 0.600 & 1.279 & 1.618 & 1.198 \\ \hline THGNN & **0.579** & **0.665** & **1.421** & **1.804** & **1.340** \\ \hline \hline \end{tabular} \end{table} Table 3. Performance evaluation of ablated models for financial time series prediction on the S&P 500 dataset. ACC, ARR, ASR, CR, and IR measure the prediction performance and portfolio return rate of each prediction model, where higher is better. Figure 2. The accumulated returns gained in the test set (2020) by our proposed THGNN and selected baselines. For better illustration, we select one baseline each from the non-graph-based and graph-based models. ### Interpretability of Graph Neural Network The stock price fluctuation of a listed company is not only related to the company's own historical price information, but is also affected by the companies related to it. Therefore, we need to integrate dynamic and heterogeneous relational information as input to the prediction model. In our method, the relationships between companies change dynamically over time. The strength of the relationship between companies, that is, the proportion of the message contribution, also changes over time. Our temporal graph attention mechanism can dynamically adjust the importance of each company in the graph. In addition, our heterogeneous attention mechanism can dynamically adjust the importance of each source. Therefore, our model can help predict stock price volatility more accurately, and the experimental results verify the superiority of our model's performance. Then, in order to explore the interpretability of our proposed model, we extract the graph attention weights during model prediction. We count the attention weights of all nodes during message passing on the relational graph. Under different daily returns and node degrees, we take the mean value of the attention weights and visualize the statistical results, which are reported in Figure 4. Figure 4 (a) shows the attention weights on the _pos_ graph. We can see (on the y-axis) that the nodes with higher degrees have higher average attention weights. This shows that during message passing on the _pos_ graph, nodes with higher degree, that is, nodes with more neighbors, contribute more messages to their neighbors. We also find (on the x-axis) that companies with larger fluctuations in daily returns have higher average attention weights. This shows that more volatile price changes contribute more information to their neighbors, which means that price fluctuations produce momentum spillover effects. According to Figure 4 (b), we can see that on the _neg_ graph, nodes with lower degree have higher average attention weights. This indicates that during message passing on the _neg_ graph, nodes with lower degrees contribute more messages to their neighbors. For more interpretable experimental results, we also visualize each relationship's attention weights and show the corresponding performance when using only one relationship. Specifically, we trained our proposed THGNN on the two datasets and computed the mean value of the attention weights in the heterogeneous graph attention layer. 
Then, we used each single relationship's message as the input of the prediction model to obtain the prediction performance, as illustrated in Figure 5. It is clear that the _self_ and _pos_ message sources achieve better prediction performance than _neg_. Moreover, the _pos_ message source contributes substantially to the prediction model. The reason might be that the influence between companies with similar price movements is relatively useful for predicting future price movements. Although the _neg_ message source performs unsatisfactorily when used alone to predict price movement, it still contributes to our proposed THGNN achieving state-of-the-art performance according to Table 3. We can see that the temporal graph attention layer can reveal the differences between nodes and weight them adequately, and the heterogeneous graph attention can adjust the contribution of each message source adaptively. The result demonstrates the effectiveness of graph-structure information and the interpretability of the proposed graph neural network model. Figure 4. Visualization of attention weights. The x-axis denotes the daily return of stocks; the y-axis denotes the average degree in each company relation graph. Figure 5. Prediction performance of each single message source and the corresponding attention value. Figure 3. Prediction performance of THGNN in terms of the dimension of the final embedding \(d_{att}\), the dimension of the encoding output \(d_{enc}\), the dimension of the attention vector \(\mathbf{q}\), and the number of attention heads \(h\). ### System Implementation In this section, we introduce the implementation details of our proposed method. We first show the settings of model deployment and training strategy. Then we show the web-based application, which shows how our proposed method gives customers informative advice. Our proposed THGNN is re-trained every trading day. To handle the large-scale dataset, we leverage mini-batch gradient descent optimization with a batch size of 256 and a learning rate of 3e-4. The model is implemented in PyTorch and deployed in Python and Java. Besides, we use distributed Scrapy (Pytron et al., 2017) to obtain historical stock data and utilize Neo4j (Krishnan et al., 2017) as the graph database to store relational graphs. Figure 6 shows the interface of our desktop application. The upper left part of Figure 6 is the list of stocks to be held based on the THGNN strategy. The lower left part of Figure 6 reports the price change curve of China National Petroleum Corporation (CNPC: 601857). It contains the buy and sell points suggested by our THGNN, where B denotes buying and S denotes selling. It can be seen that our model provides three buy signals and two sell signals. The lower right part of Figure 6 reports the companies relevant to CNPC. Companies with high stock price volatility correlations are marked with red arrowheads. The results show that our investment strategy provides informative advice through a relational graph approach. ## 5. Conclusion and Discussion In this paper, a new temporal and heterogeneous graph neural network model is proposed for financial time series prediction. Our method addresses the limitations of existing graph neural network works by adjusting the message contribution ratio of each node through the temporal and heterogeneous graph attention mechanisms. We evaluate the effectiveness of the proposed method comprehensively by comparing it with the most influential graph-based and non-graph-based baselines. 
In addition, THGNN performs better than the other baselines in the actual investment strategy, and the results show that our approach based on dynamic heterogeneous graphs can obtain a more profitable portfolio strategy than those based on static or homogeneous graph structures. In conclusion, this paper is the first to model inter-company relationships as a heterogeneous dynamic graph and apply it to the financial time series prediction problem. This is beneficial to broader research and innovation of graph-based technology in the financial field. On the one hand, we model the company relation graph as a truly dynamic heterogeneous graph; on the other hand, we improve the financial time series prediction model through the latest graph neural network technology. Besides, there is still room for improvement in our work on generating real-life company relation graphs. In the future, we will focus on improving the modeling of company relations to help the prediction model obtain more accurate input graph data for training. Figure 6. The desktop interface of the investment portfolio based on our proposed THGNN method. It includes price-relevant listed companies from historical data and a visualization of how our method makes predictions on buying and selling. Part (a) lists the stocks held by our method in order from highest to lowest. Part (b) shows the 'buy' and 'sell' signals generated by our trading protocols. Part (c) lists the listed companies related to China National Petroleum Corporation (CNPC: 601857) and shows which ones have a higher correlation to this company according to our generated stock correlation graph. ## Acknowledgments Dawei Cheng is supported by the NSFC 62102287; Ying Zhang is supported by ARC DP210101393.
2302.02547
A Quantum Neural Network Regression for Modeling Lithium-ion Battery Capacity Degradation
Given the high power density, low discharge rate, and decreasing cost, rechargeable lithium-ion batteries (LiBs) have found a wide range of applications such as power grid-level storage systems, electric vehicles, and mobile devices. Developing a framework to accurately model the nonlinear degradation process of LiBs, which is indeed a supervised learning problem, becomes an important research topic. This paper presents a classical-quantum hybrid machine learning approach to capture the LiB degradation model that assesses battery cell life loss from operating profiles. Our work is motivated by recent advances in quantum computers as well as the similarity between neural networks and quantum circuits. Similar to adjusting weight parameters in conventional neural networks, the parameters of the quantum circuit, namely the qubits' degree of freedom, can be tuned to learn a nonlinear function in a supervised learning fashion. As a proof of concept paper, our obtained numerical results with the battery dataset provided by NASA demonstrate the ability of the quantum neural networks in modeling the nonlinear relationship between the degraded capacity and the operating cycles. We also discuss the potential advantage of the quantum approach compared to conventional neural networks in classical computers in dealing with massive data, especially in the context of future penetration of EVs and energy storage.
Anh Phuong Ngo, Nhat Le, Hieu T. Nguyen, Abdullah Eroglu, Duong T. Nguyen
2023-02-06T03:28:25Z
http://arxiv.org/abs/2302.02547v1
# A Quantum Neural Network Regression for Modeling Lithium-ion Battery Capacity Degradation ###### Abstract Given the high power density, low discharge rate, and decreasing cost, rechargeable lithium-ion batteries (LiBs) have found a wide range of applications such as power grid-level storage systems, electric vehicles, and mobile devices. Developing a framework to accurately model the nonlinear degradation process of LiBs, which is indeed a supervised learning problem, becomes an important research topic. This paper presents a classical-quantum hybrid machine learning approach to capture the LiB degradation model that assesses battery cell life loss from operating profiles. Our work is motivated by recent advances in quantum computers as well as the similarity between neural networks and quantum circuits. Similar to adjusting weight parameters in conventional neural networks, the parameters of the quantum circuit, namely the qubits' degree of freedom, can be tuned to learn a nonlinear function in a supervised learning fashion. As a proof of concept paper, our obtained numerical results with the battery dataset provided by NASA demonstrate the ability of the quantum neural networks in modeling the nonlinear relationship between the degraded capacity and the operating cycles. We also discuss the potential advantage of the quantum approach compared to conventional neural networks in classical computers in dealing with massive data, especially in the context of future penetration of EVs and energy storage. Quantum neural network, Lithium-ion battery, battery degradation, battery life estimation. ## I Introduction Lithium-ion batteries (LiBs) are the dominant player in the battery market for electric vehicles (EVs), energy storage, and mobile devices thanks to their high energy densities, low cost, and long life cycle. However, LiBs pose a concern about their capacity degradation, which has a negative impact on the safety and reliability of these applications. As a result, estimating battery cycling capacity is essential for battery management system operation, economic and safety considerations, and life cycle assessment. Fundamentally, battery capacity degradation estimation approaches are classified into two groups: model-based and data-driven approaches [1]. On the one hand, the model-based approaches adopt parametric electrochemical processes to investigate the relationship between the cell capacity and cyclic aging of a lithium-ion battery [2]. However, the chemical processes of the elements in a LiB are irreversible and highly complex, which makes their mathematical modeling and parametric identification difficult [3]. Consequently, comprehensive results are difficult to obtain with the model-based approaches. Furthermore, every model-based technique is specifically developed for a certain type of LiB. Therefore, its extensibility to compositions other than its original one is very limited [4]. On the other hand, the data-driven approaches are non-parametric. They are based on the extraction of empirical observations of battery operation, such as voltage, current, temperature, and capacity measurements. Hence, they are scalable and extensible to adapt to variations in battery size and type. Furthermore, the rapid development of data collection techniques and computational processors has facilitated the practical application of data-driven methods. 
Numerous data-driven analytical methods have been applied in practice for the prediction of LiB degradation, such as support vector machines, relevance vector machines, Gaussian process functional regressions, and neural networks [5]. Among data-driven methods, the artificial neural network (ANN) is a powerful machine learning tool with the capability to handle big data involving complex nonlinear systems [1, 6]. With demand for EVs and electronic devices on the rise, LiBs, as one of the essential components of this mobility and connectivity boom, pose a major challenge of managing a huge amount of data. Meanwhile, classical machine learning models usually run into a performance bottleneck when trained on such heavy tasks. Recent advances in quantum computing have enabled quantum machine learning (QML) models that can potentially achieve promising performance [7, 8]. In other words, QML models use quantum computers to boost the power of machine learning [9]. The quantum neural network (QNN) is a widely used QML model composed of parameterized, learnable quantum circuits [10]. In a QNN, the classical features are encoded into quantum states by using angle encoding. The quantum states are then used as input for a QNN model composed of layers of the learnable quantum circuit [11, 12]. The quantum circuit is constructed from a series of parameterized rotation gates along the axes, and the measurements of the qubits are decoded into classical values in the output layer. This work develops a QNN regression model to predict the deterioration of battery capacity with the purpose of assessing the potential quantum advantage in such learning tasks for batteries. This paper is organized as follows. Section II presents the basics of how a lithium-ion battery works and its capacity degradation model. We carefully explain the similarities and differences between QNN and ANN models in Section III. The numerical results for the QNN model are given in Section IV. Finally, Section V concludes the paper and discusses possible future research. ## II Battery capacity degradation ### _Basics of Lithium-ion Battery Model_ A LiB is a type of energy storage technology constructed of an anode, cathode, separator, electrolyte, and two current collectors, one positive and one negative. A LiB utilizes insertion reactions for the cathode and anode with lithium ions as the charge carriers. The anode delivers lithium ions to the cathode, which initiates a flow of electrons between the two components. In simple words, a LiB employs its \(Li^{+}\) ions as a key component of its electrochemistry [2]. An example of this type of electrochemical reaction is the lithium manganese oxide spinel \(LiMnO_{2}\), illustrated as follows [13]: \[\begin{cases}LiMnO_{2}\to Li_{1-x}MnO_{2}+xLi^{+}+xe^{-},&\text{at electrode}^{+}\\ mC+xLi^{+}+xe^{-}\to Li_{x}C_{m},&\text{at electrode}^{-}\end{cases}\] ### _Li-ion Battery Capacity Degradation Behavior_ The capacity degradation phenomenon is constituted by various factors: (i) the formation of the solid-electrolyte interface (SEI) layer and (ii) the irreversible absorption of \(Li^{+}\) ions at the host material. As such, the thickness of the SEI layer keeps increasing over time, because it passively absorbs \(Li^{+}\) ions during the charging and discharging processes over the battery life span. A LiB is considered dead when its capacity drops to a failure threshold defined by its manufacturer [2]. 
While the degradation of a LiB can be theoretically explained by the loss of lithium ions and other active materials, it is difficult to link such molecular-level degradation processes to the operational pattern of energy storage, particularly the charging and discharging cycles [3]. Machine learning with empirical datasets becomes a promising approach for estimating the LiB capacity deterioration. ### _Regression models for the Li-ion Battery Capacity Degradation Estimation_ The problem of modeling the degraded battery capacity induced by operational cycles can be considered a regression problem in the field of supervised learning. Let \(Y=(y_{1},y_{2},\cdots,y_{n})^{\top}\in\mathbb{R}^{n}\) be the historical capacity measurements of a LiB over the cycles \(X=(x_{1},x_{2},\cdots,x_{n})^{\top}\). We need to construct a nonlinear mapping function \(y=f_{\theta}(x)\) that maps the operational cycle \(x\) to the remaining capacity \(y\), where \(\theta\) is the vector of parameters in the hypothesis function \(f_{\theta}(x)\). The problem boils down to finding the optimal \(\theta^{*}\) that minimizes the loss function between the estimated \(\hat{y}=f_{\theta}(x)\) and the measured capacity \(y\) of the battery. Two popular loss functions are Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE): \[RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}-\hat{y}_{i}\right)^{2}} \tag{1}\] \[MAPE(\%)=\frac{1}{N}\sum_{i=1}^{N}\frac{|y_{i}-\hat{y}_{i}|}{y_{i}}\times 100 \tag{2}\] where \(y_{i}\) is the real measured capacity and \(\hat{y}_{i}\) is the predicted capacity at cycle \(i\in N\). Figure 1 presents a typical neural network used for regression in classical computers. The dataset, e.g., the empirical battery data, is first fed to the feature-embedding layer \(\mathcal{F}(.)\), which consists of a sequence of operations and combinations (e.g., convolution and attention) mapping it to the feature space. Then the hidden layers \(W_{l}(.)\), typically \(l\) fully connected layers, take the featured data and produce the estimation results. The regression problem boils down to the problem of tuning the weight factors of the neurons in \(W_{l}(.)\) such that the loss function (e.g., RMSE or MAPE) is minimized. This can be addressed by several effective numerical algorithms developed in the literature such as BFGS (the Broyden–Fletcher–Goldfarb–Shanno algorithm), ADAM (the adaptive moment estimation algorithm), and Nelder-Mead [6]. ## III Quantum neural network regression ### _Overview of Quantum Neural Network_ Quantum neural networks are motivated by (i) the huge success of neural networks, particularly deep learning, in solving real-world problems and (ii) recent advances in quantum computing [11, 12]. They emerge from the similarity between the quantum circuit and the neural network architecture, as illustrated in Figure 2. #### III-A1 Quantum bit QNN utilizes a quantum bit (qubit) in the role of a neuron in the classical neural network. Unlike a binary digit in classical computers, a single qubit can be in a superposition of the two basis states \(|0\rangle\) and \(|1\rangle\)[9], i.e., it can be represented in a two-dimensional complex vector space as follows: \[|\psi\rangle=\alpha|0\rangle+\beta|1\rangle:\alpha,\beta\in\mathbb{C}\wedge| \alpha|^{2}+|\beta|^{2}=1. \tag{3}\] Consequently, the state of a qubit can be interpreted as a point on a complex sphere with radius 1, i.e., the Bloch sphere in Figure 2(a). 
In other words, its state can be represented by two angles \(\theta\) and \(\varphi\), which are called its two degrees of freedom, and tuning \(\theta\) alters the state of the qubit. #### III-A2 Quantum Circuit A quantum circuit is a sequence of quantum gates, measurements, initializations of qubits to known values, and other actions that represent a quantum computation. On a circuit, a sequence of qubits is encoded as \(q_{0},q_{1},\ldots,q_{n-1}\), where \(n\) is the number of qubits. A qubit can be rotated along the \(x\)-, \(y\)-, and \(z\)-axes and then measured, yielding probability distributions over \(|0\rangle\) and \(|1\rangle\). Fig. 1: Classical neural network model. The probability depends on the internal state of the qubit, more specifically, on the angle \(\theta\) between the state vector \(|\psi\rangle\) and the measured axis. Therefore, a numerical feature can be encoded into a qubit by a mapping \(f:\mathbb{R}\rightarrow[0,\pi]\). The states of the qubit can be modulated by quantum gates, such as a single-qubit gate (e.g., the Pauli-X gate) or a multi-qubit gate (e.g., the two-qubit CNOT gate). Quantum gates are unitary operators and are represented as unitary matrices in some computational basis. Here, the \(R_{X}\), \(R_{Y}\), and \(R_{Z}\) gates are rotation operators representing a single-qubit rotation through angle \(\theta\) (radians) around the \(x\)-axis, \(y\)-axis, and \(z\)-axis, respectively: \[R_{X}(\theta)=\begin{bmatrix}\cos\frac{\theta}{2}&-i\sin\frac{\theta}{2}\\ -i\sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{bmatrix} \tag{4}\] \[R_{Y}(\theta)=\begin{bmatrix}\cos\frac{\theta}{2}&-\sin\frac{\theta}{2}\\ \sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{bmatrix} \tag{5}\] \[R_{Z}(\theta)=\begin{bmatrix}e^{-i\frac{\theta}{2}}&0\\ 0&e^{i\frac{\theta}{2}}\end{bmatrix} \tag{6}\] A digital error model (DEM) is used to model the quantum gate and readout errors and characterizes circuit performance by a set of Pauli errors [8]. Finally, a measurement component is compulsory to read out the result of the quantum computation. The output of the measurement component is a classical value drawn from a probability distribution governed by the Born rule of quantum theory [9]. ### _QNN Implementation and training procedure_ The structure of the QNN represented in Figure 2(b) follows the typical structure of the neural network shown in Figure 1. The first stage, which resembles the feature-embedding layer in the classical neural network, is _the encoding quantum circuit_ \(U(X)\). It is used to encode classical data for use in the quantum computer. This stage plays a vital role in adopting quantum algorithms to solve classical problems, particularly in quantum deep learning tasks. _The trainable quantum circuit_ resembles the classical counterpart's hidden layer, where tuning the circuit parameters \(\theta\) changes the output of the quantum measurement. _The quantum measurement_, in turn, acts like the output layer in the classical neural network, i.e., its output after decoding represents the prediction result. The quantum circuit of the QNN model can be run in a quantum simulator or on actual quantum hardware (e.g., IBM Quantum servers, Amazon Braket, Rigetti Computing, D-Wave, and Strawberry Fields) [14, 15]. However, the availability of quantum hardware is currently limited, e.g., the IBM server only allows 7 qubits for free applications [16, 17]. Computer-simulator back-ends for quantum circuits thus become useful for proof-of-concept research. 
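The rotation gates of Eqs. (4)-(6) are small enough to write down directly. The following numpy sketch, an illustrative stand-in not tied to any particular quantum SDK, constructs them and checks that each is unitary:

```python
import numpy as np

def rx(theta: float) -> np.ndarray:
    """Eq. (4): single-qubit rotation about the x-axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(theta: float) -> np.ndarray:
    """Eq. (5): single-qubit rotation about the y-axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(theta: float) -> np.ndarray:
    """Eq. (6): single-qubit rotation about the z-axis."""
    return np.array([[np.exp(-1j * theta / 2), 0], [0, np.exp(1j * theta / 2)]])

# every rotation gate is unitary: U U^dagger = I (up to round-off)
for gate in (rx, ry, rz):
    U = gate(0.7)
    assert np.allclose(U @ U.conj().T, np.eye(2))
```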
This paper uses the quantum simulator back-end provided by Qulacs [18] and trains the QNN using the hybrid classical-quantum procedure with the BFGS optimizer [11]: * Prepare the training data and encode it into the quantum state vectors \(|\psi_{in}\rangle\) using angle encoding. The input state is obtained as \(|\psi_{in}\rangle=U(x)|0\rangle^{\otimes n}\). * The \(\theta\)-parameterized unitary \(U(\theta)\) is applied to the input state and generates an output state \(|\psi_{out}\rangle\). * Measure the expectation values of some chosen observable. For instance, the \(Z\) expectation of the second qubit is denoted as \(\langle Z_{2}\rangle=\langle\psi_{out}|Z_{2}|\psi_{out}\rangle\). * Minimize the cost function \(L\) between the real measured capacity \(y_{i}\) and the prediction \(\hat{y}_{i}\) by tuning the quantum circuit parameters \(\theta\) iteratively until \(\theta=\theta^{*}\). Then, \(y(x,\theta^{*})\) is the desired prediction model. * Evaluate the accuracy of the QNN regression model by validating the cost function with a testing data set. ### _Potential advantages of the quantum neural network over classical neural network_ Although quantum neural networks (QNNs) and their classical counterparts have similar features, there are some differences, resulting in possible QNN advantages [10]. In order to construct a complex model for high-precision prediction, classical neural networks require the use of nonlinear basis functions or kernel tricks, leading to a significant increase in computational cost. Fig. 2: Estimate Lithium-ion battery degradation using Quantum Neural Network. In contrast, leveraging quantum mechanics, a QNN can directly employ an exponential number of functions with respect to the number of qubits, thus capturing nonlinearity without using nonlinear activation functions. Indeed, the trainable quantum circuit \(U(\theta)\) already has strong expressive power, which potentially enables a wide range of applications in capturing nonlinearities that are intractable for classical counterparts [19]. ## IV Numerical results ### _Li-ion battery data pre-processing_ The datasets used in this research are collected from the NASA Prognostics Center of Excellence Data Set Repository [20]. The datasets contain 34 different batteries with more than 6 charging and discharging profiles, conducted on lithium nickel manganese cobalt oxide batteries. Batteries were charged and discharged at different temperatures, and the impedance was measured at every cycle. The batteries have 18650 lithium-ion cells with 2-Ah capacity for B05, B06, and B18, and 1.35-Ah capacity for B56, for which the charging and discharging cycle experiments were conducted repeatedly to achieve accelerated aging. A battery whose state of health (SoH) falls below \(70\%\) is considered discarded because its operational performance is no longer reliable [5]. In other words, the experiments are stopped when the battery loses over \(30\%\) of its rated capacity. ### _QNN Regression Model_ We chose the batteries B05, B06, B18, and B56 to validate the quantum regression model and the quantum circuit. Here, the data of B05, B06, and B18 are quite clustered, while the data of B56 are noisy. The training size was set to 80% of the original data for the first experiments on B05, B06, B18, and B56. We conduct the experiments on the quantum simulator powered by a classical computer with 32 GB RAM and an Intel Xeon processor. 
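To make the hybrid loop above concrete, here is a deliberately minimal, library-agnostic numpy sketch with a single qubit and synthetic data; the paper itself uses the Qulacs simulator, multi-qubit circuits, and the NASA measurements. The encoding circuit, trainable circuit, \(\langle Z\rangle\) read-out, and BFGS outer loop appear in the same order as the listed steps.

```python
import numpy as np
from scipy.optimize import minimize

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])  # observable for the <Z> expectation

def predict(x, params):
    state = np.array([1.0, 0.0])      # start in |0>
    state = ry(np.pi * x) @ state     # encoding circuit U(x), angle encoding
    for theta in params:              # trainable circuit U(theta)
        state = ry(theta) @ state
    return float(state @ Z @ state)   # expectation <Z> (state stays real here)

def loss(params, xs, ys):             # squared loss, cf. the RMSE in Eq. (1)
    return np.mean([(predict(x, params) - y) ** 2 for x, y in zip(xs, ys)])

# synthetic stand-in for normalized (cycle, capacity) pairs -- not the NASA data;
# a single-qubit toy model, whereas real circuits stack multi-qubit layers
xs = np.linspace(0.0, 1.0, 30)
ys = 1.0 - 0.4 * xs ** 1.5
res = minimize(loss, x0=np.zeros(3), args=(xs, ys), method="BFGS")
print("trained parameters:", res.x, "final loss:", res.fun)
```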
The capacity aging prediction patterns and the original capacity measurements from the four LiBs, shown in Figure 3, are positively correlated. The QNN model works well with data of different characteristics; it is neither over-fitted nor under-fitted and achieves good RMSE and MAPE metrics. We examine the impact of training data size on the performance of the QNN. Figure 4 shows results for cross-validation of the accuracy of the QNN-based capacity degradation estimation for several different sizes (i.e., 60%, 70%, 80%, and 90%) of the training sample. The obtained RMSE and MAPE are quite small, which highlights the ability of the neural network empowered by the quantum circuit to capture the nonlinear dependence of battery capacity on the operational cycle. As the different kinds of capacity degradation paths shown in the figure indicate, the training data size also influences the prediction results. In detail, a larger training data set leads to a more coherent prediction. Figure 5 shows the correlation of the depth and the number of qubits with the accuracy of the QNN model. In particular, the depth and the number of qubits are considered hyper-parameters of the QNN since they control the architecture or topology of the QNN. Fig. 3: Estimation of Lithium-ion batteries' degradation using QNN. The obtained results show that once the number of qubits and the depth increase beyond a certain amount, the RMSE and MAPE indicators drop only insignificantly. Meanwhile, a quantum circuit with a large number of qubits and a large depth is computationally expensive. In other words, similar to the classical neural network, there is a trade-off between the complexity of the QNN and the obtained performance, which should be considered carefully in the design phase. ## V Conclusion This paper studied the quantum neural network regression model for predicting battery life, in which the qubit resembles the neuron in classical machine learning. The QNN inherits the principle and structure of the classical neural network, and the training of a QNN boils down to tuning the parameters of the qubits in the quantum circuits, which is similar to tuning the weights of classical neurons. The QNN's expressive ability is empowered by the capability of qubits to capture complex nonlinearity. Similar to the classical neural network, the performance of the QNN also depends on design parameters such as the number of qubits and the depth of the quantum circuit. Numerical results show that the QNN can be a very promising approach for modeling battery usage, especially for the future penetration of EVs and energy storage under decarbonization policies.